(base) PS F:\博士开题\YOLO\异常信号二次筛选\PS\ch1> f:; cd 'f:\博士开题\YOLO\异常信号二次筛选\PS\ch1'; & 'd:\anaconda1\python.exe' 'c:\Users\lejmj\.vscode\extensions\ms-python.debugpy-2025.15.2025101002-win32-x64\bundled\libs\debugpy\launcher' '57528' '--' 'F:\博士开题\YOLO\异常信号二次筛选\PS\ch1\D1.py'
Training set samples: 3296
Validation set samples: 825
🚀 Starting to train model: CNN
(base) PS F:\博士开题\YOLO\异常信号二次筛选\PS\ch1> f:; cd 'f:\博士开题\YOLO\异常信号二次筛选\PS\ch1'; & 'd:\anaconda1\python.exe' 'c:\Users\lejmj\.vscode\extensions\ms-python.debugpy-2025.15.2025101002-win32-x64\bundled\libs\debugpy\launcher' '55626' '--' 'F:\博士开题\YOLO\异常信号二次筛选\PS\ch1\main.py'
Checking whether paths exist:
Normal-signal path: False
Abnormal-signal path: False
(base) PS F:\博士开题\YOLO\异常信号二次筛选\PS\ch1> f:; cd 'f:\博士开题\YOLO\异常信号二次筛选\PS\ch1'; & 'd:\anaconda1\python.exe' 'c:\Users\lejmj\.vscode\extensions\ms-python.debugpy-2025.15.2025101002-win32-x64\bundled\libs\debugpy\launcher' '55654' '--' 'F:\博士开题\YOLO\异常信号二次筛选\PS\ch1\main.py'
Checking whether paths exist:
Normal-signal path: True
Abnormal-signal path: True
Found 157 normal-signal images
Found 3964 abnormal-signal images
Total samples: 4121
Starting image preprocessing...
Processing progress: 100/4121
... (progress printed every 100 images) ...
Processing progress: 4100/4121
Successfully processed 4121 images
Image shape: (224, 224, 3)
Class distribution: {0: 157, 1: 3964}
Training set: 2636 samples
Validation set: 660 samples
Test set: 825 samples
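The class distribution above is heavily imbalanced (157 normal vs. 3964 abnormal), so near-perfect accuracy is easy to reach while still missing the rare class. One common mitigation is per-class loss weighting; below is a minimal sketch of sklearn-style "balanced" weights (the `balanced_class_weights` helper is illustrative, not taken from the script):

```python
from collections import Counter

def balanced_class_weights(labels):
    """n_samples / (n_classes * count_c), as in sklearn's class_weight='balanced'."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

labels = [0] * 157 + [1] * 3964           # class distribution from the log
weights = balanced_class_weights(labels)
# the rare "normal" class (0) gets a ~25x larger weight than class 1
```

Such a dict could be passed to Keras via `model.fit(..., class_weight=weights)`, or folded into the loss weighting in PyTorch.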
=== Training simple CNN model (TensorFlow) ===
Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 conv2d (Conv2D)                 (None, 222, 222, 32)      896
 max_pooling2d (MaxPooling2D)    (None, 111, 111, 32)      0
 conv2d_1 (Conv2D)               (None, 109, 109, 64)      18496
 max_pooling2d_1 (MaxPooling2D)  (None, 54, 54, 64)        0
 conv2d_2 (Conv2D)               (None, 52, 52, 64)        36928
 flatten (Flatten)               (None, 173056)            0
 dense (Dense)                   (None, 64)                11075648
 dropout (Dropout)               (None, 64)                0
 dense_1 (Dense)                 (None, 1)                 65
=================================================================
Total params: 11,132,033
Trainable params: 11,132,033
Non-trainable params: 0
_________________________________________________________________
None
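The parameter counts in the summary can be verified by hand from the layer shapes; a quick sketch using the sizes read off the table above (helper names are illustrative). Note the trailing `None` in the log comes from `print(model.summary())`: `summary()` prints the table itself and returns `None`.

```python
def conv2d_params(k, c_in, c_out):
    """Conv2D with bias: (k*k*c_in + 1) * c_out parameters."""
    return (k * k * c_in + 1) * c_out

def dense_params(f_in, f_out):
    return (f_in + 1) * f_out

total = (conv2d_params(3, 3, 32)           # conv2d:   896
         + conv2d_params(3, 32, 64)        # conv2d_1: 18496
         + conv2d_params(3, 64, 64)        # conv2d_2: 36928
         + dense_params(52 * 52 * 64, 64)  # dense: 11075648 (flatten -> 173056)
         + dense_params(64, 1))            # dense_1:  65
# total == 11132033, matching "Total params: 11,132,033"
```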
Epoch 1/30
83/83 [==============================] - 64s 755ms/step - loss: 0.1381 - accuracy: 0.9700 - val_loss: 0.0055 - val_accuracy: 0.9985 - lr: 0.0010
Epoch 2/30
83/83 [==============================] - 60s 719ms/step - loss: 0.0263 - accuracy: 0.9913 - val_loss: 0.0162 - val_accuracy: 0.9970 - lr: 0.0010
Epoch 3/30
83/83 [==============================] - 59s 707ms/step - loss: 0.0190 - accuracy: 0.9985 - val_loss: 0.0017 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 4/30
83/83 [==============================] - 59s 716ms/step - loss: 0.0147 - accuracy: 0.9992 - val_loss: 0.0045 - val_accuracy: 0.9970 - lr: 0.0010
Epoch 5/30
83/83 [==============================] - 58s 705ms/step - loss: 0.0408 - accuracy: 0.9958 - val_loss: 0.0051 - val_accuracy: 0.9985 - lr: 0.0010
Epoch 6/30
83/83 [==============================] - 58s 704ms/step - loss: 0.0283 - accuracy: 0.9951 - val_loss: 0.0086 - val_accuracy: 0.9970 - lr: 0.0010
Epoch 7/30
83/83 [==============================] - 59s 715ms/step - loss: 0.0098 - accuracy: 0.9973 - val_loss: 0.0255 - val_accuracy: 0.9970 - lr: 0.0010
Epoch 8/30
83/83 [==============================] - 60s 719ms/step - loss: 0.0347 - accuracy: 0.9981 - val_loss: 0.0112 - val_accuracy: 0.9955 - lr: 0.0010
Epoch 9/30
83/83 [==============================] - 59s 715ms/step - loss: 0.0141 - accuracy: 0.9992 - val_loss: 0.0126 - val_accuracy: 0.9970 - lr: 5.0000e-04
Epoch 10/30
83/83 [==============================] - 59s 710ms/step - loss: 0.0039 - accuracy: 0.9981 - val_loss: 0.0102 - val_accuracy: 0.9970 - lr: 5.0000e-04
Epoch 11/30
83/83 [==============================] - 58s 705ms/step - loss: 0.0018 - accuracy: 0.9996 - val_loss: 0.0099 - val_accuracy: 0.9970 - lr: 5.0000e-04
Epoch 12/30
83/83 [==============================] - 59s 708ms/step - loss: 0.0058 - accuracy: 0.9996 - val_loss: 0.0169 - val_accuracy: 0.9970 - lr: 5.0000e-04
Epoch 13/30
83/83 [==============================] - 58s 704ms/step - loss: 8.5977e-04 - accuracy: 0.9996 - val_loss: 0.0029 - val_accuracy: 0.9985 - lr: 5.0000e-04
Simple CNN test accuracy: 0.9988
=== Training deep CNN model (TensorFlow) ===
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                                Output Shape           Param #
=================================================================
 conv2d_3 (Conv2D)                           (None, 222, 222, 32)   896
 batch_normalization (BatchNormalization)    (None, 222, 222, 32)   128
 max_pooling2d_2 (MaxPooling2D)              (None, 111, 111, 32)   0
 conv2d_4 (Conv2D)                           (None, 109, 109, 64)   18496
 batch_normalization_1 (BatchNormalization)  (None, 109, 109, 64)   256
 max_pooling2d_3 (MaxPooling2D)              (None, 54, 54, 64)     0
 conv2d_5 (Conv2D)                           (None, 52, 52, 128)    73856
 batch_normalization_2 (BatchNormalization)  (None, 52, 52, 128)    512
 max_pooling2d_4 (MaxPooling2D)              (None, 26, 26, 128)    0
 conv2d_6 (Conv2D)                           (None, 24, 24, 256)    295168
 batch_normalization_3 (BatchNormalization)  (None, 24, 24, 256)    1024
 global_average_pooling2d (GlobalAveragePooling2D) (None, 256)      0
 dense_2 (Dense)                             (None, 128)            32896
 dropout_1 (Dropout)                         (None, 128)            0
 dense_3 (Dense)                             (None, 1)              129
=================================================================
Total params: 423,361
Trainable params: 422,401
Non-trainable params: 960
_________________________________________________________________
None
Epoch 1/30
83/83 [==============================] - 106s 1s/step - loss: 0.1259 - accuracy: 0.9753 - val_loss: 0.2609 - val_accuracy: 0.9621 - lr: 0.0010
Epoch 2/30
83/83 [==============================] - 103s 1s/step - loss: 0.0300 - accuracy: 0.9920 - val_loss: 0.1903 - val_accuracy: 0.9621 - lr: 0.0010
Epoch 3/30
83/83 [==============================] - 105s 1s/step - loss: 0.0131 - accuracy: 0.9970 - val_loss: 0.3777 - val_accuracy: 0.9621 - lr: 0.0010
Epoch 4/30
83/83 [==============================] - 104s 1s/step - loss: 0.0132 - accuracy: 0.9966 - val_loss: 0.1598 - val_accuracy: 0.9621 - lr: 0.0010
Epoch 5/30
83/83 [==============================] - 104s 1s/step - loss: 0.0036 - accuracy: 0.9996 - val_loss: 0.5251 - val_accuracy: 0.9621 - lr: 0.0010
Epoch 6/30
83/83 [==============================] - 104s 1s/step - loss: 0.0064 - accuracy: 0.9977 - val_loss: 0.4136 - val_accuracy: 0.9621 - lr: 0.0010
Epoch 7/30
83/83 [==============================] - 103s 1s/step - loss: 0.0027 - accuracy: 0.9996 - val_loss: 0.0622 - val_accuracy: 0.9864 - lr: 0.0010
Epoch 8/30
83/83 [==============================] - 103s 1s/step - loss: 0.0042 - accuracy: 0.9992 - val_loss: 2.4309 - val_accuracy: 0.4212 - lr: 0.0010
Epoch 9/30
83/83 [==============================] - 103s 1s/step - loss: 3.2227e-04 - accuracy: 1.0000 - val_loss: 0.0398 - val_accuracy: 0.9894 - lr: 0.0010
Epoch 10/30
83/83 [==============================] - 103s 1s/step - loss: 0.0055 - accuracy: 0.9992 - val_loss: 2.7218 - val_accuracy: 0.4485 - lr: 0.0010
Epoch 11/30
83/83 [==============================] - 105s 1s/step - loss: 5.3121e-04 - accuracy: 1.0000 - val_loss: 0.0381 - val_accuracy: 0.9864 - lr: 0.0010
Epoch 12/30
83/83 [==============================] - 106s 1s/step - loss: 2.7901e-04 - accuracy: 1.0000 - val_loss: 3.2564e-04 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 13/30
83/83 [==============================] - 103s 1s/step - loss: 1.4393e-04 - accuracy: 1.0000 - val_loss: 3.0266e-05 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 14/30
83/83 [==============================] - 103s 1s/step - loss: 1.0170e-04 - accuracy: 1.0000 - val_loss: 0.0023 - val_accuracy: 0.9985 - lr: 0.0010
Epoch 15/30
83/83 [==============================] - 103s 1s/step - loss: 1.3439e-04 - accuracy: 1.0000 - val_loss: 0.0537 - val_accuracy: 0.9848 - lr: 0.0010
Epoch 16/30
83/83 [==============================] - 103s 1s/step - loss: 1.1198e-04 - accuracy: 1.0000 - val_loss: 0.0103 - val_accuracy: 0.9955 - lr: 0.0010
Epoch 17/30
83/83 [==============================] - 103s 1s/step - loss: 5.8652e-05 - accuracy: 1.0000 - val_loss: 0.0935 - val_accuracy: 0.9667 - lr: 0.0010
Epoch 18/30
83/83 [==============================] - 103s 1s/step - loss: 7.4556e-05 - accuracy: 1.0000 - val_loss: 0.0354 - val_accuracy: 0.9924 - lr: 0.0010
Epoch 19/30
83/83 [==============================] - 102s 1s/step - loss: 3.4750e-05 - accuracy: 1.0000 - val_loss: 0.0050 - val_accuracy: 0.9985 - lr: 5.0000e-04
Epoch 20/30
83/83 [==============================] - 104s 1s/step - loss: 5.1109e-05 - accuracy: 1.0000 - val_loss: 0.0034 - val_accuracy: 0.9985 - lr: 5.0000e-04
Epoch 21/30
83/83 [==============================] - 102s 1s/step - loss: 6.8591e-05 - accuracy: 1.0000 - val_loss: 9.5372e-04 - val_accuracy: 1.0000 - lr: 5.0000e-04
Epoch 22/30
83/83 [==============================] - 103s 1s/step - loss: 5.3082e-05 - accuracy: 1.0000 - val_loss: 6.3523e-04 - val_accuracy: 1.0000 - lr: 5.0000e-04
Epoch 23/30
83/83 [==============================] - 103s 1s/step - loss: 4.1724e-05 - accuracy: 1.0000 - val_loss: 0.0013 - val_accuracy: 0.9985 - lr: 5.0000e-04
Deep CNN test accuracy: 1.0000
=== Training pretrained model (TensorFlow) ===
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                                         Output Shape        Param #
=================================================================
 mobilenetv2_1.00_224 (Functional)                    (None, 7, 7, 1280)  2257984
 global_average_pooling2d_1 (GlobalAveragePooling2D)  (None, 1280)        0
 dense_4 (Dense)                                      (None, 128)         163968
 dropout_2 (Dropout)                                  (None, 128)         0
 dense_5 (Dense)                                      (None, 1)           129
=================================================================
Total params: 2,422,081
Trainable params: 164,097
Non-trainable params: 2,257,984
_________________________________________________________________
None
Epoch 1/20
83/83 [==============================] - 42s 451ms/step - loss: 0.0863 - accuracy: 0.9753 - val_loss: 0.0046 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 2/20
83/83 [==============================] - 36s 434ms/step - loss: 0.0083 - accuracy: 0.9985 - val_loss: 0.0017 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 3/20
83/83 [==============================] - 36s 433ms/step - loss: 0.0031 - accuracy: 0.9996 - val_loss: 7.3547e-04 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 4/20
83/83 [==============================] - 36s 432ms/step - loss: 0.0023 - accuracy: 0.9996 - val_loss: 0.0012 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 5/20
83/83 [==============================] - 36s 434ms/step - loss: 0.0027 - accuracy: 0.9996 - val_loss: 4.6097e-04 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 6/20
83/83 [==============================] - 36s 433ms/step - loss: 0.0022 - accuracy: 0.9996 - val_loss: 4.8222e-04 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 7/20
83/83 [==============================] - 36s 437ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.0011 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 8/20
83/83 [==============================] - 36s 436ms/step - loss: 9.3960e-04 - accuracy: 1.0000 - val_loss: 1.9397e-04 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 9/20
83/83 [==============================] - 36s 434ms/step - loss: 6.6778e-04 - accuracy: 1.0000 - val_loss: 8.4425e-04 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 10/20
83/83 [==============================] - 36s 436ms/step - loss: 4.4138e-04 - accuracy: 1.0000 - val_loss: 2.6525e-04 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 11/20
83/83 [==============================] - 36s 436ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 1.2021e-04 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 12/20
83/83 [==============================] - 36s 434ms/step - loss: 9.6049e-04 - accuracy: 0.9996 - val_loss: 1.3571e-04 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 13/20
83/83 [==============================] - 36s 435ms/step - loss: 4.1769e-04 - accuracy: 1.0000 - val_loss: 2.5014e-04 - val_accuracy: 1.0000 - lr: 0.0010
Epoch 14/20
83/83 [==============================] - 36s 433ms/step - loss: 5.9524e-04 - accuracy: 0.9996 - val_loss: 2.1576e-04 - val_accuracy: 1.0000 - lr: 5.0000e-04
Epoch 15/20
83/83 [==============================] - 37s 448ms/step - loss: 0.0014 - accuracy: 0.9992 - val_loss: 0.0025 - val_accuracy: 0.9985 - lr: 5.0000e-04
Epoch 16/20
83/83 [==============================] - 36s 434ms/step - loss: 4.3109e-04 - accuracy: 1.0000 - val_loss: 8.8710e-04 - val_accuracy: 1.0000 - lr: 5.0000e-04
Epoch 17/20
83/83 [==============================] - 36s 433ms/step - loss: 3.1498e-04 - accuracy: 1.0000 - val_loss: 6.9781e-04 - val_accuracy: 1.0000 - lr: 5.0000e-04
Epoch 18/20
83/83 [==============================] - 36s 434ms/step - loss: 1.6871e-04 - accuracy: 1.0000 - val_loss: 4.8953e-04 - val_accuracy: 1.0000 - lr: 5.0000e-04
Epoch 19/20
83/83 [==============================] - 36s 434ms/step - loss: 7.0924e-04 - accuracy: 1.0000 - val_loss: 2.1822e-04 - val_accuracy: 1.0000 - lr: 2.5000e-04
Epoch 20/20
83/83 [==============================] - 36s 433ms/step - loss: 3.8044e-04 - accuracy: 1.0000 - val_loss: 2.9509e-04 - val_accuracy: 1.0000 - lr: 2.5000e-04
Pretrained model test accuracy: 0.9988
=== Training simple CNN model (PyTorch) ===
SimpleCNN(
(conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=50176, out_features=64, bias=True)
(fc2): Linear(in_features=64, out_features=1, bias=True)
(dropout): Dropout(p=0.5, inplace=False)
(relu): ReLU()
(sigmoid): Sigmoid()
)
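The `in_features=50176` of `fc1` follows from applying three conv (3x3, stride 1, padding 1) plus 2x2 max-pool stages to the 224x224 input; a sketch of the arithmetic (helper names are illustrative):

```python
def conv_out(size, k=3, s=1, p=1):
    return (size + 2 * p - k) // s + 1    # padding-1 3x3 conv keeps the size

def pool_out(size, k=2, s=2):
    return (size - k) // s + 1            # 2x2 max-pool halves the size

size = 224
for _ in range(3):                        # conv1..conv3, each followed by pool
    size = pool_out(conv_out(size))
flat_features = size * size * 64          # 64 channels after conv3
# flat_features == 50176 == 28 * 28 * 64, matching fc1's in_features
```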
Using device: cpu
Epoch [10/30], Loss: 0.0095, Accuracy: 100.00%
Epoch [20/30], Loss: 0.0469, Accuracy: 99.85%
Epoch [30/30], Loss: 0.0416, Accuracy: 99.85%
Simple CNN (PyTorch) test accuracy: 99.88%
=== Training ResNet-style model (PyTorch) ===
ResNetStyle(
(conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(layer1): ResidualBlock(
(conv1): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shortcut): Sequential(
(0): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer2): ResidualBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shortcut): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer3): ResidualBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shortcut): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(global_avg_pool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=256, out_features=1, bias=True)
(sigmoid): Sigmoid()
)
Using device: cpu
Epoch [10/30], Loss: 0.0007, Accuracy: 77.27%
Epoch [20/30], Loss: 0.0001, Accuracy: 99.85%
Epoch [30/30], Loss: 0.0000, Accuracy: 99.85%
ResNet-style model test accuracy: 100.00%
=== Training attention model (PyTorch) ===
AttentionCNN(
(conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(attention1): AttentionModule(
(channel_attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(32, 4, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU()
(3): Conv2d(4, 32, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
(conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(attention2): AttentionModule(
(channel_attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(64, 8, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU()
(3): Conv2d(8, 64, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
(conv3): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(attention3): AttentionModule(
(channel_attention): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(128, 16, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU()
(3): Conv2d(16, 128, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(global_avg_pool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=128, out_features=1, bias=True)
(relu): ReLU()
(sigmoid): Sigmoid()
(dropout): Dropout(p=0.5, inplace=False)
)
Using device: cpu
Epoch [10/30], Loss: 0.0086, Accuracy: 96.21%
Epoch [20/30], Loss: 0.0019, Accuracy: 96.36%
Epoch [30/30], Loss: 0.0006, Accuracy: 99.85%
Attention model test accuracy: 100.00%
==================================================
Model performance comparison:
==================================================
Simple_CNN_TF: 0.9988
Deep_CNN_TF: 1.0000
Pretrained_TF: 0.9988
Simple_CNN_PT: 0.9988
ResNet_PT: 1.0000
Attention_CNN_PT: 1.0000
F:\博士开题\YOLO\异常信号二次筛选\PS\ch1\main.py:615: UserWarning: Glyph 27169 (\N{CJK UNIFIED IDEOGRAPH-6A21}) missing from current font.
  plt.tight_layout()
F:\博士开题\YOLO\异常信号二次筛选\PS\ch1\main.py:616: UserWarning: Glyph 27169 (\N{CJK UNIFIED IDEOGRAPH-6A21}) missing from current font.
  plt.savefig('model_comparison.png', dpi=300, bbox_inches='tight')
(the same missing-glyph warning repeats for every CJK character used in the figure labels)
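The missing-glyph warnings mean matplotlib's default font has no CJK glyphs, so the Chinese labels render as empty boxes in 'model_comparison.png'. A common fix is to point matplotlib at an installed CJK font before plotting; the font names below are typical on Windows and are assumptions, so substitute a font actually present on the machine:

```python
import matplotlib
matplotlib.use("Agg")                       # headless backend, for saving figures only
import matplotlib.pyplot as plt

# Try common Windows CJK fonts first; matplotlib walks this list in order.
plt.rcParams["font.sans-serif"] = ["Microsoft YaHei", "SimHei", "sans-serif"]
plt.rcParams["axes.unicode_minus"] = False  # keep the minus sign renderable with CJK fonts
```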
Performance comparison chart saved as 'model_comparison.png'
Best model: Deep_CNN_TF, accuracy: 1.0000
Detailed results saved as 'model_results.csv'
(base) PS F:\博士开题\YOLO\异常信号二次筛选\PS\ch1>
^C
(base) PS F:\博士开题\YOLO\异常信号二次筛选\PS\ch1> f:; cd 'f:\博士开题\YOLO\异常信号二次筛选\PS\ch1'; & 'd:\anaconda1\python.exe' 'c:\Users\lejmj\.vscode\extensions\ms-python.debugpy-2025.15.2025101002-win32-x64\bundled\libs\debugpy\launcher' '62751' '--' 'F:\博士开题\YOLO\异常信号二次筛选\PS\ch1\D1_20251013.py'
Using device: cpu
Data directory: F:\博士开题\YOLO\异常信号二次筛选\PS\ch1
Found 4121 images, of which normal signals: 157, abnormal signals: 3964
Training set samples: 3296
Validation set samples: 825
================================================================================
🎯 Starting to train all models
================================================================================
==================================================
Training model: CNN
==================================================
🚀 Starting to train model: CNN
✅ Saved best model: saved_models\CNN_best.pth
Epoch [1/20], Loss: 0.2865, Acc: 0.9624, F1: 0.9809, Time: 156.69s, LR: 1.00e-04, Patience: 0/5
Epoch [2/20], Loss: 0.1499, Acc: 0.9624, F1: 0.9809, Time: 153.04s, LR: 1.00e-04, Patience: 1/5
Epoch [3/20], Loss: 0.1147, Acc: 0.9624, F1: 0.9809, Time: 150.07s, LR: 1.00e-04, Patience: 2/5
Epoch [4/20], Loss: 0.0896, Acc: 0.9624, F1: 0.9809, Time: 149.15s, LR: 1.00e-04, Patience: 3/5
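The accuracy pinned at 0.9624 for the first four epochs is exactly the majority-class share of the validation split, i.e. the model is predicting "abnormal" for everything. A sketch of that baseline (the 31/794 breakdown of the 825 validation samples is an assumption based on stratified sampling, not printed in the log):

```python
val_normal, val_abnormal = 31, 794        # assumed stratified split of 825 val samples
baseline_acc = val_abnormal / (val_normal + val_abnormal)
# baseline_acc ≈ 0.9624, matching the plateau in epochs 1-4 above
```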
✅ Saved best model: saved_models\CNN_best.pth
Epoch [5/20], Loss: 0.0745, Acc: 0.9903, F1: 0.9950, Time: 151.13s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\CNN_best.pth
Epoch [6/20], Loss: 0.0590, Acc: 0.9976, F1: 0.9987, Time: 157.00s, LR: 1.00e-04, Patience: 0/5
Epoch [7/20], Loss: 0.0485, Acc: 0.9891, F1: 0.9943, Time: 152.86s, LR: 1.00e-04, Patience: 1/5
Epoch [8/20], Loss: 0.0400, Acc: 0.9927, F1: 0.9962, Time: 150.87s, LR: 1.00e-04, Patience: 2/5
Epoch [9/20], Loss: 0.0330, Acc: 0.9964, F1: 0.9981, Time: 149.34s, LR: 1.00e-04, Patience: 3/5
Epoch 00010: reducing learning rate of group 0 to 5.0000e-05.
Epoch [10/20], Loss: 0.0317, Acc: 0.9964, F1: 0.9981, Time: 150.11s, LR: 5.00e-05, Patience: 4/5
Epoch [11/20], Loss: 0.0248, Acc: 0.9976, F1: 0.9987, Time: 150.36s, LR: 5.00e-05, Patience: 5/5
🛑 Early stopping triggered at epoch 11
✅ CNN training complete: Acc=0.9976, F1=0.9987
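The "Patience: n/5" counter and the stop at epoch 11 can be reproduced by a small early-stopping driver; a sketch (a hypothetical helper mirroring, but not taken from, the training script, which may track F1 rather than accuracy):

```python
def run_early_stopping(metrics, patience=5):
    """Track the best validation metric; stop after `patience` epochs without improvement."""
    best, bad, stop_epoch = float("-inf"), 0, None
    for epoch, m in enumerate(metrics, start=1):
        if m > best:
            best, bad = m, 0              # new best: save checkpoint, reset patience
        else:
            bad += 1                      # logged as "Patience: bad/patience"
        if bad >= patience:
            stop_epoch = epoch
            break
    return best, stop_epoch

# Validation accuracies from the CNN run above (epochs 1-11):
accs = [0.9624, 0.9624, 0.9624, 0.9624, 0.9903, 0.9976,
        0.9891, 0.9927, 0.9964, 0.9964, 0.9976]
# run_early_stopping(accs) -> (0.9976, 11): early stop at epoch 11, as logged
```

Note that ties with the current best (epoch 11's 0.9976) count as non-improvements, which is why patience reached 5/5 there.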
==================================================
Training model: VGG16
==================================================
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to C:\Users\lejmj/.cache\torch\hub\checkpoints\vgg16-397923af.pth
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 528M/528M [00:06<00:00, 90.5MB/s]
🚀 Starting to train model: VGG16
✅ Saved best model: saved_models\VGG16_best.pth
Epoch [1/20], Loss: 0.0139, Acc: 0.9976, F1: 0.9987, Time: 611.10s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\VGG16_best.pth
Epoch [2/20], Loss: 0.0001, Acc: 0.9988, F1: 0.9994, Time: 597.28s, LR: 1.00e-04, Patience: 0/5
Epoch [3/20], Loss: 0.0000, Acc: 0.9988, F1: 0.9994, Time: 614.49s, LR: 1.00e-04, Patience: 1/5
Epoch [4/20], Loss: 0.0000, Acc: 0.9976, F1: 0.9987, Time: 614.28s, LR: 1.00e-04, Patience: 2/5
Epoch [5/20], Loss: 0.0223, Acc: 0.9624, F1: 0.9809, Time: 604.99s, LR: 1.00e-04, Patience: 3/5
Epoch 00006: reducing learning rate of group 0 to 5.0000e-05.
Epoch [6/20], Loss: 0.0040, Acc: 0.9976, F1: 0.9987, Time: 660.98s, LR: 5.00e-05, Patience: 4/5
Epoch [7/20], Loss: 0.0032, Acc: 0.9976, F1: 0.9987, Time: 33953.69s, LR: 5.00e-05, Patience: 5/5
🛑 Early stopping triggered at epoch 7
✅ VGG16 training complete: Acc=0.9988, F1=0.9994
==================================================
Training model: ResNet50
==================================================
🚀 Starting training: ResNet50
✅ Saved best model: saved_models\ResNet50_best.pth
Epoch [1/20], Loss: 0.1360, Acc: 0.9624, F1: 0.9809, Time: 287.18s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\ResNet50_best.pth
Epoch [2/20], Loss: 0.0656, Acc: 0.9952, F1: 0.9975, Time: 290.83s, LR: 1.00e-04, Patience: 0/5
Epoch [3/20], Loss: 0.0364, Acc: 0.9952, F1: 0.9975, Time: 294.10s, LR: 1.00e-04, Patience: 1/5
Epoch [4/20], Loss: 0.0229, Acc: 0.9952, F1: 0.9975, Time: 261.31s, LR: 1.00e-04, Patience: 2/5
Epoch [5/20], Loss: 0.0173, Acc: 0.9952, F1: 0.9975, Time: 270.63s, LR: 1.00e-04, Patience: 3/5
✅ Saved best model: saved_models\ResNet50_best.pth
Epoch [6/20], Loss: 0.0135, Acc: 0.9964, F1: 0.9981, Time: 278.61s, LR: 1.00e-04, Patience: 0/5
Epoch [7/20], Loss: 0.0117, Acc: 0.9964, F1: 0.9981, Time: 265.74s, LR: 1.00e-04, Patience: 1/5
Epoch [8/20], Loss: 0.0091, Acc: 0.9964, F1: 0.9981, Time: 261.22s, LR: 1.00e-04, Patience: 2/5
Epoch [9/20], Loss: 0.0075, Acc: 0.9964, F1: 0.9981, Time: 260.32s, LR: 1.00e-04, Patience: 3/5
Epoch 00010: reducing learning rate of group 0 to 5.0000e-05.
Epoch [10/20], Loss: 0.0068, Acc: 0.9964, F1: 0.9981, Time: 255.99s, LR: 5.00e-05, Patience: 4/5
Epoch [11/20], Loss: 0.0060, Acc: 0.9964, F1: 0.9981, Time: 257.30s, LR: 5.00e-05, Patience: 5/5
🛑 Early stopping triggered at epoch 11
✅ ResNet50 training complete: Acc=0.9964, F1=0.9981
==================================================
Training model: MobileNetV2
==================================================
🚀 Starting training: MobileNetV2
✅ Saved best model: saved_models\MobileNetV2_best.pth
Epoch [1/20], Loss: 0.1558, Acc: 0.9624, F1: 0.9809, Time: 118.94s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\MobileNetV2_best.pth
Epoch [2/20], Loss: 0.0592, Acc: 0.9964, F1: 0.9981, Time: 119.53s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\MobileNetV2_best.pth
Epoch [3/20], Loss: 0.0294, Acc: 0.9976, F1: 0.9987, Time: 118.99s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\MobileNetV2_best.pth
Epoch [4/20], Loss: 0.0193, Acc: 0.9988, F1: 0.9994, Time: 119.40s, LR: 1.00e-04, Patience: 0/5
Epoch [5/20], Loss: 0.0150, Acc: 0.9988, F1: 0.9994, Time: 119.69s, LR: 1.00e-04, Patience: 1/5
Epoch [6/20], Loss: 0.0108, Acc: 0.9976, F1: 0.9987, Time: 118.99s, LR: 1.00e-04, Patience: 2/5
Epoch [7/20], Loss: 0.0092, Acc: 0.9988, F1: 0.9994, Time: 120.89s, LR: 1.00e-04, Patience: 3/5
Epoch 00008: reducing learning rate of group 0 to 5.0000e-05.
Epoch [8/20], Loss: 0.0067, Acc: 0.9988, F1: 0.9994, Time: 122.08s, LR: 5.00e-05, Patience: 4/5
Epoch [9/20], Loss: 0.0067, Acc: 0.9976, F1: 0.9987, Time: 131.98s, LR: 5.00e-05, Patience: 5/5
🛑 Early stopping triggered at epoch 9
✅ MobileNetV2 training complete: Acc=0.9988, F1=0.9994
==================================================
Training model: DenseNet121
==================================================
🚀 Starting training: DenseNet121
✅ Saved best model: saved_models\DenseNet121_best.pth
Epoch [1/20], Loss: 0.1522, Acc: 0.9624, F1: 0.9809, Time: 280.87s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\DenseNet121_best.pth
Epoch [2/20], Loss: 0.0761, Acc: 0.9903, F1: 0.9950, Time: 291.73s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\DenseNet121_best.pth
Epoch [3/20], Loss: 0.0426, Acc: 0.9939, F1: 0.9969, Time: 283.10s, LR: 1.00e-04, Patience: 0/5
Epoch [4/20], Loss: 0.0279, Acc: 0.9939, F1: 0.9969, Time: 283.65s, LR: 1.00e-04, Patience: 1/5
✅ Saved best model: saved_models\DenseNet121_best.pth
Epoch [5/20], Loss: 0.0191, Acc: 0.9952, F1: 0.9975, Time: 283.20s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\DenseNet121_best.pth
Epoch [6/20], Loss: 0.0153, Acc: 0.9964, F1: 0.9981, Time: 282.58s, LR: 1.00e-04, Patience: 0/5
Epoch [7/20], Loss: 0.0128, Acc: 0.9964, F1: 0.9981, Time: 288.11s, LR: 1.00e-04, Patience: 1/5
Epoch [8/20], Loss: 0.0105, Acc: 0.9964, F1: 0.9981, Time: 302.87s, LR: 1.00e-04, Patience: 2/5
✅ Saved best model: saved_models\DenseNet121_best.pth
Epoch [9/20], Loss: 0.0080, Acc: 0.9988, F1: 0.9994, Time: 300.54s, LR: 1.00e-04, Patience: 0/5
Epoch [10/20], Loss: 0.0071, Acc: 0.9964, F1: 0.9981, Time: 299.75s, LR: 1.00e-04, Patience: 1/5
Epoch [11/20], Loss: 0.0058, Acc: 0.9976, F1: 0.9987, Time: 299.77s, LR: 1.00e-04, Patience: 2/5
Epoch [12/20], Loss: 0.0058, Acc: 0.9988, F1: 0.9994, Time: 300.69s, LR: 1.00e-04, Patience: 3/5
Epoch 00013: reducing learning rate of group 0 to 5.0000e-05.
Epoch [13/20], Loss: 0.0041, Acc: 0.9976, F1: 0.9987, Time: 300.08s, LR: 5.00e-05, Patience: 4/5
Epoch [14/20], Loss: 0.0045, Acc: 0.9988, F1: 0.9994, Time: 300.08s, LR: 5.00e-05, Patience: 5/5
🛑 Early stopping triggered at epoch 14
✅ DenseNet121 training complete: Acc=0.9988, F1=0.9994
==================================================
Training model: EfficientNetB0
==================================================
Downloading: "https://download.pytorch.org/models/efficientnet_b0_rwightman-3dd342df.pth" to C:\Users\lejmj/.cache\torch\hub\checkpoints\efficientnet_b0_rwightman-3dd342df.pth
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20.5M/20.5M [00:00<00:00, 27.3MB/s]
🚀 Starting training: EfficientNetB0
✅ Saved best model: saved_models\EfficientNetB0_best.pth
Epoch [1/20], Loss: 0.2317, Acc: 0.9636, F1: 0.9815, Time: 165.82s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\EfficientNetB0_best.pth
Epoch [2/20], Loss: 0.1030, Acc: 0.9806, F1: 0.9900, Time: 164.88s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\EfficientNetB0_best.pth
Epoch [3/20], Loss: 0.0761, Acc: 0.9952, F1: 0.9975, Time: 165.01s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\EfficientNetB0_best.pth
Epoch [4/20], Loss: 0.0552, Acc: 0.9976, F1: 0.9987, Time: 164.58s, LR: 1.00e-04, Patience: 0/5
✅ Saved best model: saved_models\EfficientNetB0_best.pth
Epoch [5/20], Loss: 0.0458, Acc: 1.0000, F1: 1.0000, Time: 163.72s, LR: 1.00e-04, Patience: 0/5
Epoch [6/20], Loss: 0.0423, Acc: 1.0000, F1: 1.0000, Time: 164.06s, LR: 1.00e-04, Patience: 1/5
Epoch [7/20], Loss: 0.0301, Acc: 1.0000, F1: 1.0000, Time: 164.36s, LR: 1.00e-04, Patience: 2/5
Epoch [8/20], Loss: 0.0342, Acc: 0.9988, F1: 0.9994, Time: 163.27s, LR: 1.00e-04, Patience: 3/5
Epoch 00009: reducing learning rate of group 0 to 5.0000e-05.
Epoch [9/20], Loss: 0.0274, Acc: 1.0000, F1: 1.0000, Time: 164.34s, LR: 5.00e-05, Patience: 4/5
Epoch [10/20], Loss: 0.0251, Acc: 1.0000, F1: 1.0000, Time: 164.54s, LR: 5.00e-05, Patience: 5/5
🛑 Early stopping triggered at epoch 10
✅ EfficientNetB0 training complete: Acc=1.0000, F1=1.0000
================================================================================
✅ All models trained; performance comparison:
================================================================================
Model            Accuracy  Precision  Recall   F1-Score  Avg Time(s)  Epochs
--------------------------------------------------------------------------------
CNN 0.9976 0.9987 0.9987 0.9987 151.87 11
VGG16 0.9988 0.9987 0.9987 0.9994 5379.55 7
ResNet50 0.9964 0.9962 1.0000 0.9981 271.20 11
MobileNetV2 0.9988 0.9987 0.9987 0.9994 121.17 9
DenseNet121 0.9988 0.9987 1.0000 0.9994 292.64 14
EfficientNetB0 1.0000 1.0000 1.0000 1.0000 164.46 10
--------------------------------------------------------------------------------
🏆 Best model: EfficientNetB0 (F1-Score: 1.0000, Accuracy: 1.0000)
💾 Results saved:
- Excel file: training_results\model_training_results.xlsx
- Model files: saved_models/
- Figure files: training_results/
- Best model: EfficientNetB0_best.pth
⏱️ Total training time: 871.78 minutes
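For reference, the Accuracy / Precision / Recall / F1 columns in the table above can be derived from binary test-set predictions as sketched below. This is illustrative only: `binary_metrics` is a hypothetical helper, not the project's evaluation code.

```python
# Illustrative only: deriving Accuracy / Precision / Recall / F1 for a
# binary classifier, as reported per model in the comparison table.
# binary_metrics is a hypothetical helper, not the project's own code.

def binary_metrics(y_true, y_pred, positive=1):
    """Compute (accuracy, precision, recall, f1) for binary labels."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1
```

With only 157 normal vs. 3964 abnormal samples, accuracy alone is misleading here: always predicting "abnormal" already scores about 0.96 (the plateau visible in the early epochs), which is presumably why F1 rather than accuracy is used to select the best model.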