0_2_3 - Backpropagation Through Convolutional Layers: Multi-Channel, No Padding, Stride ≠ 1

Series: Implementing Neural Networks in NumPy

Project repository: https://github.com/yizt/numpy_neuron_network

Fundamentals

0_1 - Backpropagation through fully connected layers and loss functions

0_2_1 - Backpropagation through convolutional layers: single channel, no padding, stride 1

0_2_2 - Backpropagation through convolutional layers: multi-channel, no padding, stride 1

0_2_3 - Backpropagation through convolutional layers: multi-channel, no padding, stride ≠ 1

0_2_4 - Backpropagation through convolutional layers: multi-channel, with padding, stride ≠ 1

0_2_5 - Backpropagation through pooling layers: MaxPooling, AveragePooling, GlobalAveragePooling, GlobalMaxPooling

0_3 - Backpropagation through activation functions: ReLU, LeakyReLU, PReLU, ELU, SELU

0_4 - Optimization methods: SGD, AdaGrad, RMSProp, Adadelta, Adam

DNN Exercises

1_1_1 - Linear regression with a fully connected network

1_1_2 - MNIST handwritten digit recognition with a fully connected network

CNN Exercises

2_1 - NumPy convolutional layer implementation

2_2 - NumPy pooling layer implementation

2_3 - NumPy CNN for MNIST handwritten digit recognition

Download this article: 0_2_3 - Backpropagation Through Convolutional Layers: Multi-Channel, No Padding, Stride ≠ 1

Prerequisites

a) Familiarity with backpropagation through fully connected layers and loss functions

b) Familiarity with backpropagation through convolutional layers: single channel, no padding, stride 1

c) Familiarity with backpropagation through convolutional layers: multi-channel, no padding, stride 1

d) Familiarity with the prerequisites of the three items above

Notation and Conventions

a) $l$ denotes the $l$-th layer of the network; $z^l$ denotes the convolution output of layer $l$, and $z^l_{d,i,j}$ denotes the value at position $(i,j)$ of channel $d$ of layer $l$. The number of channels of $z^l$ is $C^l$, and its height and width are $H^l$ and $\hat{W}^l$ respectively ($\hat{W}$ avoids a clash with the weight symbol $W$).

b) $W^{l-1}$ and $b^{l-1}$ denote the kernel weights and bias connecting layer $l-1$ to layer $l$; the kernel's spatial dimensions are $(k_1^{l-1}, k_2^{l-1})$ and its strides are $(s_1^{l-1}, s_2^{l-1})$.

c) The partial derivative of the loss function $L$ with respect to the output $z^l$ of layer $l$ is written

$$\delta^l = \frac{\partial L}{\partial z^l} \tag{3}$$

Forward Propagation

Under these conventions the kernel weights are $W^{l-1} \in \mathbb{R}^{k_1^{l-1} \times k_2^{l-1} \times C^{l-1} \times C^l}$ and the bias is $b^{l-1} \in \mathbb{R}^{C^l}$, one bias per output channel. The output of channel $d$ of convolutional layer $l$ is then:

$$z^l_{d,i,j} = \sum_{c=1}^{C^{l-1}} \sum_{m=0}^{k_1^{l-1}-1} \sum_{n=0}^{k_2^{l-1}-1} W^{l-1}_{m,n,c,d} \, z^{l-1}_{c,\; i \cdot s_1^{l-1}+m,\; j \cdot s_2^{l-1}+n} + b^{l-1}_d \,, \qquad i \in [0, H^l-1],\; j \in [0, \hat{W}^l-1] \tag{4}$$

where $H^l = (H^{l-1} - k_1^{l-1})/s_1^{l-1} + 1$ and $\hat{W}^l = (\hat{W}^{l-1} - k_2^{l-1})/s_2^{l-1} + 1$.
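To make equation (4) concrete, here is a minimal NumPy sketch of the strided forward pass. It is a naive loop implementation, assuming channel-first arrays `z_prev[c, i, j]` and weights indexed `W[m, n, c, d]` to match the notation above; the function name and layout are illustrative, not the repository's actual API.

```python
import numpy as np

def conv_forward(z_prev, W, b, strides=(1, 1)):
    """Multi-channel convolution, no padding, arbitrary stride (eq. 4).
    z_prev: (C_prev, H_prev, W_prev); W: (k1, k2, C_prev, C_out); b: (C_out,)."""
    k1, k2, C_prev, C_out = W.shape
    s1, s2 = strides
    H_prev, W_prev = z_prev.shape[1:]
    H_out = (H_prev - k1) // s1 + 1   # H^l in the text
    W_out = (W_prev - k2) // s2 + 1   # \hat{W}^l in the text
    z = np.zeros((C_out, H_out, W_out))
    for d in range(C_out):
        for i in range(H_out):
            for j in range(W_out):
                # window starting at (i*s1, j*s2), summed over c, m, n
                patch = z_prev[:, i * s1:i * s1 + k1, j * s2:j * s2 + k2]
                z[d, i, j] = np.sum(patch * W[:, :, :, d].transpose(2, 0, 1)) + b[d]
    return z
```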

Backward Propagation

Weight Gradients

a) First consider the gradient of the loss function $L$ with respect to the layer-$(l-1)$ weights $W^{l-1}$ and bias $b^{l-1}$. Every neuron in channel $d$ of layer $l$ passes a gradient back to the weight $W^{l-1}_{m,n,c,d}$, so:

$$\begin{aligned}
\frac{\partial L}{\partial W^{l-1}_{m,n,c,d}} &= \sum_i \sum_j \frac{\partial L}{\partial z^l_{d,i,j}} \, \frac{\partial z^l_{d,i,j}}{\partial W^{l-1}_{m,n,c,d}} \\
&= \sum_i \sum_j \delta^l_{d,i,j} \, \frac{\partial \left( \sum_{c=1}^{C^{l-1}} \sum_{m=0}^{k_1^{l-1}-1} \sum_{n=0}^{k_2^{l-1}-1} W^{l-1}_{m,n,c,d} \, z^{l-1}_{c,\; i \cdot s_1^{l-1}+m,\; j \cdot s_2^{l-1}+n} + b^{l-1}_d \right)}{\partial W^{l-1}_{m,n,c,d}} \\
&= \sum_i \sum_j \delta^l_{d,i,j} \, z^{l-1}_{c,\; i \cdot s_1^{l-1}+m,\; j \cdot s_2^{l-1}+n}
\end{aligned} \tag{5}$$

Comparing equation (5) with equation (4) of the single-channel article shows that the gradient of the loss $L$ with respect to the layer-$(l-1)$ weights $W^{l-1}_{:,:,c,d}$ is exactly the result of convolving over $z^{l-1}_c$ with $\delta^l_{padding}$ (its meaning is explained below) as the kernel, with no bias term: a single-channel-on-single-channel convolution. A NumPy sketch follows equation (6) below.

b) The gradient of the loss $L$ with respect to the layer-$(l-1)$ bias $b^{l-1}$ is the same as in the multi-channel, stride-1 case:

$$\frac{\partial L}{\partial b^{l-1}_d} = \sum_i \sum_j \delta^l_{d,i,j} \tag{6}$$
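Here is a minimal sketch of equations (5) and (6), in the same illustrative layout as `conv_forward` above. The strided slice `z_prev[c, m::s1, n::s2]` picks out exactly the entries $z^{l-1}_{c,\, i \cdot s_1 + m,\, j \cdot s_2 + n}$, which is what makes this computation equivalent to a convolution with the zero-inserted $\delta^l_{padding}$ as the kernel:

```python
def conv_backward_params(z_prev, delta, W_shape, strides=(1, 1)):
    """Gradients w.r.t. kernel weights (eq. 5) and bias (eq. 6).
    z_prev: (C_prev, H_prev, W_prev); delta: (C_out, H_out, W_out)."""
    k1, k2, C_prev, C_out = W_shape
    s1, s2 = strides
    _, H_out, W_out = delta.shape
    dW = np.zeros(W_shape)
    db = delta.sum(axis=(1, 2))  # eq. (6): sum the deltas of each output channel
    for d in range(C_out):
        for c in range(C_prev):
            for m in range(k1):
                for n in range(k2):
                    # entries z^{l-1}_{c, i*s1+m, j*s2+n} for all output (i, j)
                    patch = z_prev[c, m:m + s1 * H_out:s1, n:n + s2 * W_out:s2]
                    dW[m, n, c, d] = np.sum(delta[d] * patch)  # eq. (5)
    return dW, db
```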

Layer l-1 Gradient

Deriving the partial derivative of the loss with respect to the layer-$(l-1)$ output directly from the formulas is hard, so following the transposed-convolution paper *A guide to convolution arithmetic for deep learning*, we argue in a different way. In the figure below, the input is a $5 \times 5$ convolutional layer; convolving it with a $3 \times 3$ kernel at stride 2 yields a $2 \times 2$ output layer (the blue dots in the figure). Now insert (stride − 1) rows and columns between every pair of adjacent rows and columns of the output, with all inserted elements set to zero, and write $z^l_{padding}$ for the layer obtained by zero-padding $z^l$ this way. The strided forward pass can then be viewed through $z^l_{padding}$ as if a stride-1 convolution had produced it: the inserted zero positions correspond to outputs the strided convolution never computed, so no gradient flows through them.

[Figure: no_padding_strides_transposed]

The backward pass works the same way: it is the result of convolving the flipped kernel over a correspondingly zero-padded $\delta^l$. Let $\delta^l_{padding}$ be the gradient matrix obtained by inserting $(s_1^{l-1}-1, s_2^{l-1}-1)$ rows and columns of zero elements between the rows and columns of $\delta^l$. Then, by equation (8) of the multi-channel article, we have

$$\delta^{l-1}_{c,i,j} = \sum_{d=1}^{C^l} \sum_{m=0}^{k_1^{l-1}-1} \sum_{n=0}^{k_2^{l-1}-1} \mathrm{rot180}(W^{l-1})_{m,n,c,d} \; p\delta^{l\,padding}_{d,\, i+m,\, j+n} \tag{8}$$

where $p\delta^{l\,padding}_{d,i,j}$ is the gradient matrix obtained from $\delta^l$ by first inserting $(s_1^{l-1}-1, s_2^{l-1}-1)$ rows and columns of zero elements between its rows and columns (i.e. $\delta^l_{padding}$), and then padding the outer border with $(k_1^{l-1}-1, k_2^{l-1}-1)$ rows and columns of zeros in height and width respectively.
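A minimal sketch of equation (8), again in the illustrative layout used above: dilate $\delta^l$ with zeros, border-pad it, rotate the kernel by 180°, and run a plain stride-1 convolution. Recovering the input size this way assumes $(H^{l-1} - k_1^{l-1})$ is exactly divisible by $s_1^{l-1}$ (and likewise for the width), as in the forward pass:

```python
def conv_backward_delta(delta, W, strides=(1, 1)):
    """delta^{l-1} via eq. (8). delta: (C_out, H_out, W_out); W: (k1, k2, C_prev, C_out)."""
    k1, k2, C_prev, C_out = W.shape
    s1, s2 = strides
    _, H_out, W_out = delta.shape
    # step 1: insert (s - 1) zero rows/columns between elements -> delta^l_padding
    delta_padding = np.zeros((C_out, (H_out - 1) * s1 + 1, (W_out - 1) * s2 + 1))
    delta_padding[:, ::s1, ::s2] = delta
    # step 2: border-pad with (k - 1) zeros on each side -> p delta^{l padding}
    p_delta = np.pad(delta_padding, ((0, 0), (k1 - 1, k1 - 1), (k2 - 1, k2 - 1)))
    # step 3: stride-1 convolution with the 180-degree-rotated kernel
    W_rot = W[::-1, ::-1, :, :]          # flip both spatial axes
    H_prev = (H_out - 1) * s1 + k1       # recovered H^{l-1} (exact-division assumption)
    W_prev = (W_out - 1) * s2 + k2
    delta_prev = np.zeros((C_prev, H_prev, W_prev))
    for c in range(C_prev):
        for i in range(H_prev):
            for j in range(W_prev):
                patch = p_delta[:, i:i + k1, j:j + k2]  # (C_out, k1, k2)
                delta_prev[c, i, j] = np.sum(patch * W_rot[:, :, c, :].transpose(2, 0, 1))
    return delta_prev
```

These sketches can be sanity-checked numerically: perturb one entry of `z_prev` or `W` by ±ε, rerun `conv_forward`, and compare the finite difference of a scalar function of the output (e.g. its sum, whose $\delta^l$ is all ones) against the corresponding entry returned by `conv_backward_delta` or `conv_backward_params`.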

References

a) Vincent Dumoulin and Francesco Visin, *A guide to convolution arithmetic for deep learning*, arXiv:1603.07285.
