Breast Cancer Histopathology Image Classification — "A Novel Architecture to Classify Histopathology Images Using CNN", Part 2: Reproducing the Model Code

Figure 1: Overall model architecture

The paper tackles breast cancer classification; this post is a simple reproduction. The model is shown in Figure 1 and described in detail in Figure 2. One further detail: the batch size is 64. (The paper's comparison of activation functions and of the effect of dropout vs. BN layers is not covered here; both are easy to explore with small changes to the code below.)

Figure 2: Architecture details

Full code reproduction:

1. Imports

import torch 
from torch import nn
from torchinfo import summary   

2. Define the block described in the paper: three convolutions + one BN layer + one max-pooling layer

class TCIS(nn.Module):
    def __init__(self, channel, padding=True, First=False):
        super().__init__()
        self.channel = channel
        self.padding = padding
        self.First = First
        self.inchannel = self.channel // 2

        # The first convolution of the first block receives 3 input channels,
        # so it is defined separately.
        self.conv_first = nn.Conv2d(in_channels=3, out_channels=self.channel,
                                    kernel_size=3, stride=1, padding=1)
        # In every block except the first, the first convolution takes half of
        # the current channel count as input.
        self.conv1_1 = nn.Conv2d(in_channels=self.inchannel, out_channels=self.channel,
                                 kernel_size=3, stride=1, padding=1)
        # Second and third convolutions for every block except the fifth (padded).
        # NB: this layer is applied twice in forward(), so the second and third
        # convolutions share weights (torchinfo marks the reuse "(recursive)");
        # define a separate conv1_3 if independent weights are wanted.
        self.conv1_2 = nn.Conv2d(in_channels=self.channel, out_channels=self.channel,
                                 kernel_size=3, stride=1, padding=1)
        # Second and third convolutions for the fifth block (no padding).
        self.conv2_2 = nn.Conv2d(in_channels=self.channel, out_channels=self.channel,
                                 kernel_size=3, stride=1)
        # Batch-normalization layer
        self.bn = nn.BatchNorm2d(self.channel)
        # Max-pooling layer
        self.mp = nn.MaxPool2d(kernel_size=2)

    def forward(self, x):
        x1 = self.conv_first(x) if self.First else self.conv1_1(x)
        if self.padding:
            x2 = self.conv1_2(x1)
            x3 = self.conv1_2(x2)
        else:
            x2 = self.conv2_2(x1)
            x3 = self.conv2_2(x2)
        xbn = self.bn(x3)
        xout = self.mp(xbn)
        return xout
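Before wiring the blocks together, the padded vs. unpadded behaviour can be sanity-checked with standalone layers. This sketch mimics the fifth block's spatial sizes (the channel counts are simplified to 512 throughout; the weights are random, only the shapes matter):

```python
import torch
from torch import nn

x = torch.randn(1, 512, 6, 6)  # spatial size entering the fifth block

padded = nn.Conv2d(512, 512, kernel_size=3, padding=1)  # like conv1_2
unpadded = nn.Conv2d(512, 512, kernel_size=3)           # like conv2_2
pool = nn.MaxPool2d(kernel_size=2)

y = padded(x)    # padding=1 keeps 6x6
y = unpadded(y)  # no padding: 6x6 -> 4x4
y = unpadded(y)  # 4x4 -> 2x2
y = pool(y)      # 2x2 -> 1x1
print(y.shape)   # torch.Size([1, 512, 1, 1])
```

This is exactly the 6 → 4 → 2 → 1 shrinkage visible in the torchinfo summary for the fifth block, and it explains why the last block drops padding: it squeezes the map down to 1×1 before the fully connected layers.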

3. Define the main network

class NACHICNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Block 1
        self.convb1 = TCIS(32, padding=True, First=True)
        # Block 2
        self.convb2 = TCIS(64)
        # Block 3
        self.convb3 = TCIS(128)
        # Block 4
        self.convb4 = TCIS(256)
        # Block 5
        self.convb5 = TCIS(512, padding=False)

        # After block 5 the feature map is 512 x 1 x 1 for a 96x96 input,
        # so each sample flattens to 512 features.
        self.fc1 = nn.Linear(512, 512)
        self.fc2 = nn.Linear(512, 512)

    def forward(self, x):
        x1 = self.convb1(x)
        x2 = self.convb2(x1)
        x3 = self.convb3(x2)
        x4 = self.convb4(x3)
        x5 = self.convb5(x4)

        # Flatten per sample, as in the paper's Flatten layer; start_dim=1 keeps
        # the batch dimension. (flatten() with no arguments would also fold the
        # batch dimension, making fc1's in_features depend on the batch size.
        # A common alternative is global average pooling followed by view().)
        x6 = x5.flatten(start_dim=1)

        x6 = self.fc1(x6)
        x7 = self.fc2(x6)

        return torch.sigmoid(x7)
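The flattening step deserves a concrete demonstration: PyTorch's `flatten()` with no arguments folds every dimension including the batch, while `flatten(start_dim=1)` yields one feature vector per sample, and that choice determines what `in_features` the first fully connected layer must have. A standalone comparison:

```python
import torch

# Shape of the fifth block's output for a batch of 64 (96x96 input).
x = torch.randn(64, 512, 1, 1)

print(x.flatten().shape)             # torch.Size([32768]) - batch folded in
print(x.flatten(start_dim=1).shape)  # torch.Size([64, 512]) - one vector per sample
```

Note that 32768 = 64 × 512, i.e. the fully flattened size depends on the batch size, so a model built on it only works at one fixed batch size.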

4. Model test code

if __name__ == "__main__":
    # fall back to CPU when CUDA is unavailable
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # instantiate the model
    net = NACHICNN().to(device)
    # print a layer-by-layer summary
    show_ = summary(net, input_size=(64, 3, 96, 96), device=device)

5. Model summary (torchinfo output; generated with a batch-collapsing flatten() and fc1 = nn.Linear(32768, 512), at batch size 64)
Layer (type:depth-idx) Output Shape Param #

NACHICNN [512] –
├─TCIS: 1-1 [64, 32, 48, 48] 13,888
│ └─Conv2d: 2-1 [64, 32, 96, 96] 896
│ └─Conv2d: 2-2 [64, 32, 96, 96] 9,248
│ └─Conv2d: 2-3 [64, 32, 96, 96] (recursive)
│ └─BatchNorm2d: 2-4 [64, 32, 96, 96] 64
│ └─MaxPool2d: 2-5 [64, 32, 48, 48] –
├─TCIS: 1-2 [64, 64, 24, 24] 38,720
│ └─Conv2d: 2-6 [64, 64, 48, 48] 18,496
│ └─Conv2d: 2-7 [64, 64, 48, 48] 36,928
│ └─Conv2d: 2-8 [64, 64, 48, 48] (recursive)
│ └─BatchNorm2d: 2-9 [64, 64, 48, 48] 128
│ └─MaxPool2d: 2-10 [64, 64, 24, 24] –
├─TCIS: 1-3 [64, 128, 12, 12] 151,168
│ └─Conv2d: 2-11 [64, 128, 24, 24] 73,856
│ └─Conv2d: 2-12 [64, 128, 24, 24] 147,584
│ └─Conv2d: 2-13 [64, 128, 24, 24] (recursive)
│ └─BatchNorm2d: 2-14 [64, 128, 24, 24] 256
│ └─MaxPool2d: 2-15 [64, 128, 12, 12] –
├─TCIS: 1-4 [64, 256, 6, 6] 597,248
│ └─Conv2d: 2-16 [64, 256, 12, 12] 295,168
│ └─Conv2d: 2-17 [64, 256, 12, 12] 590,080
│ └─Conv2d: 2-18 [64, 256, 12, 12] (recursive)
│ └─BatchNorm2d: 2-19 [64, 256, 12, 12] 512
│ └─MaxPool2d: 2-20 [64, 256, 6, 6] –
├─TCIS: 1-5 [64, 512, 1, 1] 2,374,144
│ └─Conv2d: 2-21 [64, 512, 6, 6] 1,180,160
│ └─Conv2d: 2-22 [64, 512, 4, 4] 2,359,808
│ └─Conv2d: 2-23 [64, 512, 2, 2] (recursive)
│ └─BatchNorm2d: 2-24 [64, 512, 2, 2] 1,024
│ └─MaxPool2d: 2-25 [64, 512, 1, 1] –
├─Linear: 1-6 [512] 16,777,728
├─Linear: 1-7 [512] 262,656

Total params: 24,929,760
Trainable params: 24,929,760
Non-trainable params: 0
Total mult-adds (G): 66.72

Input size (MB): 7.08
Forward/backward pass size (MB): 864.03
Params size (MB): 87.02
Estimated Total Size (MB): 958.13
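The per-layer parameter counts in the table can be cross-checked by hand: a 3×3 convolution has in_ch × out_ch × 9 weights plus out_ch biases, and a fully connected layer has in_features × out_features weights plus out_features biases. A quick check against a few rows:

```python
# Parameter count of a 3x3 Conv2d: in_ch * out_ch * 3 * 3 weights + out_ch biases.
def conv3x3_params(cin, cout):
    return cin * cout * 3 * 3 + cout

# Parameter count of a Linear layer: in_features * out_features + out_features.
def linear_params(fin, fout):
    return fin * fout + fout

print(conv3x3_params(3, 32))      # 896       (Conv2d 2-1)
print(conv3x3_params(32, 32))     # 9248      (Conv2d 2-2)
print(conv3x3_params(256, 512))   # 1180160   (Conv2d 2-21)
print(linear_params(32768, 512))  # 16777728  (Linear 1-6)
```

All four match the table above; the BatchNorm2d rows likewise equal 2 × channels (weight and bias per channel).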


To be continued...
1. Data loading
2. Training code
3. Test code
4. Main program
