Model Modifications and Comparison

This article compares the ResNet code used in LDL and FocusNet, analyzing their similarities and differences and pointing out that the second implementation is more feature-rich and flexible. It also discusses whether the two implementations' ResNet-50 models would produce the same results if each were run once; because of differences in initialization, architecture, and other details, the results may differ. Finally, it covers the model URLs, the basic building blocks, and the custom initialization.

Understanding the ResNet Model Files

Comparing the ResNet Code in LDL and FocusNet

Similarities

  1. Basic block definitions: both implementations define the BasicBlock and Bottleneck classes as the building blocks of the ResNet models.
  2. Model architectures: both provide constructor functions for ResNet depths from ResNet-18 through ResNet-152 and support pretrained weights.
  3. Imports: both import the core PyTorch modules (e.g., torch, torch.nn).

Differences

  1. Weight loading: the first implementation uses torch.utils.model_zoo.load_url to load pretrained weights, while the second uses torchvision.models.utils.load_state_dict_from_url. The latter is the more specialized utility and is what torchvision itself uses to load pretrained models.
  2. Convolution parameters: in the second implementation, the conv3x3 function supports grouped convolution (groups) and dilated convolution (dilation), which makes it more flexible and able to support designs such as ResNeXt or models with dilated convolutions (see the sketch after this list).
  3. Additional ResNet variants: beyond the basic ResNet models, the second implementation also includes the ResNeXt and Wide ResNet variants, offering more options and potential performance gains.
  4. Parameterized layers and normalization: the second implementation's BasicBlock and Bottleneck definitions take more parameters, such as norm_layer, allowing more customization and extensibility.
  5. Replacing stride with dilation: the second implementation can swap the standard stride for dilated convolutions, which is useful for certain image types and advanced feature-extraction tasks.
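
As a rough sketch of difference 2, the more flexible conv3x3 looks like the version in torchvision's resnet.py, which threads groups and dilation through to nn.Conv2d:

import torch.nn as nn

def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
    # 3x3 convolution with padding; groups enables ResNeXt-style grouped
    # convolution, dilation enables atrous convolution. Padding equals the
    # dilation so the spatial size is preserved at stride 1.
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=dilation, groups=groups, dilation=dilation,
                     bias=False)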

Conclusion

The two implementations differ clearly in functionality and flexibility. The first is more basic and suits entry-level or standard ResNet use. The second offers more advanced features, such as support for the ResNeXt and Wide ResNet architectures and more advanced convolution options, and suits scenarios that call for more complex or customized models.

The two implementations are therefore not fully equivalent. The second is functionally richer and more flexible and can cover a wider range of needs and experimental setups. If your project needs these advanced features, the second implementation is the better choice; if the requirements are simple, the first may be more direct and effective.

Would the ResNet-50 models in these two implementations produce the same results if each were run once?

The ResNet-50 models in the two implementations are structurally very similar and both follow the standard ResNet architecture, but they differ in some implementation details and potentially in initialization. These differences can affect behavior during training and inference. The main differences, and how they might affect experimental results, are as follows:

Initialization

  • The first implementation (the non-torchvision version) initializes convolution layers with Kaiming-normal initialization and applies specific initialization to the fully connected and batch-normalization layers.
  • The second implementation (the torchvision version) uses the kaiming_normal_ and constant_ initializers inside the model definition and may apply extra initialization options at construction time, such as zero_init_residual, which can have a noticeable effect on final performance (a sketch follows).
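
A minimal sketch of the torchvision-style initialization, including the zero_init_residual option (this mirrors the loop in torchvision's resnet.py and assumes a Bottleneck class with a bn3 attribute, like the one shown later in this article):

import torch.nn as nn

def init_weights(model, zero_init_residual=False):
    # Kaiming-normal conv weights, unit-gamma/zero-beta normalization layers.
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
        elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
            nn.init.constant_(m.weight, 1)
            nn.init.constant_(m.bias, 0)
    if zero_init_residual:
        # Zero the last BN gamma in each residual branch so every block
        # starts out as an identity mapping, which can improve accuracy.
        for m in model.modules():
            if isinstance(m, Bottleneck):
                nn.init.constant_(m.bn3.weight, 0)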

Architectural details

  • The second implementation supports dilated convolution through the replace_stride_with_dilation parameter, which lets the model enlarge its receptive field without reducing spatial resolution. The default ResNet-50 call does not use dilation, but the option shows the extra configuration flexibility the second implementation provides (see the sketch below).
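
For illustration, here is how torchvision's resnet50 can be asked to trade stride for dilation in the last two stages (a sketch; the keyword argument is forwarded through to ResNet.__init__ in torchvision's implementation):

import torch
from torchvision.models import resnet50

# Replace the stride-2 downsampling in layer3 and layer4 with dilation:
# for a 224x224 input the pre-pooling feature map stays at 28x28
# (output stride 8) instead of shrinking to 7x7 (output stride 32).
model = resnet50(replace_stride_with_dilation=[False, True, True])
out = model(torch.randn(1, 3, 224, 224))  # classifier output: (1, 1000)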

Weight loading

  • The second implementation loads pretrained weights through torchvision.models.utils.load_state_dict_from_url, which may differ in handling details from the torch.utils.model_zoo.load_url call used by the first implementation, although any effect of these differences on model performance is likely negligible (a comparison sketch follows).
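
A minimal sketch of the two loading paths; both return an ordinary state_dict keyed by parameter name (note that torchvision.models.utils.load_state_dict_from_url has since moved to torch.hub in newer releases):

import torch.utils.model_zoo as model_zoo
from torch.hub import load_state_dict_from_url

url = 'https://download.pytorch.org/models/resnet50-19c8e357.pth'

state_dict_a = model_zoo.load_url(url)                       # first implementation's style
state_dict_b = load_state_dict_from_url(url, progress=True)  # second implementation's style

assert state_dict_a.keys() == state_dict_b.keys()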

Potential execution differences

  • The two implementations may also differ in how they handle the training mode of batch-normalization and other layers. For example, the first implementation applies special handling to its batch-normalization layers (keeping them in eval mode during training), which can affect training dynamics and the model's final performance.

Predictability of the results

If the two implementations run in exactly the same hardware and software environment with identical initial states, they should in theory produce very similar results. In practice, because of the implementation differences above, some performance gaps may still be observed, showing up as differences in convergence speed, final accuracy, or behavior on specific tasks. (A seeding sketch for making runs comparable follows.)
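
The usual knobs for making two runs comparable are sketched below; even with all of them set, different initializers and architectural details will still cause divergence, so repeated runs remain necessary for a fair comparison:

import random
import numpy as np
import torch

def set_seed(seed=42):
    # Seed every RNG the training loop touches and make cuDNN deterministic.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False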

Conclusion

Although the two implementations share the same basic architecture, differences in initialization, architectural details, and possible small execution differences mean that their experimental results may differ. When comparing the two implementations' performance, it is best to run repeated experiments under identical conditions, so that any observed performance difference can be checked for statistical significance.

Model URLs

Defines the download links for the pretrained weights of the various ResNet models; these can be used to initialize a model for better efficiency and accuracy.

model_urls = {
    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}

Basic Components

  • conv3x3: defines the commonly used 3x3 convolution layer, without a bias term.
  • BasicBlock and Bottleneck: these two classes are the basic building blocks of the ResNet models. BasicBlock is used in shallower networks (ResNet-18 and ResNet-34), while Bottleneck is used in deeper ones (ResNet-50 and above). Both consist of several convolution and batch-normalization layers, with a ReLU activation applied at the end of each block (a quick shape check follows).
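
A quick shape check, assuming the BasicBlock and Bottleneck classes from the LDL code below: BasicBlock keeps the channel count (expansion = 1), while Bottleneck multiplies it by 4 (expansion = 4), so the Bottleneck's identity path needs a 1x1 downsample to match:

import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)
print(BasicBlock(64, 64)(x).shape)    # torch.Size([1, 64, 56, 56])

downsample = nn.Sequential(
    nn.Conv2d(64, 256, kernel_size=1, bias=False),
    nn.BatchNorm2d(256))
print(Bottleneck(64, 64, downsample=downsample)(x).shape)  # torch.Size([1, 256, 56, 56])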

Custom Initialization

  • weights_init: defines the initialization strategy for the weights and biases of the different layer types.

LDL's resnet:

BasicBlock, Bottleneck, model_zoo

import torch
import torch.nn as nn
import math
import torch.utils.model_zoo as model_zoo
from torch.nn import init
import torch.nn.functional as F
import numpy as np


__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
           'resnet152']


model_urls = {
    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}


def conv3x3(in_planes, out_planes, stride=1):
    """3x3 convolution with padding"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1, bias=False)


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


def weights_init(m):
    # Custom initialization: Kaiming-normal conv weights, unit-gamma/zero-beta
    # BatchNorm, and a small-std normal for linear layers.
    if isinstance(m, nn.Conv2d):
        init.kaiming_normal_(m.weight, mode='fan_out')
        if m.bias is not None:
            init.constant_(m.bias, 0)
    elif isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):
        init.constant_(m.weight, 1)
        init.constant_(m.bias, 0)
    elif isinstance(m, nn.Linear):
        init.normal_(m.weight, std=0.001)
        if m.bias is not None:
            init.constant_(m.bias, 0)


class ResNet(nn.Module):

    def __init__(self, block, layers, num_classes=1000):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(512 * block.expansion, num_classes)

        # Load ImageNet-pretrained ResNet-50 weights before the head is replaced below.
        self.load_state_dict(model_zoo.load_url(model_urls['resnet50']))

        # for m in self.modules():
        #     if isinstance(m, nn.Conv2d):
        #         n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
        #         m.weight.data.normal_(0, math.sqrt(2. / n))
        #     elif isinstance(m, nn.BatchNorm2d):
        #         m.weight.data.fill_(1)
        #         m.bias.data.zero_()

        # self.avgpool = nn.Sequential(nn.AdaptiveAvgPool2d(4), nn.AdaptiveMaxPool2d(1))
        # self.fc.apply(weights_init)
        #

        # Replace the fixed 7x7 average pool and the 1000-way ImageNet head
        # with adaptive pooling and a 4-way classifier.
        self.avgpool = nn.AdaptiveAvgPool2d(1)

        self.fc = nn.Linear(512 * block.expansion, 4)
        # self.fc = nn.Sequential(nn.Linear(2048, 512),
        #                         nn.ReLU(inplace=True),
        #                         nn.Dropout(p=0.5),
        #                         nn.Linear(512, 4))
        self.fc.apply(weights_init)

        # self.counting = nn.Linear(512 * block.expansion, 65)
        # self.counting = nn.Sequential(nn.Linear(2048, 512),
        #                               nn.ReLU(inplace=True),
        #                               nn.Dropout(p=0.5),
        #                               nn.Linear(512, 65))
        # self.counting.apply(weights_init)

        def set_bn_fix(m):
            # Freeze the affine parameters of every BatchNorm layer in the
            # pretrained backbone so they are not updated during fine-tuning.
            classname = m.__class__.__name__
            if classname.find('BatchNorm') != -1:
                for p in m.parameters():
                    p.requires_grad = False

        self.bn1.apply(set_bn_fix)
        self.layer1.apply(set_bn_fix)
        self.layer2.apply(set_bn_fix)
        self.layer3.apply(set_bn_fix)
        self.layer4.apply(set_bn_fix)

        # self.conv2 = nn.Sequential(nn.Conv2d(2048, 2048, kernel_size=3, stride=2, padding=1, bias=True),
        #                            nn.ReLU(inplace=True))
        # self.conv2.apply(weights_init)

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x, x_new):

        batch_size = len(x)

        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        # if batch_size % 16 == 0 and self.training == True:
        #     x_new = F.adaptive_max_pool2d(x[x_new].view(len(x_new), 64, 112, 112), 56)
        #     x = torch.cat((x, x_new))

        # if batch_size % 16 == 0 and self.training == True:
        #     for i in range(len(x_new)):
        #         if i == 0:
        #             temp = x[x_new[0]].view(1, 4, 64, 56, 56)
        #         else:
        #             temp = torch.cat((temp, x[x_new[i]].view(1, 4, 64, 56, 56)), 0)
        #     temp = temp.view(len(x_new), 64, 112, 112)
        #     x_new = F.adaptive_max_pool2d(temp, 56)
            # x_new = self.conv2(temp)
            # x = torch.cat((x, x_new))
            # print('te')

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        '''
        if batch_size % 16 == 0 and self.training == True:
            for i in range(batch_size):
                if i == 0:
                    temp = x[x_new[0]].view(1, 4, 2048, 7, 7)
                else:
                    temp = torch.cat((temp, x[x_new[i]].view(1, 4, 2048, 7, 7)), 0)
            temp = temp.view(batch_size, 2048, 14, 14)
            x_new = F.adaptive_max_pool2d(temp, 7)
            # x_new = self.conv2(temp)
            x = torch.cat((x, x_new))
            # print('te')
        '''
        # if batch_size % 16 == 0 and self.training == True:
        #     x_new = F.adaptive_max_pool2d(x[x_new].view(len(x_new), 2048, 14, 14), 7)
        #     x = torch.cat((x, x_new))

        # x = F.adaptive_max_pool2d(x, 4)  # .....

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)

        cls = self.fc(x)
        # cou = self.counting(x)

        # cls = F.softmax(cls) + 1e-4 

        # cou = F.softmax(cou) + 1e-4

        # cou2cls = torch.stack((torch.sum(cou[:, :5], 1), torch.sum(cou[:, 5:20], 1), torch.sum(cou[:, 20:50], 1),
        #                        torch.sum(cou[:, 50:], 1)), 1)
        # cou2cls = torch.log(cou2cls)

        # Exception
        # cou2cou = torch.sum(cou * torch.from_numpy(np.array(range(1, 66))).float().cuda(), 1)

        # return cls, cou, cou2cls
        return cls

    def train(self, mode=True):
        # Override train so that the training mode is set as we want
        nn.Module.train(self, mode)
        if mode:
            # Set fixed blocks to be in eval mode
            # self.conv1.eval()
            # self.bn1.eval()
            # self.relu.eval()
            # self.maxpool.eval()
            # self.layer1.eval()

            def set_bn_eval(m):
                classname = m.__class__.__name__
                if classname.find('BatchNorm') != -1:
                    m.eval()

            self.bn1.apply(set_bn_eval)
            self.layer1.apply(set_bn_eval)
            self.layer2.apply(set_bn_eval)
            self.layer3.apply(set_bn_eval)
            self.layer4.apply(set_bn_eval)


def resnet18(pretrained=False, **kwargs):
    """Constructs a ResNet-18 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
    return model


def resnet34(pretrained=False, **kwargs):
    """Constructs a ResNet-34 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
    return model


def resnet50(pretrained=False, **kwargs):
    """Constructs a ResNet-50 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
    # if pretrained:
    #     model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
    return model


def resnet101(pretrained=False, **kwargs):
    """Constructs a ResNet-101 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))
    return model


def resnet152(pretrained=False, **kwargs):
    """Constructs a ResNet-152 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))
    return model
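
A hedged usage sketch for this modified model (constructing it downloads the ImageNet ResNet-50 weights; forward takes a second argument, x_new, which the active code path ignores, so None can be passed):

import torch

model = resnet50()   # pretrained ResNet-50 backbone with a 4-way head
model.train()        # the overridden train() keeps all BN layers in eval mode
cls = model(torch.randn(2, 3, 224, 224), None)  # class logits of shape (2, 4)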

霹雳吧啦's resnet50

BasicBlock, Bottleneck

import torch.nn as nn
import torch


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        out += identity
        out = self.relu(out)

        return out


class Bottleneck(nn.Module):
    """
    注意:原论文中,在虚线残差结构的主分支上,第一个1x1卷积层的步距是2,第二个3x3卷积层步距是1。
    但在pytorch官方实现过程中是第一个1x1卷积层的步距是1,第二个3x3卷积层步距是2,
    这么做的好处是能够在top1上提升大概0.5%的准确率。
    可参考Resnet v1.5 https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch
    """
    expansion = 4

    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()

        width = int(out_channel * (width_per_group / 64.)) * groups

        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                               kernel_size=1, stride=1, bias=False)  # squeeze channels
        self.bn1 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel*self.expansion,
                               kernel_size=1, stride=1, bias=False)  # unsqueeze channels
        self.bn3 = nn.BatchNorm2d(out_channel*self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += identity
        out = self.relu(out)

        return out


class ResNet(nn.Module):

    def __init__(self,
                 block,
                 blocks_num,
                 num_classes=1000,
                 include_top=True,
                 groups=1,
                 width_per_group=64):
        super(ResNet, self).__init__()
        self.include_top = include_top
        self.in_channel = 64

        self.groups = groups
        self.width_per_group = width_per_group

        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))

        layers = []
        layers.append(block(self.in_channel,
                            channel,
                            downsample=downsample,
                            stride=stride,
                            groups=self.groups,
                            width_per_group=self.width_per_group))
        self.in_channel = channel * block.expansion

        for _ in range(1, block_num):
            layers.append(block(self.in_channel,
                                channel,
                                groups=self.groups,
                                width_per_group=self.width_per_group))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)

        return x


def resnet34(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet34-333f7ec4.pth
    return ResNet(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet50(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet50-19c8e357.pth
    return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet101(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet101-5d3b4d8f.pth
    return ResNet(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)


def resnext50_32x4d(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth
    groups = 32
    width_per_group = 4
    return ResNet(Bottleneck, [3, 4, 6, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)


def resnext101_32x8d(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth
    groups = 32
    width_per_group = 8
    return ResNet(Bottleneck, [3, 4, 23, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)
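
A hedged usage sketch: this version does not load weights itself, so the official checkpoint (the URL in the comment above) can be pulled in manually; when the classifier head differs from the 1000-class original, dropping the fc entries and loading with strict=False leaves only the new head randomly initialized:

import torch
from torch.hub import load_state_dict_from_url

net = resnet50(num_classes=5)
state_dict = load_state_dict_from_url(
    'https://download.pytorch.org/models/resnet50-19c8e357.pth')
# Remove the 1000-way classifier weights, which no longer match the 5-way head.
del state_dict['fc.weight'], state_dict['fc.bias']
missing, unexpected = net.load_state_dict(state_dict, strict=False)
print(missing)  # only the freshly initialized fc parameters remain

out = net(torch.randn(1, 3, 224, 224))  # (1, 5)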

DED's ResNet50

import torch
from torch import nn
import torchvision.models as models



class ResNet18(nn.Module):
    def __init__(self, class_num, pretrained=True):
        super(ResNet18, self).__init__()
        resnet18 = models.resnet18(pretrained=pretrained)
        self.conv1 = resnet18.conv1
        self.bn1 = resnet18.bn1
        self.relu = resnet18.relu
        self.maxpool = resnet18.maxpool
        self.layer1 = resnet18.layer1
        self.layer2 = resnet18.layer2
        self.layer3 = resnet18.layer3
        self.layer4 = resnet18.layer4
        self.avgpool = resnet18.avgpool
        self.fc = nn.Linear(512, class_num)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x



class Res18_PK(nn.Module):
    def __init__(self, input_channel, class_num, pretrained=True):
        super(Res18_PK, self).__init__()
        resnet18 = models.resnet18(pretrained=pretrained)
        # A 1x1 conv maps an arbitrary number of input channels down to the
        # 3 channels the pretrained ResNet stem expects.
        self.pre_process = nn.Sequential(
            nn.Conv2d(input_channel, 3, (1, 1)),
            nn.ReLU(),
        )

        self.conv1 = resnet18.conv1
        self.bn1 = resnet18.bn1
        self.relu = resnet18.relu
        self.maxpool = resnet18.maxpool
        self.layer1 = resnet18.layer1
        self.layer2 = resnet18.layer2
        self.layer3 = resnet18.layer3
        self.layer4 = resnet18.layer4
        self.avgpool = resnet18.avgpool
        self.fc = nn.Linear(512, class_num)

    def forward(self, x):
        x = self.pre_process(x)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x


class ResNet34(nn.Module):
    def __init__(self, class_num, pretrained=True):
        super(ResNet34, self).__init__()
        resnet34 = models.resnet34(pretrained=pretrained)
        self.conv1 = resnet34.conv1
        self.bn1 = resnet34.bn1
        self.relu = resnet34.relu
        self.maxpool = resnet34.maxpool
        self.layer1 = resnet34.layer1
        self.layer2 = resnet34.layer2
        self.layer3 = resnet34.layer3
        self.layer4 = resnet34.layer4
        self.avgpool = resnet34.avgpool
        self.fc = nn.Linear(512, class_num)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x


class Res34_PK(nn.Module):
    def __init__(self, input_channel, class_num, pretrained=True):
        super(Res34_PK, self).__init__()
        self.pre_process = nn.Sequential(
            nn.Conv2d(input_channel, 3, (1, 1)),
            nn.ReLU(),
        )

        resnet34 = models.resnet34(pretrained=pretrained)
        self.conv1 = resnet34.conv1
        self.bn1 = resnet34.bn1
        self.relu = resnet34.relu
        self.maxpool = resnet34.maxpool
        self.layer1 = resnet34.layer1
        self.layer2 = resnet34.layer2
        self.layer3 = resnet34.layer3
        self.layer4 = resnet34.layer4
        self.avgpool = resnet34.avgpool
        self.fc = nn.Linear(512, class_num)

    def forward(self, x):
        x = self.pre_process(x)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x


class ResNet50(nn.Module):
    def __init__(self, class_num, pretrained=True):
        super(ResNet50, self).__init__()
        resnet50 = models.resnet50(pretrained=pretrained)
        self.conv1 = resnet50.conv1
        self.bn1 = resnet50.bn1
        self.relu = resnet50.relu
        self.maxpool = resnet50.maxpool
        self.layer1 = resnet50.layer1
        self.layer2 = resnet50.layer2
        self.layer3 = resnet50.layer3
        self.layer4 = resnet50.layer4
        self.avgpool = resnet50.avgpool
        self.fc = nn.Linear(2048, class_num)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x
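
A short usage sketch (the class_num=4 and the 5-channel input below are hypothetical values for illustration; pretrained=True downloads the ImageNet weights):

import torch

# Standard RGB classifier with a replaced 4-way head.
model = ResNet50(class_num=4, pretrained=True)
logits = model(torch.randn(2, 3, 224, 224))   # (2, 4)

# Non-RGB input: the 1x1 pre_process conv adapts 5 channels to 3.
pk = Res18_PK(input_channel=5, class_num=4, pretrained=True)
logits = pk(torch.randn(2, 5, 224, 224))      # (2, 4)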
