A Simple GoogLeNet Implementation

This post walks through implementing the classic GoogLeNet deep learning model from scratch, covering the construction of its Inception module and the training process. It is suitable hands-on practice for both beginners and more advanced deep learning practitioners.

```
import torch
from torch import nn

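# Basic unit used throughout the network: convolution -> batch norm -> ReLU.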
def conv_relu(in_channel, out_channel, kernel, stride=1, padding=0):
    layer = nn.Sequential(
        nn.Conv2d(in_channel, out_channel, kernel, stride, padding),
        nn.BatchNorm2d(out_channel, eps=1e-3),
        nn.ReLU(True)
    )
    return layer

class inception(nn.Module):
    def __init__(self,in_channel,out1_1,out2_1,out2_3,out3_1,out3_5,out4_1):
        super(inception,self).__init__()
        # Branch 1: a single 1x1 convolution
        self.branch1_1 = conv_relu(in_channel,out1_1,1)

        # Branch 2: 1x1 reduction followed by a 3x3 convolution
        self.branch3_3 = nn.Sequential(
            conv_relu(in_channel, out2_1, 1),
            conv_relu(out2_1, out2_3, 3, padding=1)
        )

        # Branch 3: 1x1 reduction followed by a 5x5 convolution
        self.branch5_5 = nn.Sequential(
            conv_relu(in_channel, out3_1, 1),
            conv_relu(out3_1, out3_5, 5, padding=2)
        )

        # Branch 4: 3x3 max pooling followed by a 1x1 convolution
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            conv_relu(in_channel, out4_1, 1)
        )

    def forward(self, x):
        f1 = self.branch1_1(x)
        f2 = self.branch3_3(x)
        f3 = self.branch5_5(x)
        f4 = self.branch_pool(x)
        output = torch.cat((f1, f2, f3, f4), dim=1)
        return output
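
# An inception block's output channel count is out1_1 + out2_3 + out3_5 + out4_1;
# e.g. inception(192, 64, 96, 128, 16, 32, 32) yields 64 + 128 + 32 + 32 = 256 channels.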

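# The full network: a two-block convolutional stem, nine inception blocks
# arranged in three stages, average pooling, and a linear classifier.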
class googlenet(nn.Module):
    def __init__(self, in_channel, num_classes, verbose=False):
        super(googlenet, self).__init__()
        self.verbose = verbose

        self.block1 = nn.Sequential(
            conv_relu(in_channel, out_channel=64, kernel=7, stride=2, padding=3),
            nn.MaxPool2d(3,2)
        )

        self.block2 = nn.Sequential(
            conv_relu(64, 64, kernel=1),
            conv_relu(64, 192, kernel=3, padding=1),
            nn.MaxPool2d(3, 2)
        )

        self.block3 = nn.Sequential(
            inception(192, 64, 96, 128, 16, 32, 32),
            inception(256, 128, 128, 192, 32, 96, 64),
            nn.MaxPool2d(3, 2)
        )

        self.block4 = nn.Sequential(
            inception(480, 192, 96, 208, 16, 48, 64),
            inception(512, 160, 112, 224, 24, 64, 64),
            inception(512, 128, 128, 256, 24, 64, 64),
            inception(512, 112, 144, 288, 32, 64, 64),
            inception(528, 256, 160, 320, 32, 128, 128),
            nn.MaxPool2d(3, 2)
        )

        self.block5 = nn.Sequential(
            inception(832, 256, 160, 320, 32, 128, 128),
            inception(832, 384, 192, 384, 48, 128, 128),
            nn.AvgPool2d(2)
        )

        self.classifier = nn.Linear(1024, num_classes)

    def forward(self, x):
        x = self.block1(x)
        if self.verbose:
            print('block 1 output: {}'.format(x.shape))
        x = self.block2(x)
        if self.verbose:
            print('block 2 output: {}'.format(x.shape))
        x = self.block3(x)
        if self.verbose:
            print('block 3 output: {}'.format(x.shape))
        x = self.block4(x)
        if self.verbose:
            print('block 4 output: {}'.format(x.shape))
        x = self.block5(x)
        if self.verbose:
            print('block 5 output: {}'.format(x.shape))
        x = x.view(x.shape[0], -1)
        x = self.classifier(x)
        return x

# Quick smoke test: pass a dummy 96x96 RGB image through the network.
test_net = googlenet(3, 10, verbose=True)
test_x = torch.zeros(1, 3, 96, 96)
test_y = test_net(test_x)
print('output: {}'.format(test_y.shape))
```
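
With the 96x96 dummy input, the verbose run should print torch.Size([1, 64, 23, 23]) after block 1, torch.Size([1, 192, 11, 11]) after block 2, torch.Size([1, 480, 5, 5]) after block 3, torch.Size([1, 832, 2, 2]) after block 4, and torch.Size([1, 1024, 1, 1]) after block 5, with a final output of torch.Size([1, 10]).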

Transfer Learning with a Pretrained GoogLeNet

PyTorch is a machine learning library, and GoogLeNet is a deep convolutional neural network architecture. Applying transfer learning to GoogLeNet with PyTorch is not complicated. First, load a pretrained GoogLeNet model; PyTorch provides a convenient way to do so:

```
import torch
import torchvision.models as models

# Load the pretrained model
googlenet = models.googlenet(pretrained=True)
```

Next, adapt the network to the specific task. Suppose we want to classify samples into a fixed set of categories:

```
# Set requires_grad = False to freeze the pre-trained parameters
for param in googlenet.parameters():
    param.requires_grad = False

# Replace the final fully-connected layer
num_classes = 10
googlenet.fc = torch.nn.Linear(googlenet.fc.in_features, num_classes)
```

The code above freezes the pretrained parameters and replaces the final fully connected layer, with a new num_classes parameter specifying the desired number of categories.

Next, define the optimizer and the loss function. This example uses stochastic gradient descent (SGD) and the cross-entropy loss:

```
# Define the optimizer and loss function
optimizer = torch.optim.SGD(googlenet.fc.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss()
```

With everything in place, training can begin:

```
# Train the model
for epoch in range(num_epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        # Forward pass
        output = googlenet(data)
        # Compute the loss
        loss = criterion(output, target)
        # Backward pass and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Once training is done, evaluate the model:

```
# Evaluate the model
with torch.no_grad():
    total_correct = 0
    for data, target in test_loader:
        output = googlenet(data)
        pred = output.argmax(dim=1, keepdim=True)
        total_correct += pred.eq(target.view_as(pred)).sum().item()

    accuracy = 100. * total_correct / len(test_loader.dataset)
    print(f'Test accuracy: {accuracy:.2f}%')
```

This example uses PyTorch to apply transfer learning to GoogLeNet. It is only a simple example, but it illustrates how powerful PyTorch and transfer learning are together.
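
The loops above refer to num_epochs, train_loader, and test_loader without defining them. Here is a minimal sketch of those missing pieces, assuming CIFAR-10 as the dataset with an illustrative batch size and epoch count (none of these choices come from the original walkthrough):

```
import torch
import torchvision
import torchvision.transforms as transforms

# Assumed dataset: CIFAR-10, resized to the 224x224 input the pretrained
# GoogLeNet expects and normalized with the standard ImageNet statistics.
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = torchvision.datasets.CIFAR10(root='./data', train=True,
                                         download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(root='./data', train=False,
                                        download=True, transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=64, shuffle=False)

num_epochs = 5  # illustrative value, not tuned
```

It is also worth calling googlenet.train() before the training loop and googlenet.eval() before the evaluation loop, since the pretrained GoogLeNet contains dropout and batch norm layers whose behavior differs between the two modes.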