Implementing Image Classification with a Vision Transformer

Vision Transformer (ViT) is an image classification model built on the Transformer architecture. Unlike a conventional convolutional neural network (CNN), ViT splits an image into small patches and feeds them into a Transformer as a sequence of tokens. For example, a 224×224 image cut into 16×16 patches becomes a sequence of (224/16)² = 196 patch tokens. The steps below implement a Vision Transformer for image classification in PyTorch.

1. Install the Necessary Libraries

First, make sure the necessary libraries are installed:

```bash
pip install torch torchvision
```

Note: pick the PyTorch build that matches your installed CUDA version.
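For example, assuming a CUDA 12.1 toolkit, the CUDA-enabled wheels can be selected via PyTorch's package index (check the install selector on pytorch.org for the exact command matching your platform):

```bash
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```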

2. Import the Libraries

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
```

3. Define the Vision Transformer Model

```python
class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        self.img_size = img_size
        self.patch_size = patch_size
        self.n_patches = (img_size // patch_size) ** 2

        # A strided convolution splits the image into non-overlapping patches
        # and projects each patch to embed_dim in a single operation.
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x)       # (B, embed_dim, img_size/patch_size, img_size/patch_size)
        x = x.flatten(2)       # (B, embed_dim, n_patches)
        x = x.transpose(1, 2)  # (B, n_patches, embed_dim)
        return x
```
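As a quick sanity check, a single 224×224 RGB image should produce 196 patch tokens of dimension 768:

```python
patch_embed = PatchEmbedding()
dummy = torch.randn(1, 3, 224, 224)  # one fake RGB image
print(patch_embed(dummy).shape)      # torch.Size([1, 196, 768])
```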

```python
class VisionTransformer(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_channels=3, n_classes=1000, embed_dim=768, depth=12, n_heads=12, mlp_ratio=4., dropout=0.1):
        super().__init__()

        self.patch_embed = PatchEmbedding(img_size, patch_size, in_channels, embed_dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.patch_embed.n_patches + 1, embed_dim))
        self.dropout = nn.Dropout(dropout)

        # batch_first=True makes the encoder accept (B, seq_len, embed_dim) input;
        # GELU activation and pre-norm match the original ViT design.
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=embed_dim, nhead=n_heads,
                                       dim_feedforward=int(embed_dim * mlp_ratio),
                                       dropout=dropout, activation='gelu',
                                       batch_first=True, norm_first=True)
            for _ in range(depth)
        ])

        self.norm = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, x):
        B = x.shape[0]
        x = self.patch_embed(x)

        # Prepend the learnable [CLS] token and add positional embeddings.
        cls_tokens = self.cls_token.expand(B, -1, -1)
        x = torch.cat((cls_tokens, x), dim=1)
        x = x + self.pos_embed
        x = self.dropout(x)

        for block in self.blocks:
            x = block(x)

        x = self.norm(x)
        cls_token_final = x[:, 0]  # classify from the [CLS] token only
        x = self.head(cls_token_final)

        return x
```
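A forward pass on random input verifies that the model wires together and returns one logit per class (10 classes for CIFAR-10):

```python
vit = VisionTransformer(n_classes=10)
logits = vit(torch.randn(2, 3, 224, 224))  # batch of 2 fake images
print(logits.shape)                        # torch.Size([2, 10])
```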

4. Preprocess and Load the Data

```python
# CIFAR-10 images are 32x32; upsample to 224x224 to match the ViT input size.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225])
])

train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)

train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
```
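As a quick check, one batch from the training loader should have the expected shapes:

```python
images, labels = next(iter(train_loader))
print(images.shape, labels.shape)  # torch.Size([64, 3, 224, 224]) torch.Size([64])
```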

5. Train the Model

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = VisionTransformer(img_size=224, patch_size=16, in_channels=3, n_classes=10,
                          embed_dim=768, depth=12, n_heads=12, mlp_ratio=4., dropout=0.1).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
```
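Optionally, a learning-rate schedule can help when training a ViT from scratch; a minimal sketch using PyTorch's cosine annealing scheduler (call `scheduler.step()` once per epoch at the end of the training loop below):

```python
from torch.optim.lr_scheduler import CosineAnnealingLR

# Anneals the learning rate from 0.001 toward zero over 10 epochs.
scheduler = CosineAnnealingLR(optimizer, T_max=10)
```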

```python
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    for i, (inputs, labels) in enumerate(train_loader):
        inputs, labels = inputs.to(device), labels.to(device)

        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if i % 100 == 99:  # print the average loss every 100 batches
            print(f'Epoch [{epoch + 1}/{num_epochs}], Step [{i + 1}/{len(train_loader)}], Loss: {running_loss / 100:.4f}')
            running_loss = 0.0

print('Finished Training')
```

6. Test the Model

```python
model.eval()
correct = 0
total = 0
with torch.no_grad():
    for inputs, labels in test_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)
        _, predicted = torch.max(outputs, 1)  # index of the highest logit per sample
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the model on the test images: {100 * correct / total:.2f}%')
```

7. Save the Model

```python
torch.save(model.state_dict(), 'vit_cifar10.pth')
```

8. Load the Model

```python
model = VisionTransformer(img_size=224, patch_size=16, in_channels=3, n_classes=10,
                          embed_dim=768, depth=12, n_heads=12, mlp_ratio=4., dropout=0.1).to(device)
model.load_state_dict(torch.load('vit_cifar10.pth', map_location=device))
model.eval()  # switch to evaluation mode before inference
```
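To run inference on a single test image (class names follow the standard CIFAR-10 ordering):

```python
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']

image, label = test_dataset[0]                     # already transformed to a tensor
with torch.no_grad():
    logits = model(image.unsqueeze(0).to(device))  # add the batch dimension
pred = logits.argmax(dim=1).item()
print(f'Predicted: {classes[pred]}, actual: {classes[label]}')
```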

The above shows how to implement a Vision Transformer for image classification in PyTorch. In practice, hyperparameters such as `embed_dim`, `depth`, and `n_heads` can be tuned to suit the task and dataset.

### Binary Classification of Medical Images with a Vision Transformer (ViT)

#### Data Preprocessing

For medical image datasets, making sure all images have the same size is essential. Since ViT operates on 2D images, 3D medical volumes need special handling: either convert the 3D voxels into multiple 2D slices, or pass the whole 3D volume to a modified ViT architecture such as VIT-V-Net[^2].

```python
from PIL import Image
import torch
from torchvision.transforms import Compose, Resize, ToTensor

def preprocess_image(image_path):
    transform = Compose([
        Resize((224, 224)),  # resize to the resolution expected by ViT
        ToTensor()           # convert the PIL image to a tensor scaled to [0., 1.]
    ])
    image = Image.open(image_path).convert('RGB')
    tensor = transform(image)
    return tensor.unsqueeze(0)  # add the batch dimension
```

#### Loading a Pretrained ViT Model

Initializing ViT from existing pretrained weights speeds up convergence and can improve the final model's performance. Using the Hugging Face Transformers library as an example (newer versions expose the same preprocessing functionality as `ViTImageProcessor`):

```python
from transformers import ViTFeatureExtractor, ViTForImageClassification

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224-in21k',
                                                  num_labels=2)

# Set the model to evaluation mode (if only running inference)
model.eval()
```

#### Adapting the Classification Head

Passing `num_labels=2` above already gives the model a two-output head. If the head needs to be replaced manually (for instance, on a checkpoint fine-tuned for a different label count), remove the original fully connected layer and swap in a new linear layer with two output nodes, one per class. Note that `CrossEntropyLoss` applies log-softmax internally, so the head should emit raw logits with no explicit activation:

```python
from torch.nn import Linear

num_features = model.classifier.in_features
model.classifier = Linear(num_features, 2)  # two raw logits, one per class
```

#### Training and Validation

After defining the loss function and optimizer, iterate until the stopping condition is met, periodically saving the best version for later deployment:

```python
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

num_epochs = 3  # fine-tuning a pretrained ViT usually needs only a few epochs
model.train()
for epoch in range(num_epochs):
    running_loss = 0.0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)['logits']  # HF models return a dict-like output
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f'Epoch {epoch}, Loss: {running_loss / len(train_loader)}')
    # In practice, track validation accuracy here and save only the best checkpoint.
    torch.save(model.state_dict(), 'best_model.pth')
```
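Putting the pieces together, single-image inference might look like this (a minimal sketch; `scan.png` is a hypothetical file path, and the class index maps to whichever two labels your dataset defines):

```python
model.eval()
tensor = preprocess_image('scan.png')  # hypothetical input file
with torch.no_grad():
    logits = model(tensor)['logits']
probs = torch.softmax(logits, dim=-1)
pred = probs.argmax(dim=-1).item()
print(f'Predicted class {pred} with probability {probs[0, pred]:.3f}')
```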