Training Your Own SETR and VIT Models

Preface

  I have recently been preparing a paper and needed to reproduce an earlier model and record complete results, but I could not find a complete codebase for training SETR in the PyTorch framework: some repositories lacked a dataloader, some lacked a train.py, and some contained only the network definition.
  So I integrated the SETR and VIT architectures into one repository and added a dataloader along with training, prediction, and per-class evaluation of IoU, recall, and other metrics. The code structure is shown below (work in progress, continuously updated):
   Code download: click here
SETR-pytorch
├── img
├── logs
├── model_data
├── nets
│   ├── IntmdSequential.py
│   ├── model_training.py
│   ├── SETR.py
│   └── Transformer.py
├── utils
│   ├── __init__.py
│   ├── callbacks.py
│   ├── dataloader.py
│   ├── utils.py
│   ├── utils_fit.py
│   └── utils_metrics.py
├── VOCdevkit
│   └── VOC2007
│       ├── ImageSets
│       │   └── Segmentation
│       │       ├── README.md
│       │       ├── train.txt
│       │       └── val.txt
│       ├── JPEGImages
│       └── SegmentationClass
├── requirements.txt
└── train.py

Training

Required Environment

See requirements.txt:

conda activate yourenv
pip install -r requirements.txt
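After installing, a quick sanity check (a minimal sketch) confirms that PyTorch is installed and can see your GPU:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA GPU is usable

If this prints False, either fix the CUDA installation or set Cuda = False in train.py to train on the CPU.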

Training Steps

  1. Prepare your dataset and place it under the VOCdevkit directory, using the structure below (a small script for generating the split files is sketched after this list):
  • VOCdevkit
    • VOC2007
      • ImageSets
        • Segmentation
          • train.txt
          • val.txt
      • JPEGImages
        • image.jpg
      • SegmentationClass
        • label.png
  2. Configuration notes:
    • Set num_classes to the number of classes in your dataset + 1.
    • Set optimizer_type to the optimizer you want; the options are sgd and adam.
    • To train the SETR model, set backbone = 'setr' and model = SETR(num_classes=num_classes, backbone=backbone, pretrained=pretrained).
    • To train the VIT model, set backbone = 'vit' and model = VIT(num_classes=num_classes, backbone=backbone, pretrained=pretrained).
    • Run train.py.
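If you have not generated the split files yet: train.txt and val.txt hold one image ID per line (the standard VOC convention). The sketch below (a hypothetical helper, not part of the repository) builds a random 90/10 split from the annotation file names:

import os
import random

random.seed(11)
seg_dir = os.path.join('VOCdevkit', 'VOC2007', 'SegmentationClass')
ids = [name[:-4] for name in os.listdir(seg_dir) if name.endswith('.png')]
random.shuffle(ids)
split = int(0.9 * len(ids))  # 90% train / 10% val, an arbitrary choice

out_dir = os.path.join('VOCdevkit', 'VOC2007', 'ImageSets', 'Segmentation')
os.makedirs(out_dir, exist_ok=True)
with open(os.path.join(out_dir, 'train.txt'), 'w') as f:
    f.write('\n'.join(ids[:split]))
with open(os.path.join(out_dir, 'val.txt'), 'w') as f:
    f.write('\n'.join(ids[split:]))

The full train.py used for training is listed below: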
# -*- coding: UTF-8 -*-
"""
===================================================================================
@author : Leoda
@Date   : 2024/07/17 11:28:43
@Project -> : SETR-pytorch$
@name: train.py
==================================================================================
"""
import os
import datetime
import torch.optim as optim
import numpy as np
import torch
import torch.distributed as dist
import torch.backends.cudnn as cudnn
from torch.utils.data import DataLoader
from functools import partial

from nets.SETR import SETR, VIT
from nets.model_training import weights_init, get_lr_scheduler, set_optimizer_lr
from utils.callbacks import LossHistory, EvalCallback
from utils.dataloader import SegmentationDataset, seg_dataset_collate
from utils.utils import seed_everything, download_weights, show_config, worker_init_fn
from utils.utils_fit import fit_one_epoch

if __name__ == "__main__":
    #set
    Cuda = True
    seed = 11
    distributed = False
    sync_bn = False
    fp16 = False
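    # num_classes = number of classes in your dataset + 1 (the extra class is the background)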
    num_classes = 11

    #model
    backbone = 'setr' # 'setr' or 'vit' (lowercase here)
    pretrained = False
    model_path = ""
    input_shape = [512, 512]

    #train
    Init_epoch = 0
    Freeze_epoch = 0
    Freeze_batch_size = 6
    UnFreeze_epoch = 100
    UnFreeze_batch_size = 3
    Freeze_train = False
    VOCdevkit_path = 'VOCdevkit'

    #parameters
    Init_lr = 7e-3
    Min_lr = Init_lr * 0.01
    optimizer_type = "sgd"
    momentum = 0.9
    weight_decay = 1e-4
    lr_decay_type = 'cos'

    #save
    save_period = 5
    save_dir = 'logs'
    eval_flag = True
    eval_period = 10

    #loss functions
    dice_loss = False
    focal_loss = False
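    # cls_weights balances the loss contribution of each class; np.ones gives every class equal weight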
    cls_weights = np.ones([num_classes], np.float32)
    num_workers = 4
    seed_everything(seed)
    ngpus_per_node  = torch.cuda.device_count()

    if distributed:
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        rank = int(os.environ["RANK"])
        device = torch.device("cuda", local_rank)
        if local_rank == 0:
            print(f"[{os.getpid()}] (rank = {rank}, "
                  f"local_rank = "f"{local_rank}) training...")
            print("Gpu Device Count : ", ngpus_per_node)
    else:
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        local_rank = 0
        rank = 0

    if pretrained:
        if distributed:
            if local_rank == 0:
                download_weights(backbone)
            dist.barrier()
        else:
            download_weights(backbone)
    #================================================================================
    #   The model can be either SETR or VIT (note the uppercase class names here):
    #   model = SETR(num_classes=num_classes, backbone=backbone, pretrained=pretrained)
    #   model = VIT(num_classes=num_classes, backbone=backbone, pretrained=pretrained)
    #================================================================================
    model = SETR(num_classes=num_classes, backbone=backbone, pretrained=pretrained)
    if not pretrained:
        weights_init(model)
    if model_path != '':
        if local_rank == 0:
            print('Load weights {}.'.format(model_path))
        model_dict = model.state_dict()
        pretrained_dict = torch.load(model_path, map_location=device)
        load_key, no_load_key, temp_dict = [], [], {}
        for k, v in pretrained_dict.items():
            if k in model_dict.keys() and np.shape(model_dict[k]) == np.shape(v):
                temp_dict[k] = v
                load_key.append(k)
            else:
                no_load_key.append(k)
        model_dict.update(temp_dict)
        model.load_state_dict(model_dict)
        if local_rank == 0:
            print("\nSuccessful Load Key:", str(load_key)[:500], "……\nSuccessful Load Key Num:", len(load_key))
            print("\nFail To Load Key:", str(no_load_key)[:500], "……\nFail To Load Key num:", len(no_load_key))
            print("\n\033[1;33;44m温馨提示,head部分没有载入是正常现象,Backbone部分没有载入是错误的。\033[0m")
    if local_rank == 0:
        time_str        = datetime.datetime.strftime(datetime.datetime.now(),'%Y_%m_%d_%H_%M_%S')
        log_dir         = os.path.join(save_dir, "loss_" + str(time_str))
        loss_history    = LossHistory(log_dir, model, input_shape=input_shape)
    else:
        loss_history    = None

    if fp16:
        from torch.cuda.amp import GradScaler
        scaler = GradScaler()
    else:
        scaler = None

    model_train     = model.train()
    if sync_bn and ngpus_per_node > 1 and distributed:
        model_train = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model_train)
    elif sync_bn:
        print("Sync_bn is not support in one gpu or not distributed.")
    if Cuda:
        if distributed:
            model_train = model_train.cuda(local_rank)
            model_train = torch.nn.parallel.DistributedDataParallel(model_train, device_ids=[local_rank], find_unused_parameters=True)
        else:
            model_train = torch.nn.DataParallel(model_train)
            cudnn.benchmark = True
            model_train = model_train.cuda()
    #dataset
    with open(os.path.join(VOCdevkit_path, "VOC2007/ImageSets/Segmentation/train.txt"),"r") as f:
        train_lines = f.readlines()
    with open(os.path.join(VOCdevkit_path, "VOC2007/ImageSets/Segmentation/val.txt"),"r") as f:
        val_lines = f.readlines()
    num_train   = len(train_lines)
    num_val     = len(val_lines)

    if local_rank == 0:
        show_config(
            num_classes = num_classes, backbone = backbone, model_path = model_path, input_shape = input_shape,
            Init_Epoch = Init_epoch, Freeze_Epoch = Freeze_epoch, UnFreeze_Epoch = UnFreeze_epoch, Freeze_batch_size = Freeze_batch_size,
            Unfreeze_batch_size = UnFreeze_batch_size, Freeze_Train = Freeze_train, Init_lr = Init_lr,
            Min_lr = Min_lr, optimizer_type = optimizer_type, momentum = momentum, lr_decay_type = lr_decay_type,
            save_period = save_period, save_dir = save_dir, num_workers = num_workers, num_train = num_train, num_val = num_val
        )
        wanted_step = 1.5e4 if optimizer_type == "sgd" else 0.5e4
        total_step  = num_train // UnFreeze_batch_size * UnFreeze_epoch
        if total_step <= wanted_step:
            if num_train // UnFreeze_batch_size == 0:
                raise ValueError('The dataset is too small to train on; please expand the dataset.')
            wanted_epoch = wanted_step // (num_train // UnFreeze_batch_size) + 1
            print("\n\033[1;33;44m[Warning] 使用%s优化器时,建议将训练总步长设置到%d以上。\033[0m"%(optimizer_type, wanted_step))
            print("\033[1;33;44m[Warning] 本次运行的总训练数据量为%d,Unfreeze_batch_size为%d,共训练%d个Epoch,计算出总训练步长为%d。\033[0m"%(num_train, UnFreeze_batch_size, UnFreeze_epoch, total_step))
            print("\033[1;33;44m[Warning] 由于总训练步长为%d,小于建议总步长%d,建议设置总世代为%d。\033[0m"%(total_step, wanted_step, wanted_epoch))

    if True:
        UnFreeze_flag = False
        if Freeze_train:
            for param in model.encoder_2d.parameters():
            # for param in model.backbone.parameters():
                param.requires_grad = False
        batch_size = Freeze_batch_size if Freeze_train else UnFreeze_batch_size

        nbs = 16
        lr_limit_max = 5e-4 if optimizer_type == 'adam' else 1e-1
        lr_limit_min = 3e-4 if optimizer_type == 'adam' else 5e-4
        if backbone in ["setr", "vit"]:
            lr_limit_max = 1e-4 if optimizer_type == 'adam' else 1e-1
            lr_limit_min = 1e-4 if optimizer_type == 'adam' else 5e-4
        else:
            raise ValueError("The backbone must be setr or vit")
        Init_lr_fit = min(max(batch_size / nbs * Init_lr, lr_limit_min), lr_limit_max)
        Min_lr_fit = min(max(batch_size / nbs * Min_lr, lr_limit_min * 1e-2), lr_limit_max * 1e-2)

        optimizer = {
            'adam': optim.Adam(model.parameters(), Init_lr_fit, betas=(momentum, 0.999), weight_decay=weight_decay),
            'sgd': optim.SGD(model.parameters(), Init_lr_fit, momentum=momentum, nesterov=True, weight_decay=weight_decay)
        }[optimizer_type]

        lr_scheduler_func = get_lr_scheduler(lr_decay_type, Init_lr_fit, Min_lr_fit, UnFreeze_epoch)

        epoch_step = num_train // batch_size
        epoch_step_val = num_val // batch_size

        if epoch_step == 0 or epoch_step_val == 0:
            raise ValueError("数据集过小,无法继续进行训练,请扩充数据集。")

        train_dataset = SegmentationDataset(train_lines, input_shape, num_classes, True, VOCdevkit_path)
        val_dataset = SegmentationDataset(val_lines, input_shape, num_classes, False, VOCdevkit_path)

        if distributed:
            train_sampler   = torch.utils.data.distributed.DistributedSampler(train_dataset, shuffle=True,)
            val_sampler     = torch.utils.data.distributed.DistributedSampler(val_dataset, shuffle=False,)
            batch_size      = batch_size // ngpus_per_node
            shuffle         = False
        else:
            train_sampler   = None
            val_sampler     = None
            shuffle         = True

        gen             = DataLoader(train_dataset, shuffle = shuffle, batch_size = batch_size, num_workers = num_workers, pin_memory=True,
                                    drop_last = True, collate_fn = seg_dataset_collate, sampler=train_sampler,
                                    worker_init_fn=partial(worker_init_fn, rank=rank, seed=seed))
        gen_val         = DataLoader(val_dataset  , shuffle = shuffle, batch_size = batch_size, num_workers = num_workers, pin_memory=True,
                                    drop_last = True, collate_fn = seg_dataset_collate, sampler=val_sampler,
                                    worker_init_fn=partial(worker_init_fn, rank=rank, seed=seed))
        # ----------------------#
        #   Record the eval curve
        # ----------------------#
        if local_rank == 0:
            eval_callback = EvalCallback(model, input_shape, num_classes, val_lines, VOCdevkit_path, log_dir, Cuda,
                                         eval_flag=eval_flag, period=eval_period)
        else:
            eval_callback = None

        # ---------------------------------------#
        #   Start training the model
        # ---------------------------------------#
        for epoch in range(Init_epoch, UnFreeze_epoch):
            # ---------------------------------------#
            #   If part of the model was frozen,
            #   unfreeze it and update the training setup
            # ---------------------------------------#
            if epoch >= Freeze_epoch and not UnFreeze_flag and Freeze_train:
                batch_size = UnFreeze_batch_size
                # -------------------------------------------------------------------#
                #   Adapt the learning rate to the current batch_size
                # -------------------------------------------------------------------#
                nbs = 16
                lr_limit_max = 5e-4 if optimizer_type == 'adam' else 1e-1
                lr_limit_min = 3e-4 if optimizer_type == 'adam' else 5e-4
                Init_lr_fit = min(max(batch_size / nbs * Init_lr, lr_limit_min), lr_limit_max)
                Min_lr_fit = min(max(batch_size / nbs * Min_lr, lr_limit_min * 1e-2), lr_limit_max * 1e-2)
                # ---------------------------------------#
                #   Get the learning-rate decay function
                # ---------------------------------------#
                lr_scheduler_func = get_lr_scheduler(lr_decay_type, Init_lr_fit, Min_lr_fit, UnFreeze_epoch)

                for param in model.backbone.parameters():
                    param.requires_grad = True

                epoch_step = num_train // batch_size
                epoch_step_val = num_val // batch_size

                if epoch_step == 0 or epoch_step_val == 0:
                    raise ValueError("数据集过小,无法继续进行训练,请扩充数据集。")

                gen = DataLoader(train_dataset, shuffle=shuffle, batch_size=batch_size, num_workers=num_workers,
                                 pin_memory=True,
                                 drop_last=True, collate_fn=seg_dataset_collate, sampler=train_sampler,
                                 worker_init_fn=partial(worker_init_fn, rank=rank, seed=seed))
                gen_val = DataLoader(val_dataset, shuffle=shuffle, batch_size=batch_size, num_workers=num_workers,
                                     pin_memory=True,
                                     drop_last=True, collate_fn=seg_dataset_collate, sampler=val_sampler,
                                     worker_init_fn=partial(worker_init_fn, rank=rank, seed=seed))

                UnFreeze_flag = True

            if distributed:
                train_sampler.set_epoch(epoch)

            set_optimizer_lr(optimizer, lr_scheduler_func, epoch)

            fit_one_epoch(model_train, model, loss_history, eval_callback, optimizer, epoch, epoch_step, epoch_step_val,
                          gen, gen_val, UnFreeze_epoch, Cuda,
                          dice_loss, focal_loss, cls_weights, num_classes, fp16, scaler, save_period, save_dir,
                          local_rank)

            if distributed:
                dist.barrier()

        if local_rank == 0:
            loss_history.writer.close()
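For single-GPU (or CPU) training, run the script directly:

python train.py

If you set distributed = True, the script reads LOCAL_RANK and RANK from the environment, so launch it with torchrun instead (a sketch, assuming a single node with 2 GPUs):

torchrun --nproc_per_node=2 train.py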

Prediction

After training finishes, the trained best_weights.pth will be saved under the logs directory. …… This part is not completely finished yet and is still being updated.
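A standalone prediction script is not finished yet. Until it lands, the sketch below shows one way to load the saved weights and run a forward pass; the weight path, the test image path, and the preprocessing are assumptions (check utils/dataloader.py for the exact pipeline used during training):

import numpy as np
import torch
from PIL import Image

from nets.SETR import SETR

num_classes = 11  # must match the value used during training
model = SETR(num_classes=num_classes, backbone='setr', pretrained=False)
# Assumes the checkpoint stores a plain state_dict saved under logs/
model.load_state_dict(torch.load('logs/best_weights.pth', map_location='cpu'))
model.eval()

# Resize to the training input_shape and scale to [0, 1] (assumed preprocessing)
image = Image.open('img/test.jpg').convert('RGB').resize((512, 512))
x = torch.from_numpy(np.array(image, np.float32) / 255.0).permute(2, 0, 1).unsqueeze(0)

with torch.no_grad():
    logits = model(x)                       # expected shape: (1, num_classes, H, W)
    mask = logits.argmax(dim=1).squeeze(0)  # per-pixel class indices
print(mask.shape)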
