【YOLO】Auto-Labeling Custom Data with YOLOv8 (for VOC-Format Datasets)

This article shows how to use a YOLOv8 model to auto-label a VOC-format dataset: given the model weights, an image directory, and an output directory for annotations, the script generates .xml annotation files automatically, greatly reducing manual labeling effort. The results still need a manual review, but the time savings are substantial. Support for auto-labeling in COCO format is planned for a future post.


Auto-Labeling Custom Data with YOLOv8

Introduction

This post uses a YOLOv8 detection model to auto-label a custom dataset in VOC format, i.e., the .xml files used by the labelImg annotation tool.

Implementing Auto-Labeling for a VOC-Format Dataset

For training a YOLOv8 model, see my earlier post:
【YOLO】YOLOv8实操:环境配置/自定义数据集准备/模型训练/预测
Once you have a trained custom model, running the code below will auto-label your dataset.
Only the following three parameters need to be modified:

weight_path = "/media/ll/L/llr/model/yolov8/weights/best.pt"  # model weights path
imgdir = r'/media/ll/L/llr/DATASET/subwayDatasets/bjdt/images'  # image directory
xmldir = r'/media/ll/L/llr/DATASET/ZED_DATA/GZG/bjdt_daytime/xml'  # output directory for .xml annotations

The complete code is as follows:

"""
Fuction:使用预训练模型权重对图像集进行识别后自动标注
Author: Alian
Create_Date:2023.03.30
Finishe_Date:2023.03.30
"""
import os
import glob
from xml.etree import ElementTree as ET
from ultralytics import YOLO  # YOLOv8


# Create a first-level <object> branch for one detection
def create_object(root, xyxy, names, cls):  # args: XML root, box corners (xmin, ymin, xmax, ymax), class-name dict, class id
    # Create the first-level branch <object>
    _object = ET.SubElement(root, 'object')
    # Create the second-level branches
    name = ET.SubElement(_object, 'name')
    name.text = str(names[int(cls)])
    pose = ET.SubElement(_object, 'pose')
    pose.text = 'Unspecified'
    truncated = ET.SubElement(_object, 'truncated')
    truncated.text = '0'
    difficult = ET.SubElement(_object, 'difficult')
    difficult.text = '0'
    # Create the <bndbox> branch
    bndbox = ET.SubElement(_object, 'bndbox')
    xmin = ET.SubElement(bndbox, 'xmin')
    xmin.text = '%s' % int(xyxy[0])
    ymin = ET.SubElement(bndbox, 'ymin')
    ymin.text = '%s' % int(xyxy[1])
    xmax = ET.SubElement(bndbox, 'xmax')
    xmax.text = '%s' % int(xyxy[2])
    ymax = ET.SubElement(bndbox, 'ymax')
    ymax.text = '%s' % int(xyxy[3])


# Build the XML tree for one image
def create_tree(image_path, h, w):
    # Create the root element <annotation>
    annotation = ET.Element('annotation')
    # First-level branch: <folder>
    folder = ET.SubElement(annotation, 'folder')
    folder.text = os.path.dirname(image_path)

    # First-level branch: <filename>
    filename = ET.SubElement(annotation, 'filename')
    filename.text = os.path.basename(image_path)

    # First-level branch: <path>
    path = ET.SubElement(annotation, 'path')
    path.text = image_path

    # First-level branch: <source>, with a second-level <database>
    source = ET.SubElement(annotation, 'source')
    database = ET.SubElement(source, 'database')
    database.text = 'Unknown'

    # First-level branch: <size>, with the image width, height, and depth
    size = ET.SubElement(annotation, 'size')
    width = ET.SubElement(size, 'width')
    width.text = str(w)
    height = ET.SubElement(size, 'height')
    height.text = str(h)
    depth = ET.SubElement(size, 'depth')
    depth.text = '3'

    # First-level branch: <segmented>
    segmented = ET.SubElement(annotation, 'segmented')
    segmented.text = '0'

    return annotation


def pretty_xml(element, indent, newline, level=0):  # element: an Element; indent: indentation string; newline: line-break string
    if len(element):  # does this element have children?
        if (element.text is None) or element.text.isspace():  # element has no text content
            element.text = newline + indent * (level + 1)
        else:
            element.text = newline + indent * (level + 1) + element.text.strip() + newline + indent * (level + 1)
    temp = list(element)  # children of this element
    for subelement in temp:
        if temp.index(subelement) < (len(temp) - 1):  # not the last child: the next line starts a sibling, so keep the same indent
            subelement.tail = newline + indent * (level + 1)
        else:  # last child: the next line closes the parent, so indent one level less
            subelement.tail = newline + indent * level
        pretty_xml(subelement, indent, newline, level=level + 1)  # recurse into children


def Auto_label(weight, imgdir, xmldir):
    # Load the trained model
    model = YOLO(weight)
    os.makedirs(xmldir, exist_ok=True)  # make sure the output directory exists
    img_list = glob.glob('%s/*.*' % imgdir)
    for img_path in img_list:
        results = model(img_path, show=False, save=False)[0]  # predict on one image
        # Build the XML tree; orig_shape is (height, width)
        annotation = create_tree(img_path, results.orig_shape[0], results.orig_shape[1])
        det = results.boxes
        names = results.names
        cls = det.cls
        for i in range(len(det)):
            create_object(annotation, det.xyxy[i], names, cls[i])
        # Pretty-print and write the tree to an .xml file
        tree = ET.ElementTree(annotation)
        root = tree.getroot()
        pretty_xml(root, '\t', '\n')
        xml_name = os.path.splitext(os.path.basename(img_path))[0] + '.xml'  # works for any image extension, not just .jpg
        tree.write(os.path.join(xmldir, xml_name), encoding='utf-8')




if __name__ == '__main__':
    weight_path = "/media/ll/L/llr/model/yolov8/weights/best.pt"  # model weights path
    imgdir = r'/media/ll/L/llr/DATASET/subwayDatasets/bjdt/images'  # image directory
    xmldir = r'/media/ll/L/llr/DATASET/ZED_DATA/GZG/bjdt_daytime/xml'  # output directory for .xml annotations
    Auto_label(weight_path, imgdir, xmldir)  # run auto-labeling
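
For reference, a file produced by this script looks roughly like the one below. The class name and coordinates are illustrative placeholders; the <name> entries come from whatever classes your model was trained on:

<annotation>
	<folder>/media/ll/L/llr/DATASET/subwayDatasets/bjdt/images</folder>
	<filename>000001.jpg</filename>
	<path>/media/ll/L/llr/DATASET/subwayDatasets/bjdt/images/000001.jpg</path>
	<source>
		<database>Unknown</database>
	</source>
	<size>
		<width>1280</width>
		<height>720</height>
		<depth>3</depth>
	</size>
	<segmented>0</segmented>
	<object>
		<name>person</name>
		<pose>Unspecified</pose>
		<truncated>0</truncated>
		<difficult>0</difficult>
		<bndbox>
			<xmin>100</xmin>
			<ymin>200</ymin>
			<xmax>300</xmax>
			<ymax>400</ymax>
		</bndbox>
	</object>
</annotation>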

And that's it: the YOLOv8 detection model now labels the dataset automatically. The results are still worth a manual review (e.g., spot-checking a few files in labelImg), but this already saves a great deal of annotation time.
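
Before the manual pass, a quick tally of the generated files can catch obvious problems, such as images where the model found nothing or a class that dominates unexpectedly. A minimal sketch, reusing the xmldir path from above:

import glob
import os
from collections import Counter
from xml.etree import ElementTree as ET

xmldir = r'/media/ll/L/llr/DATASET/ZED_DATA/GZG/bjdt_daytime/xml'  # same output directory as above
counts = Counter()
empty_files = []
for xml_path in glob.glob(os.path.join(xmldir, '*.xml')):
    root = ET.parse(xml_path).getroot()
    objects = root.findall('object')
    if not objects:
        empty_files.append(os.path.basename(xml_path))  # images where the model detected nothing
    for obj in objects:
        counts[obj.find('name').text] += 1  # tally boxes per class name
print('boxes per class:', dict(counts))
print('files with no detections:', empty_files)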

A follow-up post will cover auto-labeling for the COCO data format, i.e., .json files and the labelme annotation tool.
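
As a small preview of that direction, here is a minimal, untested sketch of how the same detections could be written to a labelme-style .json instead of VOC .xml. The field layout follows labelme's rectangle shapes as I understand them; treat the exact keys and version string as assumptions to verify against your labelme install:

import json
import os

def save_labelme_json(img_path, h, w, boxes, names, cls_ids, out_path):
    # labelme stores each rectangle as a pair of corner points
    shapes = []
    for box, c in zip(boxes, cls_ids):
        shapes.append({
            'label': str(names[int(c)]),
            'points': [[float(box[0]), float(box[1])], [float(box[2]), float(box[3])]],
            'group_id': None,
            'shape_type': 'rectangle',
            'flags': {},
        })
    data = {
        'version': '5.0.1',   # hypothetical labelme version string; adjust to your install
        'flags': {},
        'shapes': shapes,
        'imagePath': os.path.basename(img_path),
        'imageData': None,    # leave None so labelme loads the image from imagePath
        'imageHeight': h,
        'imageWidth': w,
    }
    with open(out_path, 'w', encoding='utf-8') as f:
        json.dump(data, f, indent=2, ensure_ascii=False)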
