[Object Detection][Image Segmentation] Training Mask R-CNN on your own dataset to recognize Saitama-sensei

This post walks through training Mask R-CNN on a custom dataset, using pictures of Saitama-sensei as the example. First clone the Mask R-CNN repository, then annotate the images with the VGG Image Annotator. Next, modify the code to fit the custom dataset and add a callback that saves the weights. After training, test on images: the model detects Saitama fairly well, but can misidentify him in certain scenes.


# Introduction

  1. No lengthy explanations; this assumes some programming background and hands-on ability
  2. Environment setup is not covered
  3. This is my first write-up, so please bear with any rough edges

# Clone and inspect the code

  1. clone https://github.com/matterport/Mask_RCNN.git
  2. Find the file samples/balloon/balloon.py
  3. Read the comments
  • The load_balloon method of BalloonDataset shows that the annotation tool used is the VGG Image Annotator:
def load_balloon(self, dataset_dir, subset):
        """Load a subset of the Balloon dataset.
        dataset_dir: Root directory of the dataset.
        subset: Subset to load: train or val
        """
        # Add classes. We have only one class to add.
        self.add_class("balloon", 1, "balloon")

        # Train or validation dataset?
        assert subset in ["train", "val"]
        dataset_dir = os.path.join(dataset_dir, subset)

        # Load annotations
        # VGG Image Annotator (up to version 1.6) saves each image in the form:
        # { 'filename': '28503151_5b5b7ec140_b.jpg',
        #   'regions': {
        #       '0': {
        #           'region_attributes': {},
        #           'shape_attributes': {
        #               'all_points_x': [...],
        #               'all_points_y': [...],
        #               'name': 'polygon'}},
        #       ... more regions ...
        #   },
        #   'size': 100202
        # }
        # We mostly care about the x and y coordinates of each region
        # Note: In VIA 2.0, regions was changed from a dict to a list.
        annotations = json.load(open(os.path.join(dataset_dir, "via_region_data.json")))
        annotations = list(annotations.values())  # don't need the dict keys

        # The VIA tool saves images in the JSON even if they don't have any
        # annotations. Skip unannotated images.
        annotations = [a for a in annotations if a['regions']]

        # Add images
        for a in annotations:
            # Get the x, y coordinates of points of the polygons that make up
            # the outline of each object instance. These are stored in the
            # shape_attributes (see json format above)
            # The if condition is needed to support VIA versions 1.x and 2.x.
            if type(a['regions']) is dict:
                polygons = [r['shape_attributes'] for r in a['regions'].values()]
            else:
                polygons = [r['shape_attributes'] for r in a['regions']] 

            # load_mask() needs the image size to convert polygons to masks.
            # Unfortunately, VIA doesn't include it in JSON, so we must read
            # the image. This is only manageable since the dataset is tiny.
            image_path = os.path.join(dataset_dir, a['filename'])
            image = skimage.io.imread(image_path)
            height, width = image.shape[:2]

            self.add_image(
                "balloon",
                image_id=a['filename'],  # use file name as a unique image id
                path=image_path,
                width=width, height=height,
                polygons=polygons)
  1. Here I chose this version of the annotator
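To make the annotation format concrete, here is a small standalone sketch of the polygon extraction that load_balloon performs, using a hypothetical one-image VIA entry (filename and coordinates are made up):

```python
# A minimal VIA 1.x style annotation entry (hypothetical values),
# matching the format documented in load_balloon above.
annotation = {
    "filename": "saitama_001.jpg",
    "size": 12345,
    "regions": {
        "0": {
            "region_attributes": {"source": "balloon", "class_name": "埼玉"},
            "shape_attributes": {
                "name": "polygon",
                "all_points_x": [10, 50, 50, 10],
                "all_points_y": [10, 10, 60, 60],
            },
        }
    },
}

def extract_polygons(a):
    """Pull shape_attributes out of one VIA entry (handles 1.x dict and 2.x list)."""
    regions = a["regions"]
    if isinstance(regions, dict):  # VIA 1.x: regions is a dict
        return [r["shape_attributes"] for r in regions.values()]
    return [r["shape_attributes"] for r in regions]  # VIA 2.x: a list

polygons = extract_polygons(annotation)
print(polygons[0]["name"])          # polygon
print(polygons[0]["all_points_x"])  # [10, 50, 50, 10]
```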

# Prepare the training image set

  1. Grab some pictures of Saitama-sensei from the web

  2. Split them into a training directory train and a validation directory val

  3. In the VGG Image Annotator, use the Polygon region shape to outline the regions to detect

  4. Under Region Attributes, add source (balloon) and class_name (埼玉, i.e. Saitama)

  5. Handle the other images the same way; I won't repeat the details.

  6. Export the JSON and place it in the image directory, as follows
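Before training, it can help to sanity-check the exported file. A small sketch, assuming the export was saved as via_region_data.json in each image directory, as the balloon sample expects:

```python
import json
import os

def check_via_export(dataset_dir):
    """Count how many images in a VIA export actually carry regions.
    The VIA tool saves unannotated images too, and load_balloon skips them."""
    path = os.path.join(dataset_dir, "via_region_data.json")
    with open(path) as f:
        annotations = list(json.load(f).values())
    annotated = [a for a in annotations if a["regions"]]
    print(f"{len(annotated)} of {len(annotations)} images have regions")
    return annotated
```

Running it on both the train and val directories before starting training catches an empty or misplaced export early.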

# Next, modify the code

  1. Here we only change self.add_class("balloon", 1, "balloon") in the load_balloon method; after the change it looks like this
def load_balloon(self, dataset_dir, subset):
        """Load a subset of the Balloon dataset.
        dataset_dir: Root directory of the dataset.
        subset: Subset to load: train or val
        """
        # Add classes. We have only one class to add.
        self.add_class("balloon", 1, "埼玉")  # old: self.add_class("balloon", 1, "balloon")

        # Train or validation dataset?
        assert subset in ["train", "val"]
        dataset_dir = os.path.join(dataset_dir, subset)

  1. Modify the train method
  • Add a custom_callbacks entry to save the trained weights. The weights are written automatically after every epoch, so you can stop training at any time and test the current result.
def train(model):
    """Train the model."""
    # Training dataset.
    dataset_train = BalloonDataset()
    dataset_train.load_balloon(args.dataset, "train")
    dataset_train.prepare()

    # Validation dataset
    dataset_val = BalloonDataset()
    dataset_val.load_balloon(args.dataset, "val")
    dataset_val.prepare()

    # *** This training schedule is an example. Update to your needs ***
    # Since we're using a very small dataset, and starting from
    # COCO trained weights, we don't need to train too long. Also,
    # no need to train all layers, just the heads should do it.
    print("Training network heads")
    model.train(dataset_train, dataset_val,
                learning_rate=config.LEARNING_RATE,
                epochs=100,
                layers='heads',
                # weights_path comes from the script's __main__ section;
                # keras.callbacks requires `import keras` at the top of balloon.py
                custom_callbacks=[keras.callbacks.ModelCheckpoint(
                    weights_path, verbose=0, save_weights_only=True)])
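The ModelCheckpoint above writes to one fixed weights_path, so each epoch overwrites the last. Keras's ModelCheckpoint also accepts str.format-style placeholders in its filepath (it fills in the 1-based epoch number itself), which keeps a separate file per epoch. The filename pattern below is my own choice, shown with plain Python formatting:

```python
# How a per-epoch checkpoint path expands: ModelCheckpoint formats its
# filepath with the epoch number, e.g.
# keras.callbacks.ModelCheckpoint("mask_rcnn_saitama_{epoch:04d}.h5", ...)
filepath = "mask_rcnn_saitama_{epoch:04d}.h5"  # hypothetical name
for epoch in (1, 2, 100):
    print(filepath.format(epoch=epoch))
# mask_rcnn_saitama_0001.h5
# mask_rcnn_saitama_0002.h5
# mask_rcnn_saitama_0100.h5
```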
  1. Manually specify the weights file path
  • Add code to create the weights file when it does not exist, as follows
    # Select weights file to load
    if args.weights.lower() == "coco":
        weights_path = COCO_WEIGHTS_PATH
        # Download weights file
        if not os.path.exists(weights_path):
            utils.download_trained_weights(weights_path)
    elif args.weights.lower() == "last":
        # Find last trained weights
        weights_path = model.find_last()
    elif args.weights.lower() == "imagenet":
        # Start from ImageNet trained weights
        weights_path = model.get_imagenet_weights()
    else:
        # Added: create the weights file when it does not exist
        if not os.path.isfile(args.weights):
            model.keras_model.save_weights(args.weights)

        weights_path = args.weights
  1. Modify the detect_and_color_splash method
  • Add a method box_splash that draws bounding boxes
  • In detect_and_color_splash below, comment out splash = color_splash(image, r['masks']) and use splash = box_splash(image, r) instead
def box_splash(image, r):
    rois = r['rois']  # [y1, x1, y2, x2] boxes from model.detect(); truncated in the original
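The post is cut off at this point, so here is a minimal sketch of what box_splash could look like, assuming r['rois'] holds [y1, x1, y2, x2] boxes as model.detect() returns (the outline color and thickness are my own choices):

```python
import numpy as np

def box_splash(image, r):
    """Draw the detected bounding boxes onto a copy of the image.
    Minimal sketch: r['rois'] is an array of [y1, x1, y2, x2] boxes."""
    out = image.copy()
    color = np.array([255, 0, 0], dtype=out.dtype)  # red outline, 2px thick
    for y1, x1, y2, x2 in r['rois']:
        out[y1:y2, x1:x1 + 2] = color   # left edge
        out[y1:y2, x2 - 2:x2] = color   # right edge
        out[y1:y1 + 2, x1:x2] = color   # top edge
        out[y2 - 2:y2, x1:x2] = color   # bottom edge
    return out
```

Pure NumPy slicing keeps it dependency-free; for thicker lines, labels, or scores, cv2.rectangle and cv2.putText would be the usual choice.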