Faster R-CNN roidb.py

This article walks through how the roidb is prepared in Faster R-CNN: how prepare_roidb enriches each roidb entry with image size and other derived quantities, and how the bounding-box regression targets are computed. It also looks at what the roidb contains at different training stages and the role it plays in training.


1. self.roidb returns a list of dicts, one dict per image (flipped images included).
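For concreteness, here is a minimal sketch of what one such roidb entry looks like before prepare_roidb runs. The values are fabricated, modeled on the VOC outputs printed later in this article:

```python
import numpy as np
import scipy.sparse

num_classes = 21  # 20 VOC classes + background

# One roidb entry: gt boxes, gt classes, a one-hot gt_overlaps CSR matrix,
# segmentation areas, and the flag set by append_flipped_images().
entry = {
    'boxes': np.array([[262, 210, 323, 338]], dtype=np.uint16),
    'gt_classes': np.array([9], dtype=np.int32),
    'gt_overlaps': scipy.sparse.csr_matrix(
        ([1.0], ([0], [9])), shape=(1, num_classes), dtype=np.float32),
    'seg_areas': np.array([7998.0], dtype=np.float32),
    'flipped': False,
}
roidb = [entry]  # one dict per image; flipped copies are appended at the end
```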


2. The prepare_roidb function

def prepare_roidb(imdb):
    """Enrich the imdb's roidb by adding some derived quantities that
    are useful for training. This function precomputes the maximum
    overlap, taken over ground-truth boxes, between each ROI and
    each ground-truth box. The class with maximum overlap is also
    recorded.
    In short, this function prepares the imdb's roidb by adding a few
    derived keys, such as 'image', to each per-image dict: the imdb is the
    image database, and the roidb is the database of Regions Of Interest.
    """
    sizes = [PIL.Image.open(imdb.image_path_at(i)).size
             for i in xrange(imdb.num_images)]
    # In the 'Stage 2 Fast R-CNN, init from stage 2 RPN R-CNN model' stage,
    # the roidb is produced by rpn_roidb(), so each image's boxes include not
    # only the gt boxes but also the boxes loaded from the rpn_file.
    roidb = imdb.roidb 

    # print '~~~~~~~~~~~~~~imdb.image_index: {}'.format(imdb.image_index)
    # print '~~~~~~~~~~~~~~imdb.image_index: {} {}'.format(imdb.image_index[0], imdb.image_index[5011]) # 000005 000005: image_index is the original list concatenated with itself, confirming self._image_index = self._image_index * 2 in append_flipped_images()
    # print '~~~~~~roidb[0]: {}'.format(roidb[0])
    # print '~~~~~~len(imdb.image_index): {}'.format(len(imdb.image_index)) #10022
    for i in xrange(len(imdb.image_index)):
        roidb[i]['image'] = imdb.image_path_at(i)
        roidb[i]['width'] = sizes[i][0]
        roidb[i]['height'] = sizes[i][1]
        # need gt_overlaps as a dense array for argmax
        gt_overlaps = roidb[i]['gt_overlaps'].toarray()
        # max overlap with gt over classes (columns)
        max_overlaps = gt_overlaps.max(axis=1)
        # gt class that had the max overlap; argmax along axis=1 returns the
        # column index of the maximum, which here doubles as the class id
        max_classes = gt_overlaps.argmax(axis=1)
        roidb[i]['max_classes'] = max_classes
        roidb[i]['max_overlaps'] = max_overlaps
        # sanity checks
        # max overlap of 0 => class should be zero (background)
        # sanity-check the background class
        zero_inds = np.where(max_overlaps == 0)[0]
        assert all(max_classes[zero_inds] == 0)
        # max overlap > 0 => class should not be zero (must be a fg class)
        nonzero_inds = np.where(max_overlaps > 0)[0]
        assert all(max_classes[nonzero_inds] != 0)
        # print '~~~~~~~~~~~~~~~~zero_inds: {}'.format(zero_inds)
    # print '~~~~~~~~roidb[0]["gt_overlaps"]: {}'.format(roidb[0]['gt_overlaps'].toarray())
    # print '~~~~~~~~roidb[1]["gt_overlaps"]: {}'.format(roidb[1]['gt_overlaps'].toarray())
    # print '~~~~~~~~roidb[2]["gt_overlaps"]: {}'.format(roidb[2]['gt_overlaps'].toarray())
    # print '~~~~~~~~roidb[3]["gt_overlaps"]: {}'.format(roidb[3]['gt_overlaps'].toarray())
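The body of the per-image loop above can be illustrated on a toy dense gt_overlaps matrix (fabricated values) that exercises both sanity checks, including a pure background ROI:

```python
import numpy as np

# A 3x21 overlaps matrix: row 0 is a gt box of class 9, row 1 an ROI
# overlapping that gt box, and row 2 an all-zero background ROI.
gt_overlaps = np.zeros((3, 21), dtype=np.float32)
gt_overlaps[0, 9] = 1.0
gt_overlaps[1, 9] = 0.7

max_overlaps = gt_overlaps.max(axis=1)    # per-ROI max over classes
max_classes = gt_overlaps.argmax(axis=1)  # the class achieving that max

# The same sanity checks as in prepare_roidb:
zero_inds = np.where(max_overlaps == 0)[0]
assert all(max_classes[zero_inds] == 0)      # zero overlap => background
nonzero_inds = np.where(max_overlaps > 0)[0]
assert all(max_classes[nonzero_inds] != 0)   # positive overlap => fg class
```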

Debug prints such as print '~~~~roidb[3]["gt_overlaps"]: {}'.format(roidb[3]['gt_overlaps'].toarray()) produce the following output during stage 1:

{'boxes': array([[262, 210, 323, 338],
                [164, 263, 252, 371],
                [240, 193, 294, 298]], dtype=uint16), 'gt_classes': array([9, 9, 9], dtype=int32), 'gt_overlaps': <3x21 sparse matrix of type '<type 'numpy.float32'>'
    with 3 stored elements in Compressed Sparse Row format>, 'seg_areas': array([ 7998.,  9701.,  5830.], dtype=float32), 'flipped': False}
## 'gt_overlaps': <3x21 sparse matrix of type '<type 'numpy.float32'>' with 3 stored elements in Compressed Sparse Row format> — the 21 columns correspond to the 21 classes (20 VOC classes plus background),
# and 'with 3 stored elements in Compressed Sparse Row format' means the sparse matrix stores only 3 non-zero values


#roidb[0]
{'gt_classes': array([9, 9, 9], dtype=int32), 'max_classes': array([9, 9, 9]), 'image': '/home/sam/WORKSPACE/py-faster-rcnn/data/VOCdevkit2007/VOC2007/JPEGImages/000005.jpg', 'flipped': False, 'width': 500, 
'boxes': array([[262, 210, 323, 338],
                [164, 263, 252, 371],
                [240, 193, 294, 298]], dtype=uint16), 'max_overlaps': array([ 1.,  1.,  1.], dtype=float32), 'height': 375, 'seg_areas': array([ 7998.,  9701.,  5830.], dtype=float32), 'gt_overlaps': <3x21 sparse matrix of type '<type 'numpy.float32'>'
    with 3 stored elements in Compressed Sparse Row format>}


#roidb[1]
{'gt_classes': array([7], dtype=int32), 'max_classes': array([7]), 'image': '/home/sam/WORKSPACE/py-faster-rcnn/data/VOCdevkit2007/VOC2007/JPEGImages/000007.jpg', 'flipped': False, 'width': 500, 
    'boxes': array([[140,  49, 499, 329]], dtype=uint16), 'max_overlaps': array([ 1.], dtype=float32), 'height': 333, 'seg_areas': array([ 101160.], dtype=float32), 'gt_overlaps': <1x21 sparse matrix of type '<type 'numpy.float32'>'
    with 1 stored elements in Compressed Sparse Row format>}


#roidb[2]
{'gt_classes': array([13, 15, 15, 15], dtype=int32), 'max_classes': array([13, 15, 15, 15]), 'image': '/home/sam/WORKSPACE/py-faster-rcnn/data/VOCdevkit2007/VOC2007/JPEGImages/000009.jpg', 'flipped': False, 'width': 500, 
'boxes': array([[ 68, 171, 269, 329],
                [149, 140, 228, 283],
                [284, 200, 326, 330],
                [257, 197, 296, 328]], dtype=uint16), 'max_overlaps': array([ 1.,  1.,  1.,  1.], dtype=float32), 'height': 375, 'seg_areas': array([ 32118.,  11520.,   5633.,   5280.], dtype=float32), 'gt_overlaps': <4x21 sparse matrix of type '<type 'numpy.float32'>'
    with 4 stored elements in Compressed Sparse Row format>}


#roidb[3]
{'gt_classes': array([7], dtype=int32), 'max_classes': array([7]), 'image': '/home/sam/WORKSPACE/py-faster-rcnn/data/VOCdevkit2007/VOC2007/JPEGImages/000012.jpg', 'flipped': False, 'width': 500, 
'boxes': array([[155,  96, 350, 269]], dtype=uint16), 'max_overlaps': array([ 1.], dtype=float32), 'height': 333, 'seg_areas': array([ 34104.], dtype=float32), 'gt_overlaps': <1x21 sparse matrix of type '<type 'numpy.float32'>'
    with 1 stored elements in Compressed Sparse Row format>}

#roidb[0]["gt_overlaps"]: 
[[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.  0.  0.  0. 0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.  0.  0.  0. 0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.  0.  0.  0. 0.  0.  0.]]

#roidb[1]["gt_overlaps"]: 
[[ 0.  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0. 0.  0.  0.]]

#roidb[2]["gt_overlaps"]: 
[[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.]]

#roidb[3]["gt_overlaps"]: 
[[ 0.  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]]
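The one-hot pattern printed above is easy to reproduce: in a gt-only roidb, each gt box stores a single 1.0 in the column of its class. A small scipy.sparse sketch rebuilding roidb[2]'s 4x21 matrix:

```python
import numpy as np
import scipy.sparse

# roidb[2] has four gt boxes of classes 13, 15, 15, 15: one stored
# element per row, value 1.0 in the column of the box's class.
gt_classes = np.array([13, 15, 15, 15])
rows = np.arange(len(gt_classes))
overlaps = scipy.sparse.csr_matrix(
    (np.ones(len(gt_classes), dtype=np.float32), (rows, gt_classes)),
    shape=(4, 21))

print(overlaps.nnz)         # 4 stored elements, as in the CSR repr above
dense = overlaps.toarray()  # matches the printed #roidb[2] output
```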

def add_bbox_regression_targets(roidb): adds the bbox regression targets (inserting a 'bbox_targets' key into each roidb dict) and handles the mean/std normalization of bbox_targets according to the cfg file.

def add_bbox_regression_targets(roidb):
    """Add information needed to train bounding-box regressors."""
    assert len(roidb) > 0
    assert 'max_classes' in roidb[0], 'Did you call prepare_roidb first?'

    num_images = len(roidb)
    # Infer number of classes from the number of columns in gt_overlaps
    num_classes = roidb[0]['gt_overlaps'].shape[1]
    for im_i in xrange(num_images):
        rois = roidb[im_i]['boxes']

        # In the 'Stage 2 Fast R-CNN, init from stage 2 RPN R-CNN model' stage,
        # the roidb is produced by rpn_roidb(), so each image's boxes include not
        # only the gt boxes but also the boxes loaded from the rpn_file.
        max_overlaps = roidb[im_i]['max_overlaps']
        max_classes = roidb[im_i]['max_classes']
        roidb[im_i]['bbox_targets'] = \
                _compute_targets(rois, max_overlaps, max_classes)

    if cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED:
        # Use fixed / precomputed "means" and "stds" instead of empirical values
        means = np.tile(
                np.array(cfg.TRAIN.BBOX_NORMALIZE_MEANS), (num_classes, 1))
        stds = np.tile(
                np.array(cfg.TRAIN.BBOX_NORMALIZE_STDS), (num_classes, 1))
    else:
        # Compute values needed for means and stds
        # var(x) = E(x^2) - E(x)^2
        # Compute a mean and std per class, since class-specific regressors are trained
        class_counts = np.zeros((num_classes, 1)) + cfg.EPS
        sums = np.zeros((num_classes, 4))
        squared_sums = np.zeros((num_classes, 4))
        for im_i in xrange(num_images):
            targets = roidb[im_i]['bbox_targets']
            for cls in xrange(1, num_classes):
                cls_inds = np.where(targets[:, 0] == cls)[0]
                if cls_inds.size > 0:
                    class_counts[cls] += cls_inds.size
                    sums[cls, :] += targets[cls_inds, 1:].sum(axis=0)
                    squared_sums[cls, :] += \
                            (targets[cls_inds, 1:] ** 2).sum(axis=0)

        means = sums / class_counts
        stds = np.sqrt(squared_sums / class_counts - means ** 2)

    print 'bbox target means:'
    print means # means of the 4 offsets of bbox_targets, one row per class (21 rows)
    print means[1:, :].mean(axis=0) # ignore bg class
    print 'bbox target stdevs:'
    print stds # stds of the 4 offsets of bbox_targets, one row per class (21 rows)
    print stds[1:, :].mean(axis=0) # ignore bg class

    # Normalize targets
    if cfg.TRAIN.BBOX_NORMALIZE_TARGETS:
        print "Normalizing targets"
        for im_i in xrange(num_images):
            targets = roidb[im_i]['bbox_targets']
            for cls in xrange(1, num_classes):
                cls_inds = np.where(targets[:, 0] == cls)[0]
                roidb[im_i]['bbox_targets'][cls_inds, 1:] -= means[cls, :]
                roidb[im_i]['bbox_targets'][cls_inds, 1:] /= stds[cls, :]
    else:
        print "NOT normalizing targets"

    # These values will be needed for making predictions
    # (the predicts will need to be unnormalized and uncentered)
    return means.ravel(), stds.ravel()
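The streaming per-class statistics in the else-branch rely on the identity var(x) = E(x^2) - E(x)^2. A small self-contained check on fabricated targets (rows of [class, dx, dy, dw, dh]) confirms that the streaming result matches NumPy's direct mean/std:

```python
import numpy as np

num_classes = 3
EPS = 1e-14  # stands in for cfg.EPS; avoids division by zero for empty classes

# Fabricated bbox targets: two class-1 rows and one class-2 row.
targets = np.array([[1,  0.1, 0.2, 0.0, -0.1],
                    [1,  0.3, 0.0, 0.2,  0.1],
                    [2, -0.2, 0.1, 0.1,  0.0]])

class_counts = np.zeros((num_classes, 1)) + EPS
sums = np.zeros((num_classes, 4))
squared_sums = np.zeros((num_classes, 4))
for cls in range(1, num_classes):          # skip the background class 0
    cls_inds = np.where(targets[:, 0] == cls)[0]
    if cls_inds.size > 0:
        class_counts[cls] += cls_inds.size
        sums[cls, :] += targets[cls_inds, 1:].sum(axis=0)
        squared_sums[cls, :] += (targets[cls_inds, 1:] ** 2).sum(axis=0)

means = sums / class_counts
stds = np.sqrt(squared_sums / class_counts - means ** 2)

# Streaming statistics agree with NumPy's direct (population) mean/std:
assert np.allclose(means[1], targets[:2, 1:].mean(axis=0))
assert np.allclose(stds[1], targets[:2, 1:].std(axis=0))
```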

def _compute_targets(rois, overlaps, labels)
In the roidb, the value stored under the key 'boxes' holds the top-left and bottom-right corner coordinates of each box, in the form (x1, y1, x2, y2).

def _compute_targets(rois, overlaps, labels):
    """Compute bounding-box regression targets for an image."""
    # Indices of ground-truth ROIs
    gt_inds = np.where(overlaps == 1)[0]
    if len(gt_inds) == 0:
        # Bail if the image has no ground-truth ROIs
        return np.zeros((rois.shape[0], 5), dtype=np.float32)
    # Indices of examples for which we try to make predictions
    ex_inds = np.where(overlaps >= cfg.TRAIN.BBOX_THRESH)[0]

    # Get IoU overlap between each ex ROI and gt ROI
    # Call bbox_overlaps from bbox.pyx to compute the IoU overlap between the
    # RPN proposals and the ground-truth boxes; the IoU overlap is the area of
    # the intersection of a proposal and a gt box divided by the area of their union.
    ex_gt_overlaps = bbox_overlaps(
        np.ascontiguousarray(rois[ex_inds, :], dtype=np.float),
        np.ascontiguousarray(rois[gt_inds, :], dtype=np.float))

    # Find which gt ROI each ex ROI has max overlap with:
    # this will be the ex ROI's gt target
    gt_assignment = ex_gt_overlaps.argmax(axis=1)

    # gt_assignment has the same size as ex_inds (usually larger than gt_inds),
    # so gt_rois comes out the same size as ex_rois: every roi in ex_rois is
    # assigned a gt box via NumPy fancy indexing.
    gt_rois = rois[gt_inds[gt_assignment], :]
    ex_rois = rois[ex_inds, :]

    # Call bbox_transform from bbox_transform.py. targets has one row per box in
    # the image; proposals selected as (P, G) pairs carry real regression values,
    # while proposals that fail overlaps >= cfg.TRAIN.BBOX_THRESH keep rows of
    # zeros. The transform itself follows the R-CNN paper.
    targets = np.zeros((rois.shape[0], 5), dtype=np.float32)
    targets[ex_inds, 0] = labels[ex_inds]
    targets[ex_inds, 1:] = bbox_transform(ex_rois, gt_rois)
    return targets
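For reference, here is a simplified re-implementation of the R-CNN parameterization (dx, dy, dw, dh) that bbox_transform computes. The authoritative version lives in lib/fast_rcnn/bbox_transform.py; this sketch mirrors it but is not the repo's own code:

```python
import numpy as np

def bbox_transform(ex_rois, gt_rois):
    # Widths, heights, and centers of the example (proposal) boxes,
    # using the same +1 pixel convention as the rest of py-faster-rcnn.
    ex_widths = ex_rois[:, 2] - ex_rois[:, 0] + 1.0
    ex_heights = ex_rois[:, 3] - ex_rois[:, 1] + 1.0
    ex_ctr_x = ex_rois[:, 0] + 0.5 * ex_widths
    ex_ctr_y = ex_rois[:, 1] + 0.5 * ex_heights

    gt_widths = gt_rois[:, 2] - gt_rois[:, 0] + 1.0
    gt_heights = gt_rois[:, 3] - gt_rois[:, 1] + 1.0
    gt_ctr_x = gt_rois[:, 0] + 0.5 * gt_widths
    gt_ctr_y = gt_rois[:, 1] + 0.5 * gt_heights

    # R-CNN targets: normalized center offsets, log size ratios.
    targets_dx = (gt_ctr_x - ex_ctr_x) / ex_widths
    targets_dy = (gt_ctr_y - ex_ctr_y) / ex_heights
    targets_dw = np.log(gt_widths / ex_widths)
    targets_dh = np.log(gt_heights / ex_heights)
    return np.vstack((targets_dx, targets_dy,
                      targets_dw, targets_dh)).transpose()

# A box regressing onto itself yields all-zero targets:
box = np.array([[155., 96., 350., 269.]])
assert np.allclose(bbox_transform(box, box), 0.0)
```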

bbox_overlaps in bbox.pyx: def bbox_overlaps(np.ndarray[DTYPE_t, ndim=2] boxes, np.ndarray[DTYPE_t, ndim=2] query_boxes)
Returns: overlaps: (N, K) ndarray of overlap between boxes and query_boxes

def bbox_overlaps(
        np.ndarray[DTYPE_t, ndim=2] boxes,
        np.ndarray[DTYPE_t, ndim=2] query_boxes):
    """
    Parameters
    ----------
    boxes: (N, 4) ndarray of float
    query_boxes: (K, 4) ndarray of float
    Returns
    -------
    overlaps: (N, K) ndarray of overlap between boxes and query_boxes
    """
    cdef unsigned int N = boxes.shape[0]
    cdef unsigned int K = query_boxes.shape[0]
    cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)
    cdef DTYPE_t iw, ih, box_area
    cdef DTYPE_t ua
    cdef unsigned int k, n
    for k in range(K):
        box_area = (
            (query_boxes[k, 2] - query_boxes[k, 0] + 1) *
            (query_boxes[k, 3] - query_boxes[k, 1] + 1)
        )
        for n in range(N):
            iw = (
                min(boxes[n, 2], query_boxes[k, 2]) -
                max(boxes[n, 0], query_boxes[k, 0]) + 1
            )
            if iw > 0:
                ih = (
                    min(boxes[n, 3], query_boxes[k, 3]) -
                    max(boxes[n, 1], query_boxes[k, 1]) + 1
                )
                if ih > 0:
                    ua = float(
                        (boxes[n, 2] - boxes[n, 0] + 1) *
                        (boxes[n, 3] - boxes[n, 1] + 1) +
                        box_area - iw * ih
                    )
                    overlaps[n, k] = iw * ih / ua
    return overlaps
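Because the Cython loop is hard to poke at interactively, a vectorized NumPy equivalent (an illustrative re-implementation keeping the same +1 pixel-area convention, not the repo's code) is handy for checking small cases:

```python
import numpy as np

def bbox_overlaps_np(boxes, query_boxes):
    # Broadcast boxes (N,1,4) against query_boxes (1,K,4) to get an
    # (N,K) grid of intersection widths/heights, clipped at zero.
    iw = (np.minimum(boxes[:, None, 2], query_boxes[None, :, 2]) -
          np.maximum(boxes[:, None, 0], query_boxes[None, :, 0]) + 1).clip(min=0)
    ih = (np.minimum(boxes[:, None, 3], query_boxes[None, :, 3]) -
          np.maximum(boxes[:, None, 1], query_boxes[None, :, 1]) + 1).clip(min=0)
    inter = iw * ih
    area_b = ((boxes[:, 2] - boxes[:, 0] + 1) *
              (boxes[:, 3] - boxes[:, 1] + 1))[:, None]
    area_q = ((query_boxes[:, 2] - query_boxes[:, 0] + 1) *
              (query_boxes[:, 3] - query_boxes[:, 1] + 1))[None, :]
    return inter / (area_b + area_q - inter)  # intersection over union

boxes = np.array([[0., 0., 9., 9.]])                       # one 10x10 box
query = np.array([[0., 0., 9., 9.], [5., 5., 14., 14.]])   # identical + shifted
overlaps = bbox_overlaps_np(boxes, query)
# Identical boxes give IoU 1.0; the shifted pair gives 25 / (100+100-25).
assert np.allclose(overlaps, [[1.0, 25.0 / 175.0]])
```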