keras-yolo3 Project, Part 3: The model.py File

A Detailed Look at the Keras YOLOv3 Model Implementation
This article walks through the YOLOv3 model implementation in Keras, focusing on the core functions of model.py: the Darknet convolution wrappers, the network architectures, bounding-box decoding and post-processing, and the loss computation, illustrating how the YOLO model is applied to object detection.

model.py implements the YOLO model with the Keras API and contains 14 main functions:

  • DarknetConv2D: a 2-D convolution layer with Darknet-style default parameters.
  • DarknetConv2D_BN_Leaky: a DarknetConv2D layer followed by BatchNormalization and a LeakyReLU activation.
  • resblock_body: a downsampling convolution followed by a loop of residual blocks, each built from two DarknetConv2D_BN_Leaky layers with different kernel sizes.
  • darknet_body: the Darknet backbone, containing 52 2-D convolution layers.
  • make_last_layers: the final head layers, made of 6 DarknetConv2D_BN_Leaky layers and 1 linear Conv2D layer.
  • yolo_body: the YOLO network architecture.
  • tiny_yolo_body: the Tiny-YOLO network architecture.
  • yolo_head: converts the final feature maps into bounding-box parameters.
  • yolo_correct_boxes: maps predicted boxes back to the original image (corrected boxes).
  • yolo_boxes_and_scores: processes the convolutional layer outputs.
  • yolo_eval: evaluates the YOLO model on the given input and returns the filtered boxes.
  • preprocess_true_boxes: preprocesses true_boxes into the training input format.
  • box_iou: intersection over union between predicted boxes and ground-truth boxes.
  • yolo_loss: the model loss.

The annotated source is given below:

"""YOLO_v3 Model Defined in Keras."""

from functools import wraps

import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.layers import Conv2D, Add, ZeroPadding2D, UpSampling2D, Concatenate, MaxPooling2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.normalization import BatchNormalization
from keras.models import Model
from keras.regularizers import l2

from yolo3.utils import compose


@wraps(Conv2D)
def DarknetConv2D(*args, **kwargs):
    """
    Wrapper to set Darknet parameters for Convolution2D.
    为2维卷积层设置Darknet参数
    
    返回带有Darknet参数的2维卷积层
    """
    darknet_conv_kwargs = {'kernel_regularizer': l2(5e-4)}
    darknet_conv_kwargs['padding'] = 'valid' if kwargs.get('strides')==(2,2) else 'same'
    darknet_conv_kwargs.update(kwargs)
    return Conv2D(*args, **darknet_conv_kwargs)    # 2-D convolution layer with Darknet parameters
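
A quick illustration (not part of model.py) of how this padding default behaves; the 416x416 input size is just an example:

from keras.layers import Input
inp = Input(shape=(416, 416, 3))
same_out = DarknetConv2D(32, (3, 3))(inp)                    # padding='same'  -> (None, 416, 416, 32)
valid_out = DarknetConv2D(32, (3, 3), strides=(2, 2))(inp)   # padding='valid' -> (None, 207, 207, 32)

The 207 rather than 208 is why resblock_body below pads the top and left with ZeroPadding2D(((1,0),(1,0))) before its strided convolution, so that a 416 input downsamples to exactly 208.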

def DarknetConv2D_BN_Leaky(*args, **kwargs):
    """
    Darknet Convolution2D followed by BatchNormalization and LeakyReLU.
    封装3个网络层:
    1.带有Darknet参数的2维卷积层
    2.批量正则化网络层:按批正则化前一个网络层的激活
    3.激活函数层
    """
    no_bias_kwargs = {'use_bias': False}
    no_bias_kwargs.update(kwargs)
    return compose(
        DarknetConv2D(*args, **no_bias_kwargs),    # 2-D convolution layer with Darknet parameters
        BatchNormalization(),    # batch-normalize the previous layer's activations
        LeakyReLU(alpha=0.1))    # activation function

def resblock_body(x, num_filters, num_blocks):
    '''
    A series of resblocks starting with a downsampling Convolution2D.
    Wraps a loop body built from two DarknetConv2D_BN_Leaky layers with different kernel sizes.
    x: input image tensor
    num_filters: dimensionality of the output space (number of filters)
    num_blocks: number of residual blocks (loop iterations)

    The function has 2 steps:
    1. Zero-pad, then apply a strided DarknetConv2D_BN_Leaky, giving an x with num_filters channels;
    2. Residual loop, using the output of step 1 as the initial x:
       apply a 1x1 convolution, feed its result into a 3x3 convolution to obtain y,
       add the loop input x to y, assign the sum back to x, and continue with the next iteration.
    '''
    # Darknet uses left and top padding instead of 'same' mode
    x = ZeroPadding2D(((1,0),(1,0)))(x)    # zero-pad the top and left of the 2-D input; ((1,0),(1,0)) means ((top, bottom), (left, right))
    x = DarknetConv2D_BN_Leaky(num_filters, (3,3), strides=(2,2))(x)   # 3x3 kernel, stride (2,2)
    
    for i in range(num_blocks):
        y = compose(
                DarknetConv2D_BN_Leaky(num_filters//2, (1,1)),    # 1x1 kernel
                DarknetConv2D_BN_Leaky(num_filters, (3,3)))(x)    # 3x3 kernel
        x = Add()([x,y])    # element-wise sum; output shape matches x and y
    return x
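
A shape-tracking sketch (illustrative, not part of model.py), assuming a 416x416 feature map with 32 channels as input:

from keras.layers import Input
x_in = Input(shape=(416, 416, 32))
x_out = resblock_body(x_in, num_filters=64, num_blocks=1)
# ZeroPadding2D               -> (None, 417, 417, 32)
# strided 3x3 DarknetConv2D   -> (None, 208, 208, 64)
# one residual block (1x1 then 3x3 conv, then Add) keeps (None, 208, 208, 64)

Stacking the five resblock_body calls in darknet_body below halves the spatial size five times, so a 416x416 input ends up as a 13x13 feature map with 1024 channels.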

def darknet_body(x):
    '''
    Darknet body having 52 Convolution2D layers.
    The Darknet backbone extracts image features and contains 52 2-D convolution layers: 52 = 1 + 3 + 5 + 17 + 17 + 9.
    Each DarknetConv2D_BN_Leaky contains one 2-D convolution layer, one batch normalization layer, and one activation layer.
    Each resblock_body contains 1 + 2*num_blocks DarknetConv2D_BN_Leaky blocks.
    '''
    x = DarknetConv2D_BN_Leaky(32, (3,3))(x)    #1
    x = resblock_body(x, 64, 1)    #1+2*1=3
    x = resblock_body(x, 128, 2)    #1+2*2=5
    x = resblock_body(x, 256, 8)    #1+2*8=17
    x = resblock_body(x, 512, 8)    #1+2*8=17
    x = resblock_body(x, 1024, 4)    #1+2*4=9
    return x

def make_last_layers(x, num_filters, out_filters):
    '''
    6 Conv2D_BN_Leaky layers followed by a Conv2D_linear layer.
    7 2-D convolution layers in total:
    5 DarknetConv2D_BN_Leaky layers produce x;
    1 DarknetConv2D_BN_Leaky layer followed by 1 DarknetConv2D layer produces y.

    num_filters: number of output channels of the intermediate layers x (the feature count)
    out_filters: number of channels of the final output y
    Returns x, y
    '''
    x = compose(
            DarknetConv2D_BN_Leaky(num_filters, (1,1)),
            DarknetConv2D_BN_Leaky(num_filters*2, (3,3)),
            DarknetConv2D_BN_Leaky(num_filters, (1,1)),
            DarknetConv2D_BN_Leaky(num_filters*2, (3,3)),
            DarknetConv2D_BN_Leaky(num_filters, (1,1)))(x)
    y = compose(
            DarknetConv2D_BN_Leaky(num_filters*2, (3,3)),
            DarknetConv2D(out_filters, (1,1)))(x)
    return x, y


def yolo_body(inputs, num_anchors, num_classes):
    """
    Create YOLO_V3 model CNN body in Keras.
    YOLO_V3卷积神经网络主干,共包含75个卷积层
    
    1.Darknent模型,包含52个卷积层  
    2.Darknent模型的输出作为make_last_layers输入,输出x,y1,包含7个卷积层  
    3.将x进行DarknetConv2D_BN_Leaky,UpSampling2D计算,并将结果与Darknent模型的152层的输出进行拼接,得x,包含1个卷积层  
    4.将输出x作为make_last_layers输入,输出x,y2,包含7个卷积层  
    5.将x进行DarknetConv2D_BN_Leaky,UpSampling2D计算,并将结果与Darknent模型的92层的输出进行拼接,得x,包含1个卷积层  
    6.将输出x作为make_last_layers输入,输出x,y3,包含7个卷积层  
    """
    darknet = Model(inputs, darknet_body(inputs))   #Model(inputs, outputs, name=None)
    x, y1 = make_last_layers(darknet.output, 512, num_anchors*(num_classes+5))

    x = compose(
            DarknetConv2D_BN_Leaky(256, (1,1)),
            UpSampling2D(2))(x)
    x = Concatenate()([x,darknet.layers[152].output])    # concatenate with the Darknet layer-152 feature map
    x, y2 = make_last_layers(x, 256, num_anchors*(num_classes+5))

    x = compose(
            DarknetConv2D_BN_Leaky(128, (1,1)),
            UpSampling2D(2))(x)
    x = Concatenate()([x,darknet.layers[92].output])
    x, y3 = make_last_layers(x, 128, num_anchors*(num_classes+5))

    return Model(inputs, [y1,y2,y3])
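
A minimal usage sketch (the 416x416 input and the 80-class, 3-anchors-per-scale setting are illustrative, not fixed by model.py):

from keras.layers import Input
inputs = Input(shape=(416, 416, 3))
model = yolo_body(inputs, num_anchors=3, num_classes=80)
# model.output is [y1, y2, y3] with shapes
#   (None, 13, 13, 255), (None, 26, 26, 255), (None, 52, 52, 255)
# where 255 = 3 * (80 + 5): 3 anchors, each predicting 4 box offsets, 1 confidence and 80 class scores.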

def tiny_yolo_body(inputs, num_anchors, num_classes):
    '''
    Create Tiny YOLO_v3 model CNN body in keras.
    The lightweight Tiny-YOLOv3 convolutional backbone, containing 13 convolution layers:

    1. 5 DarknetConv2D_BN_Leaky layers (interleaved with max pooling) produce x1.
    2. x1 is passed through 3 more DarknetConv2D_BN_Leaky layers to produce x2.
    3. x2 is passed through a DarknetConv2D_BN_Leaky layer and a DarknetConv2D layer to produce y1.
    4. x2 is passed through a DarknetConv2D_BN_Leaky layer and UpSampling2D; the result replaces x2.
    5. [x2, x1] are concatenated and passed through a DarknetConv2D_BN_Leaky layer and a DarknetConv2D layer to produce y2.
    '''
    x1 = compose(
            DarknetConv2D_BN_Leaky(16, (3,3)),
            MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),    # max-pooling layer
            DarknetConv2D_BN_Leaky(32, (3,3)),
            MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
            DarknetConv2D_BN_Leaky(64, (3,3)),
            MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
            DarknetConv2D_BN_Leaky(128, (3,3)),
            MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
            DarknetConv2D_BN_Leaky(256, (3,3)))(inputs)
    x2 = compose(
            MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
            DarknetConv2D_BN_Leaky(512, (3,3)),
            MaxPooling2D(pool_size=(2,2), strides=(1,1), padding='same'),
            DarknetConv2D_BN_Leaky(1024, (3,3)),
            DarknetConv2D_BN_Leaky(256, (1,1)))(x1)
    y1 = compose(
            DarknetConv2D_BN_Leaky(512, (3,3)),
            DarknetConv2D(num_anchors*(num_classes+5), (1,1)))(x2)

    x2 = compose(
            DarknetConv2D_BN_Leaky(128, (1,1)),
            UpSampling2D(2))(x2)
    y2 = compose(
            Concatenate(),
            DarknetConv2D_BN_Leaky(256, (3,3)),
            DarknetConv2D(num_anchors*(num_classes+5), (1,1)))([x2,x1])

    return Model(inputs, [y1,y2])
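
The same kind of usage sketch for the tiny model (again with an illustrative 416x416 input and 80 classes):

from keras.layers import Input
inputs = Input(shape=(416, 416, 3))
tiny_model = tiny_yolo_body(inputs, num_anchors=3, num_classes=80)
# tiny_model.output is [y1, y2] with shapes (None, 13, 13, 255) and (None, 26, 26, 255)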


def yolo_head(feats, anchors, num_classes, input_shape, calc_loss=False):
    """Convert final layer features to bounding box parameters.
    Converts the final feature maps into bounding-box parameters.

    feats: one output feature map of the YOLOv3 network (y1, y2, or y3)
    anchors: the anchor boxes obtained by clustering, 9 by default (3 per output scale)
    num_classes: number of classes
    input_shape: shape of the model input
    Returns the box center coordinates xy, the box width/height wh, the confidence, and the class probabilities.
    """
    num_anchors = len(anchors)
    # Reshape to batch, height, width, num_anchors, box_params.
    anchors_tensor = K.reshape(K.constant(anchors), [1, 1, 1, num_anchors, 2])

    grid_shape = K.shape(feats)[1:3] # height, width — the 2nd and 3rd elements of the shape of feats
    grid_y = K.tile(K.reshape(K.arange(0, stop=grid_shape[0]), [-1, 1, 1, 1]),
        [1, grid_shape[1], 1, 1])    # tile to shape (height, width, 1, 1)
    grid_x = K.tile(K.reshape(K.arange(0, stop=grid_shape[1]), [1, -1, 1, 1]),
        [grid_shape[0], 1, 1, 1])
    grid = K.concatenate([grid_x, grid_y])    # concatenate into per-cell (x, y) offsets
    grid = K.cast(grid, K.dtype(feats))    # cast to the dtype of feats

    feats = K.reshape(
        feats, [-1, grid_shape[0], grid_shape[1], num_anchors, num_classes + 5])

    # Adjust predictions to each spatial grid point and anchor size.
    # Decode the raw predictions into box parameters.
    box_xy = (K.sigmoid(feats[..., :2]) + grid) / K.cast(grid_shape[::-1], K.dtype(feats))    # box center coordinates, normalized to [0, 1]
    box_wh = K.exp(feats[..., 2:4]) * anchors_tensor / K.cast(input_shape[::-1], K.dtype(feats))    # box width and height, normalized to [0, 1]
    box_confidence = K.sigmoid(feats[..., 4:5])    # objectness confidence
    box_class_probs = K.sigmoid(feats[..., 5:])    # class probabilities

    if calc_loss == True:
        return grid, feats, box_xy, box_wh
    return box_xy, box_wh, box_confidence, box_class_probs
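
The decoding above can be checked by hand. A plain numpy sketch for a single grid cell and a single anchor (all numeric values are made up for illustration):

import numpy as np
t = np.array([0.2, -0.1, 0.3, 0.5])       # raw tx, ty, tw, th predicted at one grid cell
cx, cy = 7, 5                             # grid cell indices
grid_w, grid_h = 13, 13                   # grid size of this output layer
anchor_w, anchor_h = 116, 90              # one anchor, in input-image pixels
input_w, input_h = 416, 416               # model input size

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
bx = (sigmoid(t[0]) + cx) / grid_w        # box center x, normalized to [0, 1]
by = (sigmoid(t[1]) + cy) / grid_h        # box center y, normalized to [0, 1]
bw = np.exp(t[2]) * anchor_w / input_w    # box width, normalized to [0, 1]
bh = np.exp(t[3]) * anchor_h / input_h    # box height, normalized to [0, 1]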


def yolo_correct_boxes(box_xy, box_wh, input_shape, image_shape):
    '''
    Get corrected boxes.
    Maps the predicted boxes from the letterboxed model input back to the coordinate frame of the original image.
    '''
    box_yx = box_xy[..., ::-1]
    box_hw = box_wh[..., ::-1]
    input_shape = K.cast(input_shape, K.dtype(box_yx))    # model input shape
    image_shape = K.cast(image_shape, K.dtype(box_yx))    # original image shape
    new_shape = K.round(image_shape * K.min(input_shape/image_shape))    # image shape after proportional resizing
    offset = (input_shape-new_shape)/2./input_shape
    scale = input_shape/new_shape
    box_yx = (box_yx - offset) * scale
    box_hw *= scale

    box_mins = box_yx - (box_hw / 2.)
    box_maxes = box_yx + (box_hw / 2.)
    boxes =  K.concatenate([
        box_mins[..., 0:1],  # y_min
        box_mins[..., 1:2],  # x_min
        box_maxes[..., 0:1],  # y_max
        box_maxes[..., 1:2]  # x_max
    ])

    # Scale boxes back to original image shape.
    boxes *= K.concatenate([image_shape, image_shape])
    return boxes
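
A worked example of the letterbox correction (assumed sizes: a 416x416 model input and a 480x640 original image, both as (h, w)):

import numpy as np
input_shape = np.array([416., 416.])      # model input (h, w)
image_shape = np.array([480., 640.])      # original image (h, w)
new_shape = np.round(image_shape * np.min(input_shape / image_shape))   # -> [312. 416.]
offset = (input_shape - new_shape) / 2. / input_shape                   # -> [0.125 0.   ]
scale = input_shape / new_shape                                         # -> [1.333 1.   ]
# A box center at y = 0.5 of the padded 416x416 canvas maps back to
# (0.5 - 0.125) * 1.333 = 0.5 of the original image, i.e. row 240 of 480.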


def yolo_boxes_and_scores(feats, anchors, num_classes, input_shape, image_shape):
    '''
    Process Conv layer output.
    Decodes one convolutional output layer into flat lists of boxes and per-class scores.
    '''
    box_xy, box_wh, box_confidence, box_class_probs = yolo_head(feats,
        anchors, num_classes, input_shape)
    boxes = yolo_correct_boxes(box_xy, box_wh, input_shape, image_shape)
    boxes = K.reshape(boxes, [-1, 4])
    box_scores = box_confidence * box_class_probs
    box_scores = K.reshape(box_scores, [-1, num_classes])
    return boxes, box_scores


def yolo_eval(yolo_outputs,
              anchors,
              num_classes,
              image_shape,
              max_boxes=20,
              score_threshold=.6,
              iou_threshold=.5):
    """
    Evaluate YOLO model on given input and return filtered boxes.
    评估YOLO模型
    """
    num_layers = len(yolo_outputs)    #3
    anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]] # default setting
    input_shape = K.shape(yolo_outputs[0])[1:3] * 32
    boxes = []
    box_scores = []
    # Loop over the model's output layers (3 for the full model)
    for l in range(num_layers):
        _boxes, _box_scores = yolo_boxes_and_scores(yolo_outputs[l],
            anchors[anchor_mask[l]], num_classes, input_shape, image_shape)
        boxes.append(_boxes)
        box_scores.append(_box_scores)
    boxes = K.concatenate(boxes, axis=0)
    box_scores = K.concatenate(box_scores, axis=0)

    mask = box_scores >= score_threshold    # score mask
    max_boxes_tensor = K.constant(max_boxes, dtype='int32')
    boxes_ = []
    scores_ = []
    classes_ = []
    # Loop over classes
    for c in range(num_classes):
        # TODO: use keras backend instead of tf.
        class_boxes = tf.boolean_mask(boxes, mask[:, c])    # filter the tensor with the boolean mask
        class_box_scores = tf.boolean_mask(box_scores[:, c], mask[:, c])
        nms_index = tf.image.non_max_suppression(
            class_boxes, class_box_scores, max_boxes_tensor, iou_threshold=iou_threshold)
        class_boxes = K.gather(class_boxes, nms_index)
        class_box_scores = K.gather(class_box_scores, nms_index)
        classes = K.ones_like(class_box_scores, 'int32') * c
        boxes_.append(class_boxes)
        scores_.append(class_box_scores)
        classes_.append(classes)
    boxes_ = K.concatenate(boxes_, axis=0)
    scores_ = K.concatenate(scores_, axis=0)
    classes_ = K.concatenate(classes_, axis=0)

    return boxes_, scores_, classes_
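
A minimal inference-time sketch of how yolo_eval fits together with yolo_body (the anchor values and thresholds below are the common YOLOv3 defaults and are illustrative; in the repo this wiring lives in yolo.py):

import numpy as np
from keras import backend as K
from keras.layers import Input

num_classes = 80
anchors = np.array([[10,13],[16,30],[33,23],[30,61],[62,45],[59,119],
                    [116,90],[156,198],[373,326]], dtype='float32')
inputs = Input(shape=(416, 416, 3))
model = yolo_body(inputs, len(anchors)//3, num_classes)
image_shape = K.placeholder(shape=(2,))   # original image (h, w), fed in at prediction time
boxes, scores, classes = yolo_eval(model.output, anchors, num_classes, image_shape,
                                   max_boxes=20, score_threshold=.6, iou_threshold=.5)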


def preprocess_true_boxes(true_boxes, input_shape, anchors, num_classes):
    '''
    Preprocess true boxes to training input format
    Converts the annotated ground-truth boxes of the training set into training targets (y_true).

    Parameters
    ----------
    true_boxes: array, shape=(m, T, 5)
        Absolute x_min, y_min, x_max, y_max, class_id relative to input_shape.
    input_shape: array-like, hw, multiples of 32
    anchors: array, shape=(N, 2), wh — anchor-box widths and heights
    num_classes: integer

    Returns
    -------
    y_true: list of array, shape like yolo_outputs, xywh are relative values

    '''
    assert (true_boxes[..., 4]<num_classes).all(), 'class id must be less than num_classes'
    num_layers = len(anchors)//3 # default setting
    anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]]

    true_boxes = np.array(true_boxes, dtype='float32')
    input_shape = np.array(input_shape, dtype='int32')
    boxes_xy = (true_boxes[..., 0:2] + true_boxes[..., 2:4]) // 2
    boxes_wh = true_boxes[..., 2:4] - true_boxes[..., 0:2]
    true_boxes[..., 0:2] = boxes_xy/input_shape[::-1]
    true_boxes[..., 2:4] = boxes_wh/input_shape[::-1]

    m = true_boxes.shape[0]
    grid_shapes = [input_shape//{0:32, 1:16, 2:8}[l] for l in range(num_layers)]
    y_true = [np.zeros((m,grid_shapes[l][0],grid_shapes[l][1],len(anchor_mask[l]),5+num_classes),
        dtype='float32') for l in range(num_layers)]

    # Expand dim to apply broadcasting.
    anchors = np.expand_dims(anchors, 0)
    anchor_maxes = anchors / 2.
    anchor_mins = -anchor_maxes
    valid_mask = boxes_wh[..., 0]>0

    for b in range(m):
        # Discard zero rows.
        wh = boxes_wh[b, valid_mask[b]]
        if len(wh)==0: continue
        # Expand dim to apply broadcasting.
        wh = np.expand_dims(wh, -2)
        box_maxes = wh / 2.
        box_mins = -box_maxes

        intersect_mins = np.maximum(box_mins, anchor_mins)
        intersect_maxes = np.minimum(box_maxes, anchor_maxes)
        intersect_wh = np.maximum(intersect_maxes - intersect_mins, 0.)
        intersect_area = intersect_wh[..., 0] * intersect_wh[..., 1]
        box_area = wh[..., 0] * wh[..., 1]
        anchor_area = anchors[..., 0] * anchors[..., 1]
        iou = intersect_area / (box_area + anchor_area - intersect_area)

        # Find best anchor for each true box
        best_anchor = np.argmax(iou, axis=-1)

        for t, n in enumerate(best_anchor):
            for l in range(num_layers):
                if n in anchor_mask[l]:
                    i = np.floor(true_boxes[b,t,0]*grid_shapes[l][1]).astype('int32')
                    j = np.floor(true_boxes[b,t,1]*grid_shapes[l][0]).astype('int32')
                    k = anchor_mask[l].index(n)
                    c = true_boxes[b,t, 4].astype('int32')
                    y_true[l][b, j, i, k, 0:4] = true_boxes[b,t, 0:4]
                    y_true[l][b, j, i, k, 4] = 1
                    y_true[l][b, j, i, k, 5+c] = 1

    return y_true
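
A shape-check sketch for preprocess_true_boxes (one image with a single class-0 box; the anchor values and 416x416 input are illustrative defaults):

import numpy as np
anchors = np.array([[10,13],[16,30],[33,23],[30,61],[62,45],[59,119],
                    [116,90],[156,198],[373,326]], dtype='float32')
true_boxes = np.zeros((1, 20, 5), dtype='float32')   # (m, T, 5): x_min, y_min, x_max, y_max, class_id
true_boxes[0, 0] = [100, 120, 200, 260, 0]           # one box in absolute pixel coordinates
y_true = preprocess_true_boxes(true_boxes, (416, 416), anchors, num_classes=80)
# y_true[0].shape == (1, 13, 13, 3, 85)
# y_true[1].shape == (1, 26, 26, 3, 85)
# y_true[2].shape == (1, 52, 52, 3, 85)
# The box is written into exactly one cell/anchor slot of the layer whose anchor best matches its size.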


def box_iou(b1, b2):
    '''
    Return iou tensor

    Parameters
    ----------
    b1: tensor, shape=(i1,...,iN, 4), xywh
    b2: tensor, shape=(j, 4), xywh

    Returns
    -------
    iou: tensor, shape=(i1,...,iN, j)

    '''

    # Expand dim to apply broadcasting.
    b1 = K.expand_dims(b1, -2)
    b1_xy = b1[..., :2]
    b1_wh = b1[..., 2:4]
    b1_wh_half = b1_wh/2.
    b1_mins = b1_xy - b1_wh_half
    b1_maxes = b1_xy + b1_wh_half

    # Expand dim to apply broadcasting.
    b2 = K.expand_dims(b2, 0)
    b2_xy = b2[..., :2]
    b2_wh = b2[..., 2:4]
    b2_wh_half = b2_wh/2.
    b2_mins = b2_xy - b2_wh_half
    b2_maxes = b2_xy + b2_wh_half

    intersect_mins = K.maximum(b1_mins, b2_mins)
    intersect_maxes = K.minimum(b1_maxes, b2_maxes)
    intersect_wh = K.maximum(intersect_maxes - intersect_mins, 0.)
    intersect_area = intersect_wh[..., 0] * intersect_wh[..., 1]
    b1_area = b1_wh[..., 0] * b1_wh[..., 1]
    b2_area = b2_wh[..., 0] * b2_wh[..., 1]
    iou = intersect_area / (b1_area + b2_area - intersect_area)

    return iou
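
A quick sanity check (illustrative values; K.eval assumes the TF1-era Keras session semantics this repo targets):

import numpy as np
from keras import backend as K
b1 = K.constant(np.array([[0.5, 0.5, 0.4, 0.4]]))    # shape (1, 4), xywh
b2 = K.constant(np.array([[0.5, 0.5, 0.2, 0.2],
                          [0.9, 0.9, 0.2, 0.2]]))    # shape (2, 4), xywh
print(K.eval(box_iou(b1, b2)))    # approximately [[0.25, 0.0]]: contained box vs. non-overlapping box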


def yolo_loss(args, anchors, num_classes, ignore_thresh=.5, print_loss=False):
    '''
    Return yolo_loss tensor

    Parameters
    ----------
    yolo_outputs: list of tensor, the output of yolo_body or tiny_yolo_body
    y_true: list of array, the output of preprocess_true_boxes
    anchors: array, shape=(N, 2), wh
    num_classes: integer
    ignore_thresh: float, the iou threshold whether to ignore object confidence loss

    Returns
    -------
    loss: tensor, shape=(1,)

    '''
    num_layers = len(anchors)//3 # default setting: 3 clustered anchors per output layer, so this count is fixed
    yolo_outputs = args[:num_layers]
    y_true = args[num_layers:]
    anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]]
    input_shape = K.cast(K.shape(yolo_outputs[0])[1:3] * 32, K.dtype(y_true[0]))
    grid_shapes = [K.cast(K.shape(yolo_outputs[l])[1:3], K.dtype(y_true[0])) for l in range(num_layers)]
    loss = 0
    m = K.shape(yolo_outputs[0])[0] # batch size, tensor
    mf = K.cast(m, K.dtype(yolo_outputs[0]))

    for l in range(num_layers):
        object_mask = y_true[l][..., 4:5]
        true_class_probs = y_true[l][..., 5:]

        grid, raw_pred, pred_xy, pred_wh = yolo_head(yolo_outputs[l],
             anchors[anchor_mask[l]], num_classes, input_shape, calc_loss=True)
        pred_box = K.concatenate([pred_xy, pred_wh])

        # Darknet raw box to calculate loss.
        raw_true_xy = y_true[l][..., :2]*grid_shapes[l][::-1] - grid
        raw_true_wh = K.log(y_true[l][..., 2:4] / anchors[anchor_mask[l]] * input_shape[::-1])
        raw_true_wh = K.switch(object_mask, raw_true_wh, K.zeros_like(raw_true_wh)) # avoid log(0)=-inf
        box_loss_scale = 2 - y_true[l][...,2:3]*y_true[l][...,3:4]

        # Find ignore mask, iterate over each of batch.
        ignore_mask = tf.TensorArray(K.dtype(y_true[0]), size=1, dynamic_size=True)
        object_mask_bool = K.cast(object_mask, 'bool')
        def loop_body(b, ignore_mask):
            true_box = tf.boolean_mask(y_true[l][b,...,0:4], object_mask_bool[b,...,0])
            iou = box_iou(pred_box[b], true_box)    # compute IoU between predictions and ground-truth boxes
            best_iou = K.max(iou, axis=-1)
            ignore_mask = ignore_mask.write(b, K.cast(best_iou<ignore_thresh, K.dtype(true_box)))
            return b+1, ignore_mask
        _, ignore_mask = K.control_flow_ops.while_loop(lambda b,*args: b<m, loop_body, [0, ignore_mask])
        ignore_mask = ignore_mask.stack()
        ignore_mask = K.expand_dims(ignore_mask, -1)

        # K.binary_crossentropy is helpful to avoid exp overflow.
        xy_loss = object_mask * box_loss_scale * K.binary_crossentropy(raw_true_xy, raw_pred[...,0:2], from_logits=True)
        wh_loss = object_mask * box_loss_scale * 0.5 * K.square(raw_true_wh-raw_pred[...,2:4])
        confidence_loss = object_mask * K.binary_crossentropy(object_mask, raw_pred[...,4:5], from_logits=True)+ \
            (1-object_mask) * K.binary_crossentropy(object_mask, raw_pred[...,4:5], from_logits=True) * ignore_mask
        class_loss = object_mask * K.binary_crossentropy(true_class_probs, raw_pred[...,5:], from_logits=True)

        xy_loss = K.sum(xy_loss) / mf
        wh_loss = K.sum(wh_loss) / mf
        confidence_loss = K.sum(confidence_loss) / mf
        class_loss = K.sum(class_loss) / mf
        loss += xy_loss + wh_loss + confidence_loss + class_loss
        if print_loss:
            loss = tf.Print(loss, [loss, xy_loss, wh_loss, confidence_loss, class_loss, K.sum(ignore_mask)], message='loss: ')
    return loss
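
A rough sketch of how the repo's train.py wires yolo_loss into a trainable model through a Lambda layer (treat this as an outline, not a drop-in training script; the anchor values are the usual YOLOv3 defaults):

import numpy as np
from keras.layers import Input, Lambda
from keras.models import Model

num_classes = 80
anchors = np.array([[10,13],[16,30],[33,23],[30,61],[62,45],[59,119],
                    [116,90],[156,198],[373,326]], dtype='float32')
h, w = 416, 416

image_input = Input(shape=(h, w, 3))
model_body = yolo_body(image_input, len(anchors)//3, num_classes)
# One y_true input per output scale: 13x13, 26x26 and 52x52 grids.
y_true_inputs = [Input(shape=(h//s, w//s, 3, num_classes+5)) for s in (32, 16, 8)]
model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
                    arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
                    [*model_body.output, *y_true_inputs])
train_model = Model([model_body.input, *y_true_inputs], model_loss)
train_model.compile(optimizer='adam', loss={'yolo_loss': lambda y_true, y_pred: y_pred})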