Title: Feature Enhancement Based on CycleGAN for Nighttime Vehicle Detection
ABSTRACT:
Existing nighttime vehicle detection methods mainly detect vehicles by locating headlights or taillights. However, these features are adversely affected by complex road lighting environments. In this paper, a cascade detection network framework, FteGanOd, is proposed, consisting of a feature translate-enhancement (FTE) module and an object detection (OD) module. First, the FTE module is built on CycleGAN, and multi-scale feature fusion is proposed to enhance vehicle features at night. Nighttime and daytime features are combined by fusing different convolutional layers to produce enhanced feature (EF) maps. Second, the OD module, based on an existing object detection network, is improved by cascading it with the FTE module to detect vehicles on the EF maps. The proposed FteGanOd method recognizes vehicles at night with greater accuracy by improving the contrast between vehicles and the background and by suppressing interference from ambient light. FteGanOd is validated on the Berkeley Deep Drive (BDD) dataset and our private dataset. The experimental results show that the proposed method can effectively enhance vehicle features and improve the accuracy of nighttime vehicle detection.
I. INTRODUCTION
Vehicle detection is an important application in the field of object detection. More accurate vehicle detection systems for day and night conditions will facilitate the development of more reliable Automatic Driving Systems (ADS) and Driver Assistance Systems (DAS) in the future. In nighttime (low-light) conditions, the probability of traffic accidents increases [1] because less visual information about vehicles is available and the lighting environment is complex. (a) Less visual information about vehicles: the contrast between the vehicle and the background is reduced at night, making vehicle features less obvious. (b) Complex lighting environment: interference from various other light sources is confused with vehicle headlights and taillights, which leads to a high rate of false vehicle detections and presents significant challenges for vision-based vehicle detection at night.
Most existing vehicle detection methods use headlights and taillights as the primary nighttime vehicle detection features. Traditional detection methods not based on convolutional neural networks (CNNs) use headlights and taillights to locate vehicles [4]–[11]. Taillights were localized by segmenting the image, and vehicle bounding boxes were predicted by assuming a typical vehicle width [4], [5]. Region proposals were first obtained from paired vehicle taillights, and it was then determined whether these proposals contained vehicles [6], [7]. In [10], a detection-by-tracking method was proposed to detect multiple vehicles by tracking their headlights/taillights. These traditional non-CNN vehicle detection methods have two disadvantages. (1) Vehicle detection is susceptible to errors under the complex lighting conditions of urban areas, including vehicle lights, streetlights, building lights, and light reflected from vehicles, which increases the false positive rate. (2) Vehicle lights are sometimes hidden when the vehicle is occluded or only the side of the vehicle is photographed, which increases the missed detection rate.
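To make the traditional taillight-based pipeline concrete, the following is a minimal, illustrative sketch of a paired-taillight region proposal step; the color thresholds, the pairing heuristics, and the `taillight_proposals` helper are our own assumptions for illustration, not the exact procedures of [4]–[7].

```python
# Illustrative sketch: propose vehicle regions by pairing bright red (taillight) blobs.
import cv2
import numpy as np

def taillight_proposals(bgr_image, min_area=20, aspect_ratio_range=(1.5, 6.0)):
    """Return candidate vehicle boxes (x, y, w, h) built from paired taillight blobs."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red/bright taillight mask (two hue bands because red wraps around hue 0).
    mask = cv2.inRange(hsv, (0, 80, 120), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 120), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

    proposals = []
    for i, (x1, y1, w1, h1) in enumerate(blobs):
        for (x2, y2, w2, h2) in blobs[i + 1:]:
            # Heuristic pairing: blobs at similar height with similar widths.
            if abs(y1 - y2) > max(h1, h2) or not 0.5 < w1 / max(w2, 1) < 2.0:
                continue
            left, right = min(x1, x2), max(x1 + w1, x2 + w2)
            top, bottom = min(y1, y2), max(y1 + h1, y2 + h2)
            width, height = right - left, bottom - top
            # Keep boxes whose width/height ratio looks like a vehicle rear.
            if height and aspect_ratio_range[0] < width / height < aspect_ratio_range[1]:
                proposals.append((left, top, width, height))
    return proposals
```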
Vehicle detection methods based on CNNs have gradually become a research focus in recent years, and several CNN-based nighttime vehicle detection methods have been investigated. Lin et al. [23] proposed AugGAN to translate daytime images into nighttime images for data augmentation; the generated images were then used to train existing detection systems, improving detector performance. However, this approach only augments the data processing for existing nighttime vehicle detection methods. Kuang et al. [1] used a bio-inspired enhancement approach to enhance night images for feature fusion and object classification in 2017. In 2019, they combined traditional features and CNN features to generate regions of interest (ROIs) [2], [3]. The above methods, which combine traditional machine learning and deep learning for object detection, are multi-stage learning frameworks. However, they are not end-to-end learning frameworks, which makes the training process cumbersome.
Some object detection methods based on deep learning (Fast RCNN [29], SSD [33], etc.) can also be used for nighttime vehicle detection. However, these methods are designed for daytime object detection, and using them in nighttime conditions results in poor feature extraction by the network structure and low vehicle detection performance.
In summary, the low-light environment, complex lighting, and the lack of a specialized nighttime detection network structure are the three main challenges in nighttime vehicle detection. A low-light environment increases the rate of missed vehicles because vehicle features are faint; complex lighting leads to a higher false detection rate in more complex traffic scenes; and specialized nighttime detection networks remain incomplete. However, a generative adversarial network (GAN) is a style-transfer network that can translate nighttime images into daytime images: it uses encoder modules to extract features from nighttime images and decoder modules to restore daytime images.
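As a rough illustration of this encoder-decoder structure, the following is a minimal PyTorch sketch of a CycleGAN-style night-to-day generator; the layer widths and the number of residual blocks are illustrative assumptions rather than the exact architecture used in this paper.

```python
# Minimal sketch of a CycleGAN-style encoder -> residual bottleneck -> decoder generator.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Night-to-day generator: the encoder extracts nighttime features,
    the decoder restores a daytime-style image."""
    def __init__(self, n_res=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 7, padding=3), nn.InstanceNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.InstanceNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.InstanceNorm2d(256), nn.ReLU(inplace=True))
        self.bottleneck = nn.Sequential(*[ResBlock(256) for _ in range(n_res)])
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh())

    def forward(self, night_img):
        night_feat = self.encoder(night_img)     # encoded (nighttime) features
        day_feat = self.bottleneck(night_feat)   # translated features
        day_img = self.decoder(day_feat)         # decoded daytime-style image
        return day_img, night_feat, day_feat
```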
Therefore, we propose a novel nighttime vehicle detection framework named FteGanOd (feature translate-enhancement generative adversarial network for object detection) to overcome the above challenges. FteGanOd includes a feature translate-enhancement (FTE) module and an object detection (OD) module, as shown in Fig. 1. (1) The FTE module first uses CycleGAN to translate images from night to day. The multi-scale features from CycleGAN are then used to fuse the encoded (nighttime) features and the decoded (daytime) features into enhanced feature (EF) maps. The encoded features contain important information about nighttime vehicle headlights and taillights, while the decoded features contain daytime features that enhance the background brightness and suppress most interfering light sources. (2) The OD module (an improved YOLO, RCNN, or SSD) extracts abstract vehicle features and detects vehicles on the EF maps.
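The sketch below illustrates the fusion idea behind the EF maps: encoder (nighttime) and decoder (daytime) feature maps at matching scales are concatenated and mixed before being handed to the detection head. The fusion operator (a 1x1 convolution after concatenation), the channel counts, and the module names are assumptions for illustration; the paper's exact multi-scale fusion may differ.

```python
# Illustrative sketch of multi-scale night/day feature fusion into EF maps.
import torch
import torch.nn as nn

class FuseBlock(nn.Module):
    """Fuse one encoder feature map with the decoder feature map at the same scale."""
    def __init__(self, enc_ch, dec_ch, out_ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(enc_ch + dec_ch, out_ch, 1),   # 1x1 conv to mix night/day features
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
    def forward(self, enc_feat, dec_feat):
        return self.fuse(torch.cat([enc_feat, dec_feat], dim=1))

class FTEHead(nn.Module):
    """Produce multi-scale enhanced feature (EF) maps for the OD module."""
    def __init__(self, channels=((64, 64), (128, 128), (256, 256)), out_ch=128):
        super().__init__()
        self.fusers = nn.ModuleList(FuseBlock(e, d, out_ch) for e, d in channels)
    def forward(self, enc_feats, dec_feats):
        # enc_feats / dec_feats: lists of feature maps at matching resolutions.
        return [f(e, d) for f, e, d in zip(self.fusers, enc_feats, dec_feats)]

# Usage sketch: the EF maps replace the raw image features fed to the detector, e.g.
#   ef_maps = FTEHead()(enc_feats, dec_feats)
#   detections = od_module(ef_maps)
```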
The remainder of this paper is organized as follows: Section II introduces nighttime vehicle detection methods and detection networks based on CNNs and GANs. Section III describes our nighttime detection network FteGanOd in detail. Section IV introduces the experimental procedures and discusses the experimental results. Finally, conclusions and directions for future work are presented in Section V.
II. RELATED WORK
A. NIGHTTIME VEHICLE DETECTION
Almost all nighttime vehicle detection algorithms use headlights/taillights as the key information for locating vehicles. Searching for red or bright light regions in night images is the main technique for obtaining region proposals in previous methods, and it has been proven effective in most of the literature.
To obtain vehicle ROIs, the following techniques can be adopted: threshold-based segmentation methods [5], [12], [18], paired vehicle lighting-based methods [6]–[8], [14]–[16], saliency map-based methods [17], [27], and artificially designed feature-based methods [13]. After the ROIs are obtained, it must be further determined whether these candidate regions contain vehicles. X. Dai [18] used the Hough transform to detect the circles of the headlights and further segmented the areas to locate the vehicles.
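As a rough illustration of the threshold-plus-Hough-circle headlight step in the spirit of [18], the following sketch finds bright circular blobs as headlight candidates; the preprocessing choices and parameter values are assumptions, not the settings used in that work.

```python
# Illustrative sketch: bright-region thresholding followed by Hough circle detection.
import cv2
import numpy as np

def headlight_circles(bgr_image):
    """Detect bright circular blobs (headlight candidates) in a night frame."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Keep only strongly illuminated pixels before the circle search.
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    bright = cv2.GaussianBlur(bright, (5, 5), 0)
    circles = cv2.HoughCircles(bright, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=15, minRadius=3, maxRadius=40)
    # Return (x, y, r) triples, or an empty list if nothing circular was found.
    return [] if circles is None else np.round(circles[0]).astype(int)
```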