Real-Time Lane Detection -- A Novel Vision-Based Framework for Real-Time Lane Detection and Tracking

SAE Technical Paper 2019-01-0690, 2019

This paper proposes combining a traditional feature-based method with a CNN to achieve real-time lane detection.

Lane detection based on traditional algorithms relies on strict assumptions, so it works correctly only in a limited set of scenarios, but it is fast.
CNN-based lane detection needs no hand-crafted features: given enough training samples it can detect lanes without strong assumptions, but it is slow.
This work combines the strengths of both: one branch is based on the traditional method and the other on a CNN. The two branches are merged through a coordinating unit, and a Kalman filter based tracking unit then performs lane tracking.

Feature-Based Lane Detection:
1) Median Filtering to remove noise
2) Canny Edge Detection
3) Feature Extraction, using the Progressive Probabilistic Hough Transform (PPHT) to vote out the correct lane lines
A brief note on the PPHT: as soon as the votes for a candidate line reach a threshold, all edge points lying on that line are extracted and assigned to it, so the transform can stop early instead of processing every edge point.
4) Lane Fitting, using a linear model and least squares to fit the lane lines
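The feature-extraction and fitting steps above can be sketched in NumPy. This is a minimal, standard (non-progressive) Hough vote over edge points plus a least-squares line fit; the actual PPHT additionally stops early and removes the points of each accepted line (OpenCV's cv2.HoughLinesP implements it). Function names and parameters here are illustrative, not from the paper.

```python
import numpy as np

def hough_vote(points, n_theta=180, rho_res=1.0):
    """Vote edge points (nonnegative pixel coordinates) into a (rho, theta)
    accumulator and return the dominant line x*cos(theta) + y*sin(theta) = rho.
    This is the plain Hough vote; PPHT adds early stopping and point removal."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = np.hypot(points[:, 0].max(), points[:, 1].max())
    n_rho = int(np.ceil(2.0 * max_rho / rho_res)) + 1
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[idx, np.arange(n_theta)] += 1          # one vote per theta bin
    r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
    return r_idx * rho_res - max_rho, thetas[t_idx]

def fit_lane(points):
    """Step 4: fit y = a*x + b to the lane's edge points by least squares."""
    a, b = np.polyfit(points[:, 0], points[:, 1], 1)
    return a, b
```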


Deep Learning Based Lane Detection
A U-Net-like encoder-decoder network architecture is adopted.
The CNN outputs per-pixel class information for the background plus four lanes.
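A minimal sketch of how such a five-channel output (background + 4 lanes) can be decoded into per-lane pixels, assuming a score tensor of shape (5, H, W); the names and shapes are mine, not the paper's:

```python
import numpy as np

def decode_lane_map(scores):
    """scores: (5, H, W) per-pixel class scores; channel 0 is background,
    channels 1-4 are the four lanes. Returns an (H, W) label map and the
    pixel coordinates belonging to each lane."""
    labels = scores.argmax(axis=0)                     # (H, W), values 0..4
    lanes = [np.argwhere(labels == k) for k in range(1, 5)]
    return labels, lanes
```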
在这里插入图片描述

Lane Fitting for Lane Segments: the CNN's output is fitted with lane lines using random sample consensus (RANSAC).
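A minimal RANSAC sketch for fitting a line to the pixels of one predicted lane, assuming a straight-line y = a*x + b model; the iteration count and inlier tolerance are illustrative:

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=2.0, seed=0):
    """Minimal RANSAC: repeatedly fit a line through two random points,
    count inliers within inlier_tol, then refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue                      # vertical sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit on the consensus set
    a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return a, b, best_inliers
```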

Coordinating Unit and Kalman Filter
Role of the coordinating unit: balance the invocation ratio between the feature-based detection branch and the deep learning based lane detection branch.
After five consecutive detections with the traditional method, the CNN's detection result is used directly; whenever the traditional detection fails, the CNN's result is used instead.
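This scheduling logic might be sketched as follows; the class and method names are hypothetical, and the 1:5 ratio follows the description above:

```python
class CoordinatingUnit:
    """Hypothetical sketch of the coordinating logic: run the fast
    feature-based branch on most frames, consult the CNN branch once every
    cnn_every feature-based detections, and fall back to the CNN whenever
    the feature-based branch fails (returns None)."""

    def __init__(self, feature_branch, cnn_branch, cnn_every=5):
        self.feature_branch = feature_branch
        self.cnn_branch = cnn_branch
        self.cnn_every = cnn_every
        self.count = 0

    def detect(self, frame):
        self.count += 1
        if self.count % (self.cnn_every + 1) == 0:
            return self.cnn_branch(frame)       # periodic CNN refresh
        result = self.feature_branch(frame)
        if result is None:                      # feature branch failed
            return self.cnn_branch(frame)
        return result
```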

Kalman Filter: improves the stability of lane detection.
The Kalman filter can improve lane detection performance, especially in the conditions of various illumination, worn lane lines or strong disturbance.
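A sketch of such a tracker, assuming each lane line is summarized by slope/intercept parameters (a, b) and smoothed across frames with a constant-velocity Kalman filter; the state layout and noise magnitudes are illustrative, not taken from the paper:

```python
import numpy as np

class LaneKalman:
    """Constant-velocity Kalman filter on lane-line parameters.
    State x = [a, b, da, db]; measurement z = [a, b] from the detector."""

    def __init__(self, q=1e-3, r=1e-1):
        self.x = np.zeros(4)                                   # state
        self.P = np.eye(4)                                     # covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0                      # a += da, b += db
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0                      # observe (a, b)
        self.Q = q * np.eye(4)                                 # process noise
        self.R = r * np.eye(2)                                 # measurement noise

    def update(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct with the measured (a, b)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                      # filtered (a, b)
```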


Learning Lightweight Lane Detection CNNs by Self Attention Distillation (ICCV 2019), abstract:

Training deep models for lane detection is challenging due to the very subtle and sparse supervisory signals inherent in lane annotations. Without learning from much richer context, these models often fail in challenging scenarios, e.g., severe occlusion, ambiguous lanes, and poor lighting conditions. In this paper, we present a novel knowledge distillation approach, i.e., Self Attention Distillation (SAD), which allows a model to learn from itself and gains substantial improvement without any additional supervision or labels. Specifically, we observe that attention maps extracted from a model trained to a reasonable level would encode rich contextual information. The valuable contextual information can be used as a form of 'free' supervision for further representation learning through performing top-down and layer-wise attention distillation within the network itself. SAD can be easily incorporated in any feedforward convolutional neural networks (CNN) and does not increase the inference time. We validate SAD on three popular lane detection benchmarks (TuSimple, CULane and BDD100K) using lightweight models such as ENet, ResNet-18 and ResNet-34. The lightest model, ENet-SAD, performs comparatively or even surpasses existing algorithms. Notably, ENet-SAD has 20× fewer parameters and runs 10× faster compared to the state-of-the-art SCNN [16], while still achieving compelling performance in all benchmarks. Our code is available at https://github.com/cardwing/Codes-for-Lane-Detection.