1. Environment
Language: Python 3.8
Database: MySQL 5.7
Technologies: YOLOv8 + Python + PyQt5
Development tools: PyCharm or IDEA
2. Project Overview
This vehicle and pedestrian multi-object detection and tracking system combines the YOLOv8 object detector with the ByteTrack multi-object tracking algorithm, and can accurately detect and track pedestrians and vehicles in real-time video. Such a system matters for improving traffic safety, raising the efficiency of urban surveillance, and strengthening public security management. Based on the YOLOv8 deep learning framework, a vehicle and pedestrian detection model was trained on 5,607 images, reaching an accuracy (mAP@0.5) of 94%; it was then combined with the ByteTrack multi-object tracking algorithm to track the detected targets. On top of this, a vehicle and pedestrian multi-object detection and tracking system with a UI was built, which can detect and track vehicles and pedestrians in real-time scenes and makes the functionality convenient to demonstrate. The system is developed with Python and PyQt5, supports multi-object detection and tracking on both video files and camera input, can optionally track only a specified target, and can save the tracking result as a video. The complete Python code and a usage tutorial are provided for interested readers; see the end of this article for how to obtain the full code and resource files.
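To make the overall pipeline concrete, here is a minimal sketch of the detect-and-track loop using the ultralytics and supervision packages. The weight path and video source are placeholders, the annotator calls follow recent supervision releases, and this is an illustrative sketch rather than the project's actual PyQt5 UI code.

# Minimal detect-and-track loop (illustrative; paths are placeholders)
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("models/best.pt")        # trained vehicle/pedestrian weights
tracker = sv.ByteTrack()              # ByteTrack with default thresholds
annotator = sv.BoxAnnotator()

cap = cv2.VideoCapture("test.mp4")    # or 0 for a live camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame)[0]                                  # YOLOv8 detection
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update_with_detections(detections)  # assign track IDs
    frame = annotator.annotate(scene=frame, detections=detections)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                    # press q to quit
        break
cap.release()
cv2.destroyAllWindows()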
Real-time tracking helps the relevant authorities respond quickly to traffic and safety incidents, lowers the risk of accidents, and provides data support for urban traffic planning and management.
The main application scenarios of the vehicle and pedestrian multi-object detection and tracking system include:
Traffic monitoring: real-time monitoring of urban traffic flow and pedestrian crossings, analyzing congestion and optimizing traffic signal control.
Accident analysis and response: providing accurate records when a traffic accident occurs, supporting cause analysis and rapid response.
Security surveillance: monitoring public places and the surroundings of critical facilities, detecting suspicious behavior and helping prevent crime.
Autonomous driving assistance: integrated into autonomous driving systems to help vehicles better understand their surroundings and avoid collisions with pedestrians and other vehicles.
Urban planning: long-term collection and analysis of pedestrian and vehicle flow patterns to support decisions on urban planning and infrastructure construction.
Retail and business analytics: measuring foot and vehicle traffic in commercial districts to inform the layout of retail and business activities.
In summary, vehicle and pedestrian multi-object detection and tracking can improve urban management and residents' quality of life on multiple levels. The system provides strong support for traffic and public safety and is an indispensable part of smart-city construction and intelligent transportation systems. Through deep analysis of real-time video, it can not only prevent and reduce traffic accidents but also deliver data-driven insights for the sustainable development of future cities.
3. System Showcase
Various images of vehicles and pedestrians were collected from the web, and the LabelMe annotation tool was used to label the bounding box and class of every target in each image. The dataset contains 5,607 images in total, of which 4,485 are used for training and 1,122 for validation. Some of the images and annotations are shown in the figure below.
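YOLOv8 expects labels as normalized "class cx cy w h" text files, so the LabelMe JSON annotations need to be converted before training. Below is a minimal sketch of such a conversion for rectangle annotations; the class list and file layout are illustrative assumptions, not the project's actual script.

# Hedged sketch: convert one LabelMe rectangle annotation to YOLO format.
import json
from pathlib import Path

CLASSES = ["car", "person"]  # illustrative class names, not the project's list

def labelme_to_yolo(json_path: str, out_dir: str) -> None:
    data = json.loads(Path(json_path).read_text(encoding="utf-8"))
    w, h = data["imageWidth"], data["imageHeight"]
    lines = []
    for shape in data["shapes"]:
        (x1, y1), (x2, y2) = shape["points"][:2]   # two corners of the box
        x1, x2 = sorted((x1, x2))
        y1, y2 = sorted((y1, y2))
        cx, cy = (x1 + x2) / 2 / w, (y1 + y2) / 2 / h   # normalized center
        bw, bh = (x2 - x1) / w, (y2 - y1) / h           # normalized size
        cls = CLASSES.index(shape["label"])
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    out = Path(out_dir) / (Path(json_path).stem + ".txt")
    out.write_text("\n".join(lines), encoding="utf-8")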
The role of each loss function:
Localization loss (box_loss): the error between the predicted box and the ground-truth box (GIoU); the smaller it is, the more accurate the localization.
Classification loss (cls_loss): measures whether each anchor is assigned the correct ground-truth class; the smaller it is, the more accurate the classification.
Distribution focal loss (dfl_loss): DFL Loss regresses the distance between the predicted box and the target box. When computing the loss, the target boxes are scaled to the feature-map scale (divided by the corresponding stride); a CIoU loss is computed against the predicted boxes, and the DFL is computed on the distances from the predicted anchor centers to the box edges. As part of the YOLOv8 training pipeline, this term helps adjust the predicted box positions more precisely and improves detection accuracy.
The training results are as follows:
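These loss curves come from a standard ultralytics training run. For context, a minimal sketch of the training call; the dataset YAML name, base weights, and hyperparameters are assumptions, not the project's actual settings.

# Hedged sketch: train YOLOv8 on the vehicle/pedestrian dataset.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # start from pretrained weights (assumed)
model.train(
    data="dataset.yaml",        # image paths + class names (assumed name)
    epochs=100,                 # illustrative hyperparameters
    imgsz=640,
)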
The precision-recall (PR) curve is commonly used to show the relationship between precision and recall; the PR curve for this model is shown below. AP is the area enclosed by the curve plotted with precision and recall as the two axes; the "m" in mAP means this area is averaged over all classes, and the number after "@" is the IoU threshold used to decide whether a sample counts as positive or negative. mAP@0.5 therefore denotes the mean AP computed at an IoU threshold of 0.5. The model's mAP@0.5 averaged over the two classes is 0.94, which is a very good result.
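The reported mAP can be recomputed from the trained weights with the ultralytics validation API; a minimal sketch, assuming the weights path used earlier in this article.

# Hedged sketch: re-run validation and print the mAP metrics.
from ultralytics import YOLO

model = YOLO("models/best.pt")
metrics = model.val()           # evaluates on the dataset's validation split
print(metrics.box.map50)        # mAP@0.5
print(metrics.box.map)          # mAP@0.5:0.95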
Runtime demo
4. Core Code
from ultralytics import YOLO
import cv2

# Path of the trained model weights to load
path = 'models/best.pt'
# Path of the image to detect
img_path = "TestFiles/car_data_1_4648.jpg"
# Load the trained model
# conf 0.25: object confidence threshold for detection
# iou 0.7: intersection over union (IoU) threshold for NMS
model = YOLO(path, task='detect')
# Thresholds are passed at inference time, e.g.:
# results = model(img_path, conf=0.5)
# Detect the image
results = model(img_path)
res = results[0].plot()  # draw the detection results onto the image
cv2.imshow("YOLOv8 Detection", res)
cv2.waitKey(0)
cv2.destroyAllWindows()
from typing import List

import numpy as np

# STrack, KalmanFilter, Detections, detections2boxes, matching, TrackState,
# joint_tracks, sub_tracks and remove_duplicate_tracks come from the
# accompanying tracker module (this class follows the `supervision`
# library's ByteTrack implementation).
class ByteTrack:
"""
Initialize the ByteTrack object.
Parameters:
track_thresh (float, optional): Detection confidence threshold
for track activation.
track_buffer (int, optional): Number of frames to buffer when a track is lost.
match_thresh (float, optional): Threshold for matching tracks with detections.
frame_rate (int, optional): The frame rate of the video.
"""
def __init__(
self,
track_thresh: float = 0.25,
track_buffer: int = 30,
match_thresh: float = 0.8,
frame_rate: int = 30,
):
self.track_thresh = track_thresh
self.match_thresh = match_thresh
self.frame_id = 0
self.det_thresh = self.track_thresh + 0.1
self.max_time_lost = int(frame_rate / 30.0 * track_buffer)
self.kalman_filter = KalmanFilter()
self.tracked_tracks: List[STrack] = []
self.lost_tracks: List[STrack] = []
self.removed_tracks: List[STrack] = []
def update_with_detections(self, detections: Detections) -> Detections:
"""
Updates the tracker with the provided detections and
returns the updated detection results.
Parameters:
detections: The new detections to update with.
Returns:
Detection: The updated detection results that now include tracking IDs.
"""
tracks = self.update_with_tensors(
tensors=detections2boxes(detections=detections)
)
detections = Detections.empty()
if len(tracks) > 0:
detections.xyxy = np.array(
[track.tlbr for track in tracks], dtype=np.float32
)
detections.class_id = np.array(
[int(t.class_ids) for t in tracks], dtype=int
)
detections.tracker_id = np.array(
[int(t.track_id) for t in tracks], dtype=int
)
detections.confidence = np.array(
[t.score for t in tracks], dtype=np.float32
)
else:
detections.tracker_id = np.array([], dtype=int)
return detections
def update_with_tensors(self, tensors: np.ndarray) -> List[STrack]:
"""
Updates the tracker with the provided tensors and returns the updated tracks.
Parameters:
tensors: The new tensors to update with.
Returns:
List[STrack]: Updated tracks.
"""
self.frame_id += 1
        activated_stracks = []
refind_stracks = []
lost_stracks = []
removed_stracks = []
class_ids = tensors[:, 5]
scores = tensors[:, 4]
bboxes = tensors[:, :4]
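        # ByteTrack's core idea: split detections by confidence. High-score
        # boxes (score > track_thresh) drive the first association below;
        # low-score boxes (0.1 < score < track_thresh) are kept for a second
        # association that can recover temporarily occluded targets.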
remain_inds = scores > self.track_thresh
inds_low = scores > 0.1
inds_high = scores < self.track_thresh
inds_second = np.logical_and(inds_low, inds_high)
dets_second = bboxes[inds_second]
dets = bboxes[remain_inds]
scores_keep = scores[remain_inds]
scores_second = scores[inds_second]
class_ids_keep = class_ids[remain_inds]
class_ids_second = class_ids[inds_second]
if len(dets) > 0:
"""Detections"""
detections = [
STrack(STrack.tlbr_to_tlwh(tlbr), s, c)
for (tlbr, s, c) in zip(dets, scores_keep, class_ids_keep)
]
else:
detections = []
""" Add newly detected tracklets to tracked_stracks"""
unconfirmed = []
tracked_stracks = [] # type: list[STrack]
for track in self.tracked_tracks:
if not track.is_activated:
unconfirmed.append(track)
else:
tracked_stracks.append(track)
""" Step 2: First association, with high score detection boxes"""
strack_pool = joint_tracks(tracked_stracks, self.lost_tracks)
# Predict the current location with KF
STrack.multi_predict(strack_pool)
dists = matching.iou_distance(strack_pool, detections)
dists = matching.fuse_score(dists, detections)
matches, u_track, u_detection = matching.linear_assignment(
dists, thresh=self.match_thresh
)
for itracked, idet in matches:
track = strack_pool[itracked]
det = detections[idet]
if track.state == TrackState.Tracked:
track.update(detections[idet], self.frame_id)
                activated_stracks.append(track)
else:
track.re_activate(det, self.frame_id, new_id=False)
refind_stracks.append(track)
""" Step 3: Second association, with low score detection boxes"""
        # Associate the remaining tracks with the low-score detections
if len(dets_second) > 0:
"""Detections"""
detections_second = [
STrack(STrack.tlbr_to_tlwh(tlbr), s, c)
for (tlbr, s, c) in zip(dets_second, scores_second, class_ids_second)
]
else:
detections_second = []
r_tracked_stracks = [
strack_pool[i]
for i in u_track
if strack_pool[i].state == TrackState.Tracked
]
dists = matching.iou_distance(r_tracked_stracks, detections_second)
matches, u_track, u_detection_second = matching.linear_assignment(
dists, thresh=0.5
)
for itracked, idet in matches:
track = r_tracked_stracks[itracked]
det = detections_second[idet]
if track.state == TrackState.Tracked:
track.update(det, self.frame_id)
                activated_stracks.append(track)
else:
track.re_activate(det, self.frame_id, new_id=False)
refind_stracks.append(track)
for it in u_track:
track = r_tracked_stracks[it]
if not track.state == TrackState.Lost:
track.mark_lost()
lost_stracks.append(track)
"""Deal with unconfirmed tracks, usually tracks with only one beginning frame"""
detections = [detections[i] for i in u_detection]
dists = matching.iou_distance(unconfirmed, detections)
dists = matching.fuse_score(dists, detections)
matches, u_unconfirmed, u_detection = matching.linear_assignment(
dists, thresh=0.7
)
for itracked, idet in matches:
unconfirmed[itracked].update(detections[idet], self.frame_id)
            activated_stracks.append(unconfirmed[itracked])
for it in u_unconfirmed:
track = unconfirmed[it]
track.mark_removed()
removed_stracks.append(track)
""" Step 4: Init new stracks"""
for inew in u_detection:
track = detections[inew]
if track.score < self.det_thresh:
continue
track.activate(self.kalman_filter, self.frame_id)
            activated_stracks.append(track)
""" Step 5: Update state"""
for track in self.lost_tracks:
if self.frame_id - track.end_frame > self.max_time_lost:
track.mark_removed()
removed_stracks.append(track)
self.tracked_tracks = [
t for t in self.tracked_tracks if t.state == TrackState.Tracked
]
        self.tracked_tracks = joint_tracks(self.tracked_tracks, activated_stracks)
self.tracked_tracks = joint_tracks(self.tracked_tracks, refind_stracks)
self.lost_tracks = sub_tracks(self.lost_tracks, self.tracked_tracks)
self.lost_tracks.extend(lost_stracks)
self.lost_tracks = sub_tracks(self.lost_tracks, self.removed_tracks)
self.removed_tracks.extend(removed_stracks)
self.tracked_tracks, self.lost_tracks = remove_duplicate_tracks(
self.tracked_tracks, self.lost_tracks
)
output_stracks = [track for track in self.tracked_tracks if track.is_activated]
return output_stracks
# Create the tracker
byte_tracker = sv.ByteTrack(track_thresh=0.25, track_buffer=30, match_thresh=0.8, frame_rate=30)