I previously wrote about setting up YOLO on Windows, but that post was rather messy, so here is a general-purpose YOLOv5 recognition walkthrough.
The environment is Ubuntu 18; the steps are essentially the same everywhere, Windows was just what I was more familiar with.
References:
http://t.csdn.cn/XpQBZ
http://t.csdn.cn/nrYxD
YOLOv5-based recognition
First, you still need to install PyCharm and Anaconda.
I've covered this in an earlier post (link).
Once both are installed, you're set.
Also note that Anaconda can be awkward on Ubuntu; I recommend not building inside the base environment, and simply leaving conda:
conda deactivate
To get back in: conda activate
I also came across a way to stop conda from activating by default. Just run in a terminal:
conda config --set auto_activate_base false
That's all it takes.
Environment setup
Now let's get started properly.
Download YOLOv5 from:
https://github.com/ultralytics/yolov5
First, create a conda virtual Python environment:
conda create -n yolov5 python=3.9
conda activate yolov5
cd yolov5
pip install -r requirements.txt
This pulls in all the dependencies for you.
(It strikes me that many import errors could be fixed the same way. Very handy.)
If downloads are slow, append a mirror URL to the install command: -i https://pypi.tuna.tsinghua.edu.cn/simple
For example: pip install tensorflow-gpu==2.5.1 -i https://pypi.tuna.tsinghua.edu.cn/simple
Once installation is done, it's best to check that your NVIDIA driver is properly installed:
sudo ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
nvidia-smi
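It's also worth confirming that PyTorch itself can see the GPU. A quick sanity check, run inside the yolov5 environment:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True means PyTorch can use the GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # GPU model name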
Once that's done:
Next, just follow the project's README:
Note that you first need to download a weights file; the official releases provide them.
Put the downloaded .pt file in the yolov5 directory.
Then run a detection test; as long as it detects something, you're good:
python detect.py --weights yolov5s.pt --source 0 # webcam
img.jpg # image
vid.mp4 # video
screen # screenshot
path/ # directory
list.txt # list of images
list.streams # list of streams
'path/*.jpg' # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
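Besides detect.py, the official README also documents a PyTorch Hub route, handy for quick scripted tests. A minimal sketch (the sample image URL is the one from the official docs):

import torch

# Downloads/caches the repo and the yolov5s weights on first run
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

results = model('https://ultralytics.com/images/zidane.jpg')  # path, URL, or array
results.print()  # summary of detections
results.save()   # saves annotated images under runs/detect/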
Dataset preparation (the key part)
To get real use out of YOLOv5, you need to be able to build your own datasets.
There are two approaches: an automatic labeling script, or manual annotation. For orientation, the repo layout:
├── data: mainly holds hyperparameter config files (the yaml files that point to the train/test/val paths and define the number and names of detection classes), plus some official sample images for testing.
├── models: network-construction configs and functions, covering the project's four variants: s, m, l, x. Detection speed goes from fastest to slowest and accuracy from lowest to highest, in that order. To train on your own dataset, edit the corresponding yaml file here.
├── utils: utility functions, including the loss functions, metrics functions, plots functions, and so on.
├── weights: trained weight files.
├── detect.py: runs object detection with trained weights.
├── train.py: trains on your own dataset.
├── test.py: evaluates training results.
├── requirements.txt: the package versions to install.
Create a folder named VOCData and enter it.
Inside it, create images (for the raw jpg files) and Annotations (for the annotation output).
Ignore the other folders for now!
First, turn the video file into jpg frames under images.
Two methods are given here:
### Method 1:
import cv2
import os

# Open the video file
cap = cv2.VideoCapture('video1.mp4')

# Folder where frames are saved (created if missing)
folder = 'frames/'
os.makedirs(folder, exist_ok=True)

# Read and save every frame
count = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame_name = folder + f'frame_{count:04d}.jpg'  # output name, e.g. frame_0000.jpg
    cv2.imwrite(frame_name, frame)  # write the frame to disk
    count += 1

# Release the capture
cap.release()
### Method 2:
import cv2
import os

# This folder-clearing helper originally came from somewhere online
def del_file(filepath):
    """
    Delete every file directly under a directory.
    :param filepath: directory path
    :return:
    """
    del_list = os.listdir(filepath)
    for f in del_list:
        file_path = os.path.join(filepath, f)
        if os.path.isfile(file_path):
            os.remove(file_path)

def video_to_images(fps, path):
    cv = cv2.VideoCapture(path)
    if not cv.isOpened():
        print("\nFailed to open the video! Check that the path is correct.\n")
        exit(0)
    if not os.path.exists("images/"):
        os.mkdir("images/")  # create the output folder
    else:
        del_file('images/')  # clear the output folder
    order = 0  # image index
    h = 0
    while True:
        h = h + 1
        rval, frame = cv.read()
        if h == fps:
            h = 0
            order = order + 1
            if rval:
                cv2.imwrite('./images/' + str(order) + '.jpg', frame)  # output location and naming scheme
                cv2.waitKey(1)
            else:
                break
    cv.release()
    print('\nsave success!\n')

# Parameters
fps = 1  # keep one frame out of every `fps` frames; 1 keeps them all
path = "video1.mp4"  # video path, e.g. D:\\images\\tram_result.mp4 or D:/images/tram_result.mp4

if __name__ == '__main__':
    video_to_images(fps, path)
    # Creates an images folder next to the script and saves the frames there.
    # If an images folder already exists, it will be emptied first!
Install the labelImg annotation tool:
pip install labelimg
Type labelimg in a terminal to launch it.
Under View, enable auto-save mode, then choose the folder locations.
The default format is xml (i.e. PascalVOC); you can also switch it to yolo.
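For reference, the two formats store boxes differently: VOC xml keeps absolute pixel corners (xmin, ymin, xmax, ymax), while YOLO txt has one line per box with the class id followed by the box center and size, all normalized to [0, 1]. An illustrative YOLO line (values made up):

0 0.512 0.433 0.120 0.250   # class_id x_center y_center width height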
Then comes the fun part, annotation (there are keyboard shortcuts for drawing boxes, moving between images, and so on):
| Shortcut | Action |
| --- | --- |
| Ctrl + s | Save |
| Ctrl + d | Copy the current label and rect box |
| Space | Flag the current image as verified |
| w | Create a rect box |
| d | Next image |
| a | Previous image |
Once annotation is finished, split the dataset and update the configuration files.
In the VOCData folder, create a split_train_val.py and run it:
python split_train_val.py
This splits the data into training, validation, and test sets.
# coding:utf-8
import os
import random
import argparse

parser = argparse.ArgumentParser()
# Location of the xml files; adjust for your own data. They normally live under Annotations.
parser.add_argument('--xml_path', default='Annotations', type=str, help='input xml label path')
# Where the split lists go; point this at ImageSets/Main under your data.
parser.add_argument('--txt_path', default='ImageSets/Main', type=str, help='output txt label path')
opt = parser.parse_args()

trainval_percent = 1.0  # share of data used for train+val; no test set is split off here
train_percent = 0.9     # share of train+val used for training; adjust as you like
xmlfilepath = opt.xml_path
txtsavepath = opt.txt_path
total_xml = os.listdir(xmlfilepath)
if not os.path.exists(txtsavepath):
    os.makedirs(txtsavepath)

num = len(total_xml)
list_index = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(list_index, tv)
train = random.sample(trainval, tr)

file_trainval = open(txtsavepath + '/trainval.txt', 'w')
file_test = open(txtsavepath + '/test.txt', 'w')
file_train = open(txtsavepath + '/train.txt', 'w')
file_val = open(txtsavepath + '/val.txt', 'w')

for i in list_index:
    name = total_xml[i][:-4] + '\n'
    if i in trainval:
        file_trainval.write(name)
        if i in train:
            file_train.write(name)
        else:
            file_val.write(name)
    else:
        file_test.write(name)

file_trainval.close()
file_train.close()
file_val.close()
file_test.close()
This generates an ImageSets/Main folder containing trainval.txt, train.txt, val.txt, and test.txt.
Since the annotations were saved in xml, the format now needs converting into one YOLO can read.
Create a file called xml_to_yolo.py.
This will generate the labels and dataSet_path folders; be sure to adjust the paths to your own. labels holds the YOLO-format annotations, while dataSet_path holds the dataset lists, chiefly the converted test.txt, train.txt, and val.txt.
# -*- coding: utf-8 -*-
import xml.etree.ElementTree as ET
import os
from os import getcwd

sets = ['train', 'val', 'test']
classes = ["cone"]  # change to your own classes
abs_path = os.getcwd()
print(abs_path)

def convert(size, box):
    # Convert VOC pixel coordinates (xmin, xmax, ymin, ymax) to
    # normalized YOLO (x_center, y_center, width, height)
    dw = 1. / (size[0])
    dh = 1. / (size[1])
    x = (box[0] + box[1]) / 2.0 - 1
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return x, y, w, h

def convert_annotation(image_id):
    in_file = open('D:/yolov5/VOCData/Annotations/%s.xml' % (image_id), encoding='UTF-8')
    out_file = open('D:/yolov5/VOCData/labels/%s.txt' % (image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        # difficult = obj.find('Difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
             float(xmlbox.find('ymax').text))
        b1, b2, b3, b4 = b
        # Clamp out-of-bounds annotations
        if b2 > w:
            b2 = w
        if b4 > h:
            b4 = h
        b = (b1, b2, b3, b4)
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')

wd = getcwd()
for image_set in sets:
    if not os.path.exists('D:/yolov5/VOCData/labels/'):
        os.makedirs('D:/yolov5/VOCData/labels/')
    image_ids = open('D:/yolov5/VOCData/ImageSets/Main/%s.txt' % (image_set)).read().strip().split()
    if not os.path.exists('D:/yolov5/VOCData/dataSet_path/'):
        os.makedirs('D:/yolov5/VOCData/dataSet_path/')
    # This path does not need changing; it is relative
    list_file = open('dataSet_path/%s.txt' % image_set, 'w')
    for image_id in image_ids:
        list_file.write('D:/yolov5/VOCData/images/%s.jpg\n' % image_id)
        convert_annotation(image_id)
    list_file.close()
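To catch conversion mistakes early, here is a small sanity-check sketch (assuming the same labels path as above; adjust to your own). It flags any label line that doesn't have five fields with coordinates in [0, 1]:

import glob

# Scan every converted YOLO label file and flag suspicious lines
for txt in glob.glob('D:/yolov5/VOCData/labels/*.txt'):
    with open(txt) as f:
        for n, line in enumerate(f, 1):
            parts = line.split()
            try:
                ok = len(parts) == 5 and all(0.0 <= float(v) <= 1.0 for v in parts[1:])
            except ValueError:
                ok = False
            if not ok:
                print(f'{txt}:{n}: suspicious label -> {line.strip()}')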
Back in the yolov5 folder,
create a myvoc.yaml under data.
It needs to contain:
train: D:/Yolov5/yolov5/VOCData/dataSet_path/train.txt
val: D:/Yolov5/yolov5/VOCData/dataSet_path/val.txt
# number of classes
nc: 2
# class names
names: ["light", "post"]
(The values above mirror what I used for my own dataset, for reference.)
At this point, the dataset side is completely done.
Next, edit the model configuration file:
Taking yolov5s.yaml as the example, open the file
and simply set nc to your own number of classes.
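For orientation, the top of yolov5s.yaml looks roughly like this (values quoted from memory of the v5s config; only nc needs editing, leave the multipliers alone):

nc: 2  # number of classes, set to match your dataset
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple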
Start training!
The training command:
python train.py --weights yolov5s.pt --cfg models/yolov5s.yaml --data data/myvoc.yaml --epoch 200 --batch-size 8 --img 640 --device cpu
For a GPU, use --device 0 instead.
When training finishes, the best weights are saved under runs/train/exp/weights/best.pt, which you can then pass to detect.py:
python detect.py --weights runs/train/exp/weights/best.pt --source ../source/test.png
python detect.py --source 0                               # webcam (built-in camera)
                          file.jpg                        # image
                          file.mp4                        # video
                          path/                           # directory
                          path/*.jpg                      # glob
                          'https://youtu.be/NUsoVlDFqZg'  # YouTube
                          'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
And that's detection working. The ../ in the path above refers to the parent of the current directory.
Note that during or after training you can visualize progress with TensorBoard:
from the yolov5 folder, run
tensorboard --logdir=runs
then open the address it prints (typically http://localhost:6006) in a browser. I haven't tried this myself.
Problem 1:
YOLOv5s summary: 214 layers, 7022326 parameters, 7022326 gradients, 15.9 GFLOPs
Transferred 342/349 items from yolov5s.pt
AMP: checks failed ❌, disabling Automatic Mixed Precision. See https://github.com/ultralytics/yolov5/issues/7908
optimizer: SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.0005), 60 bias
Traceback (most recent call last):
File "train.py", line 633, in <module>
main(opt)
File "train.py", line 527, in main
train(opt.hyp, opt, device, callbacks)
File "train.py", line 166, in train
ema = ModelEMA(model) if RANK in {-1, 0} else None
File "/home/cyun/gazebo_sim_model/yolov5-7.0/utils/torch_utils.py", line 412, in __init__
self.ema = deepcopy(de_parallel(model)).eval() # FP32 EMA
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/home/cyun/anaconda3/envs/yolo/lib/python3.8/copy.py", line 153, in deepcopy
y = copier(memo)
File "/home/cyun/.local/lib/python3.8/site-packages/torch/nn/parameter.py", line 59, in __deepcopy__
result = type(self)(self.data.clone(memory_format=torch.preserve_format), self.requires_grad)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
The root cause is a torch build that doesn't match the CUDA version. Odd, given that torch.cuda.is_available() still returned True...
Mine is CUDA 11.8, so reinstall accordingly:
pip3 install numpy --pre torch --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu118
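After reinstalling, a quick check that the torch build actually matches the system CUDA:

import torch

print(torch.__version__)         # e.g. a cu118 build
print(torch.version.cuda)        # CUDA version torch was compiled against; should say 11.8
print(torch.cuda.is_available())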
Problem 2
AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
annotator.box_label(box, label, color=color)
File "/home/cyun/gazebo_sim_model/yolov5-7.0/utils/plots.py", line 91, in box_label
w, h = self.font.getsize(label) # text width, height
AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
Results saved to runs/train/exp9
The Pillow version is too new; downgrade it:
pip install pillow==8.0.0
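Alternatively, if you'd rather not downgrade: newer Pillow releases removed ImageFont.getsize, so you can patch the failing line in utils/plots.py to use getbbox instead. A sketch of the replacement (line number as in the traceback above):

# In utils/plots.py, inside box_label(), replace
#     w, h = self.font.getsize(label)  # text width, height
# with the getbbox-based equivalent:
left, top, right, bottom = self.font.getbbox(label)
w, h = right - left, bottom - top  # text width, height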
Latest progress: TensorRT deployment now works, along with detection in a fully simulated environment.
All good. Next up, I'll think about how to build a stereo distance-measurement pipeline on top of YOLOv5......
I forgot to document the C++ TensorRT part, and the source code is hard to track down now; I'll find time to fill that in later...