This article walks through, in detail, how to train a YOLO model on your own data.
1. Data preparation
Collect your images and annotate them with a labeling tool.
Labeling tool download link: https://blog.youkuaiyun.com/qq_34806812/article/details/81670310
Edit the class file, select VOC format, draw a box around each target, label it, and save. An XML file with the same name as the image will then be generated under the specified save path.
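For reference, a VOC-format annotation produced this way looks roughly like the fragment below (the file name, image size, and box coordinates here are made up for illustration; the tag names are the ones the conversion script later reads):

```xml
<annotation>
    <folder>JPEGImages</folder>
    <filename>1000.jpg</filename>
    <size>
        <width>416</width>
        <height>416</height>
        <depth>3</depth>
    </size>
    <object>
        <name>Hat</name>
        <difficult>0</difficult>
        <bndbox>
            <xmin>48</xmin>
            <ymin>240</ymin>
            <xmax>195</xmax>
            <ymax>371</ymax>
        </bndbox>
    </object>
</annotation>
```

One `<object>` element is written per labeled box, so an image with several targets will contain several such elements.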
2. Training a model with Darknet
Download darknet from GitHub:
git clone https://github.com/pjreddie/darknet
cd darknet
Open the Makefile in the darknet directory and edit it:
GPU=1     # 1 to build with GPU support, 0 for CPU only
CUDNN=1   # 1 to build with cuDNN, 0 otherwise
OPENCV=0  # set to 1 if you also want camera input via OpenCV, 0 otherwise
OPENMP=0  # 1 to build with OpenMP, 0 otherwise
DEBUG=0   # 1 for a debug build, 0 otherwise
CC=gcc
NVCC=/home/user/cuda-9.0/bin/nvcc  # default is NVCC=nvcc; change to your own path
AR=ar
ARFLAGS=rcs
OPTS=-Ofast
LDFLAGS= -lm -pthread
COMMON= -Iinclude/ -Isrc/
CFLAGS=-Wall -Wno-unused-result -Wno-unknown-pragmas -Wfatal-errors -fPIC
...
ifeq ($(GPU), 1)
COMMON+= -DGPU -I/home/hebao/cuda-9.0/include/  # change to your own CUDA path
CFLAGS+= -DGPU
LDFLAGS+= -L/home/hebao/cuda-9.0/lib64 -lcuda -lcudart -lcublas -lcurand  # change to your own CUDA path
endif
After editing, open a terminal in that folder and run:
make
This builds (or rebuilds) a binary named darknet in the same directory.
We can now download a model someone else has already trained and use it for a quick test:
wget https://pjreddie.com/media/files/yolov3-tiny.weights
./darknet detect cfg/yolov3-tiny.cfg yolov3-tiny.weights data/dog.jpg
If the detection output looks correct, the build is working.
3. Training on our own data
(1) Create the following folders under the darknet folder:
---VOCdevkit
---VOC2018
---Annotations
---ImageSets
---Layout
---Main
---JPEGImages
---VOC2019
---Annotations
---ImageSets
---Layout
---Main
---JPEGImages
Here I created two folders, VOC2018 and VOC2019, with identical substructures. If you do not have much data, or your file names do not clash, a single folder is enough.
The Annotations folder holds your annotation files (.xml).
The JPEGImages folder holds the corresponding images.
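The tree above can be created in one step; a minimal sketch in Python (run from inside the darknet folder; `exist_ok=True` makes it safe to re-run):

```python
import os

# Create the VOCdevkit directory tree described above,
# with identical substructures for VOC2018 and VOC2019.
for year in ('2018', '2019'):
    for sub in ('Annotations',
                'ImageSets/Layout',
                'ImageSets/Main',
                'JPEGImages'):
        os.makedirs(os.path.join('VOCdevkit', 'VOC' + year, sub), exist_ok=True)
```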
(2) With the folders created, we now split the data into training, test, and validation sets:
"""
将所有数据按照1:1:5的比例保存在test.txt,val.txt与train.txt中
"""
import os
from os import listdir, getcwd
from os.path import join
if __name__ == '__main__':
source_folder = '../darknet/VOCdevkit/VOC2019/Annotations/' # 地址是所有xml的保存地点
dest = '../darknet/VOCdevkit/VOC2019/ImageSets/Main/train.txt' # 保存train.txt的地址
dest2 = '../darknet/VOCdevkit/VOC2019/ImageSets/Main/val.txt' # 保存val.txt的地址
dest3 = '../darknet/VOCdevkit/VOC2019/ImageSets/Main/test.txt' # 保存val.txt的地址
file_list = os.listdir(source_folder) # 赋值xml所在文件夹的文件列表
train_file = open(dest, 'a') # 打开文件
val_file = open(dest2, 'a') # 打开文件
test_file = open(dest3,'a')
All_num = len(file_list)
count=0
for file_obj in file_list: # 访问文件列表中的每一个文件
filename = file_obj[:-4]
print(filename)
count = count+1
if(count < All_num/7):
test_file.write(filename+"\n")
if(count >= All_num/7) and (count<All_num*2/7):
val_file.write(filename+"\n")
if(count>=All_num*2/7):
train_file.write(filename+"\n")
train_file.close() # 关闭文件
test_file.close()
val_file.close()
Running this script generates three files in the Main folder of each of the two top-level folders we created. Each file lists the corresponding image names, one per line, without path or extension, e.g.:
1000
999
530
2009
740
1550
2569
1759
1210
2299
2279
2140
46
660
1330
2800
1709
2180
69
(3) Training actually consumes labels in txt format, so we now need to build the training and validation file lists and convert the annotations to txt.
Find voc_label.py in the darknet/scripts/ folder and edit it as follows:
import xml.etree.ElementTree as ET
import pickle
import os
from os import listdir, getcwd
from os.path import join

# sets lists the folder suffixes created above paired with the
# three file names generated in each Main folder
sets = [('2019', 'train'), ('2019', 'val'), ('2019', 'test'),
        ('2018', 'train'), ('2018', 'val'), ('2018', 'test')]

classes = ["Hat", "We-light", "White-T", "Black-T", "Neck pillow"]  # your own classes, in labeling order

def convert(size, box):
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x, y, w, h)

def convert_annotation(year, image_id):
    in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml' % (year, image_id))
    out_file = open('VOCdevkit/VOC%s/labels/%s.txt' % (year, image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text),
             float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')

wd = getcwd()

for year, image_set in sets:
    if not os.path.exists('VOCdevkit/VOC%s/labels/' % (year)):
        os.makedirs('VOCdevkit/VOC%s/labels/' % (year))
    image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt' % (year, image_set)).read().strip().split()
    list_file = open('%s_%s.txt' % (year, image_set), 'w')
    for image_id in image_ids:
        list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n' % (wd, year, image_id))
        convert_annotation(year, image_id)
    list_file.close()

# the last two lines merge the generated lists into one training file and one test file
os.system("cat 2018_train.txt 2018_val.txt 2019_train.txt 2019_val.txt > train.txt")
os.system("cat 2018_test.txt 2019_test.txt > test.txt")
After running this script, a labels folder is generated alongside Annotations; it holds the txt label files, one per image, with each line describing one object in that image.
At the same time, the following files are generated in the same directory as voc_label.py:
2018_train.txt
2018_test.txt
2018_val.txt
2019_train.txt
2019_test.txt
2019_val.txt
train.txt
test.txt
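To see what the generated label lines contain, here is the convert() routine from voc_label.py applied to a hypothetical 416x416 image with one box (xmin=90, xmax=290, ymin=100, ymax=300 are made-up values):

```python
def convert(size, box):
    # size = (width, height); box = (xmin, xmax, ymin, ymax)
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0 - 1   # box centre (the -1 mirrors the original script)
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x * dw, y * dh, w * dw, h * dh)

# A 416x416 image with a box from (90, 100) to (290, 300):
x, y, w, h = convert((416, 416), (90.0, 290.0, 100.0, 300.0))
print(x, y, w, h)
```

All four values are fractions of the image size, which is exactly the normalized form Darknet expects in each label line after the class id.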
(4) Edit the configuration files
Edit cfg/coco.data; remember to change the paths to your own:
classes= 5
train = /home/xah/Documents/darknet/darknet/train.txt
valid = /home/xah/Documents/darknet/darknet/2018_test.txt
#valid = data/coco_val_5k.list
names = data/coco.names
backup = /home/xah/Documents/darknet/darknet/backup
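A common source of silent failures is a stale path in this file, so it can be worth checking that every referenced path exists before launching training. A small hypothetical checker (the parsing rules here are my own sketch, not part of Darknet):

```python
import os

def check_data_cfg(text):
    """Return a list of (key, path) pairs from a .data config whose path does not exist."""
    missing = []
    for line in text.splitlines():
        line = line.split('#')[0].strip()  # drop comments like '#valid = ...'
        if '=' not in line:
            continue
        key, value = [s.strip() for s in line.split('=', 1)]
        if key in ('train', 'valid', 'names', 'backup') and not os.path.exists(value):
            missing.append((key, value))
    return missing

sample = """classes= 5
train = train.txt
names = data/coco.names
backup = backup
"""
print(check_data_cfg(sample))
```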
Edit cfg/yolov3-tiny.cfg:
[net]
# Testing
#batch=1
#subdivisions=1    # comment out the two lines under Testing; uncomment the two under Training
# Training
batch=64
subdivisions=8
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.001
burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1
[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=1
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
###########
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=30    # change to (classes+5)*3
activation=linear
[yolo]
mask = 3,4,5
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=5    # change to your number of classes
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 8
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=30    # change to (classes+5)*3
activation=linear
[yolo]
mask = 0,1,2
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=5    # change to your number of classes
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
Edit cfg/coco.names, listing your class names one per line:
Hat
We-light
White-T
Black-T
Neck pillow
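The three edits above have to agree with each other: the number of lines in coco.names, the classes= value in both [yolo] sections, and filters=(classes+5)*3 in the [convolutional] layer right before each [yolo]. A quick sanity check (a sketch, with the names inlined rather than read from the file):

```python
# Class names exactly as written in cfg/coco.names, one per line
names = ["Hat", "We-light", "White-T", "Black-T", "Neck pillow"]

classes = 5    # classes= value in both [yolo] sections
filters = 30   # filters= in the [convolutional] layer just before each [yolo]

assert len(names) == classes, "coco.names must list exactly `classes` names"
assert filters == (classes + 5) * 3, "filters must equal (classes+5)*3"
print("config is consistent")
```

If any of the three disagree, Darknet typically aborts at parse time, which is the assertion failure discussed in the troubleshooting section below.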
(5) Everything is ready
Download the pretrained model:
wget https://pjreddie.com/media/files/darknet53.conv.74
Start training:
nohup ./darknet detector train cfg/coco.data cfg/yolov3-tiny.cfg darknet53.conv.74 -gpus 0 2>1 | tee backup/train_yolov3_tiny.log &
Output:
8824: 0.066208, 0.082111 avg, 0.001000 rate, 1.178157 seconds, 564736 images
Loaded: 0.000040 seconds
Region 16 Avg IOU: 0.889900, Class: 0.995108, Obj: 0.996993, No Obj: 0.003648, .5R: 1.000000, .75R: 1.000000, count: 6
Region 23 Avg IOU: 0.880145, Class: 0.999698, Obj: 0.941376, No Obj: 0.000277, .5R: 1.000000, .75R: 1.000000, count: 2
Region 16 Avg IOU: 0.802107, Class: 0.999595, Obj: 0.965631, No Obj: 0.003531, .5R: 1.000000, .75R: 1.000000, count: 7
Region 23 Avg IOU: 0.872990, Class: 0.999911, Obj: 0.991865, No Obj: 0.000337, .5R: 1.000000, .75R: 1.000000, count: 3
Region 16 Avg IOU: 0.874307, Class: 0.996150, Obj: 0.979595, No Obj: 0.003599, .5R: 1.000000, .75R: 1.000000, count: 8
Region 23 Avg IOU: 0.850211, Class: 0.994401, Obj: 0.700242, No Obj: 0.000322, .5R: 1.000000, .75R: 1.000000, count: 4
Region 16 Avg IOU: 0.825561, Class: 0.998914, Obj: 0.999543, No Obj: 0.002424, .5R: 1.000000, .75R: 0.833333, count: 6
Region 23 Avg IOU: 0.839286, Class: 0.999842, Obj: 0.982027, No Obj: 0.000325, .5R: 1.000000, .75R: 1.000000, count: 3
Region 16 Avg IOU: 0.825628, Class: 0.995648, Obj: 0.980015, No Obj: 0.002971, .5R: 1.000000, .75R: 0.714286, count: 7
4. Troubleshooting
If the command line has not refreshed for a long time after starting training,
open the file named 1 generated in the current directory (the 2>1 in the training command redirects stderr to a file literally named 1) and see what the error is.
If it is
darknet: ./src/parser.c:315: parse_yolo: Assertion `l.outputs == params.inputs' failed.
then check whether cfg/yolov3-tiny.cfg has been modified as required above.
If the error says some file ××× cannot be found, check that the paths to the training lists and the generated log file are written correctly.
A typical status line looks like
8824: 0.066208, 0.082111 avg, 0.001000 rate, 1.178157 seconds, 564736 images
8824 is the number of training iterations,
0.082111 avg is the average loss,
0.001000 rate is the current learning rate.
Once the average loss settles at around 0.02, you can stop training.
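These status lines are easy to parse if you want to watch the average loss programmatically. A small sketch (the field order is taken from the sample line above; this is not part of Darknet itself):

```python
import re

# Matches a Darknet status line of the form:
#   <iter>: <loss>, <avg> avg, <rate> rate, <sec> seconds, <images> images
PATTERN = re.compile(
    r'(\d+): ([\d.]+), ([\d.]+) avg, ([\d.]+) rate, ([\d.]+) seconds, (\d+) images')

def parse_status(line):
    """Return (iteration, avg_loss, learning_rate), or None if the line is not a status line."""
    m = PATTERN.search(line)
    if m is None:
        return None
    return int(m.group(1)), float(m.group(3)), float(m.group(4))

sample = "8824: 0.066208, 0.082111 avg, 0.001000 rate, 1.178157 seconds, 564736 images"
print(parse_status(sample))  # (8824, 0.082111, 0.001)
```

Lines such as "Loaded: ..." or the per-region IOU lines simply return None, so the function can be run over the whole log.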
The next article will cover parsing the training log and plotting the training curves!