Training YOLOv2 with your own data on Ubuntu


Train YOLOv2 with my own data

Operating system: Ubuntu (Linux).

Tools: LabelImg; how to use it is described in its README.rst.

 

Step 1. Download darknet

Following the linked page, download darknet; with a pre-trained model it is easy to detect some specific classes of objects. It is highly recommended to run the following commands one by one so that you understand each step better. After the first three commands you already have an executable, and you can find the config files for YOLO in the cfg/ subdirectory. The fourth command downloads the pre-trained YOLOv3 weight file. By changing the path of the .jpg file in the last command, the test can be run on different pictures. In addition, you can always switch to YOLOv2 or another YOLO version by changing the config and weight paths. The differences between these YOLO versions can be found here.

 

  $ git clone https://github.com/pjreddie/darknet
  $ cd darknet
  $ make
  $ wget https://pjreddie.com/media/files/yolov3.weights
  $ ./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg

 

After running the above commands in the terminal, you can see a picture like Fig 1 in the darknet folder.

Fig 1. The prediction picture

Step 2. Prepare data

After reading reference 1, reference 2, and so on, the goal is clear: 1. prepare some JPEG pictures (.jpg); 2. label your data, which can be done with some annotation tools.

Step 2.1 Prepare pictures

I used the Open Images Dataset V4 provided by Google; you can find more information about this dataset here. For our project, I downloaded all pictures under the category Human hand, a total of 859 pictures.

Step 2.2 Label

To install the labelImg tool, I ran the following commands:

 

  $ sudo apt-get install pyqt4-dev-tools   # install PyQt4
  $ sudo apt-get install python-lxml
  $ git clone https://github.com/tzutalin/labelImg.git
  $ cd labelImg
  $ make all
  $ ./labelImg.py

 

LabelImg provides two annotation formats: PascalVOC and YOLO. Because I was not familiar with darknet's workflow, I chose the PascalVOC format as the other bloggers did; however, I may use the YOLO format later so that I do not have to convert the .xml files into .txt files (a conversion sketch is given at the end of this step).

When you label the pictures, you can use hotkeys such as Ctrl+S to save your labels and W to create a new rectangle box.
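If, like me, you label in PascalVOC format first, the .xml files eventually have to be converted into the .txt label files that darknet reads (one file per image, each line holding a class id and a box normalized to [0, 1]). Below is a minimal Python sketch of such a conversion, not the exact script I used: the folder names and the single-entry class list are assumptions for this project, and darknet usually looks for each .txt file next to its image (or in a labels/ folder parallel to an images/ folder), so adjust OUT_DIR to your layout.

# voc_to_yolo.py - minimal sketch; XML_DIR, OUT_DIR and the class list are assumptions
import os
import xml.etree.ElementTree as ET

CLASSES = ["hand"]                      # only one class in this project
XML_DIR = "mytrain/annotations"         # hypothetical folder with labelImg .xml files
OUT_DIR = "mytrain/labels"              # hypothetical output folder for YOLO .txt files
os.makedirs(OUT_DIR, exist_ok=True)

for xml_name in os.listdir(XML_DIR):
    if not xml_name.endswith(".xml"):
        continue
    root = ET.parse(os.path.join(XML_DIR, xml_name)).getroot()
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in CLASSES:
            continue
        box = obj.find("bndbox")
        xmin = float(box.find("xmin").text)
        ymin = float(box.find("ymin").text)
        xmax = float(box.find("xmax").text)
        ymax = float(box.find("ymax").text)
        # YOLO format: class_id x_center y_center width height, normalized to [0, 1]
        x_c = (xmin + xmax) / 2.0 / w
        y_c = (ymin + ymax) / 2.0 / h
        bw = (xmax - xmin) / w
        bh = (ymax - ymin) / h
        lines.append("%d %.6f %.6f %.6f %.6f" % (CLASSES.index(name), x_c, y_c, bw, bh))
    out_name = os.path.splitext(xml_name)[0] + ".txt"
    with open(os.path.join(OUT_DIR, out_name), "w") as f:
        f.write("\n".join(lines) + "\n")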

Step 3. Change the config files

Two config files should be changed, cfg/voc.data and cfg/yolo-voc.cfg, and one file should be created (data/hand.names).

cfg/voc.data

Rewrite the voc.data file according to the paths of your own files. In my project I labeled only one class, hand, so I changed the classes number from 20 to 1. I saved all files related to this project under a folder named mytrain, and got my train.txt and val.txt files guided by this blog (a sketch of how these lists can be generated is shown at the end of this subsection), so I changed my voc.data to this:

 

  classes = 1
  train = /home/tec/darknet/mytrain/train.txt
  valid = /home/tec/darknet/mytrain/val.txt
  names = data/hand.names
  backup = /home/tec/darknet/backup

 

Don't forget to create the hand.names file under the data folder; in hand.names I only wrote one word: hand.
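For reference, train.txt and val.txt simply list the full path of one image per line. A minimal Python sketch of how they can be generated is below; the image folder path and the 90/10 split ratio are assumptions, not the exact procedure from the blog I followed.

# make_lists.py - minimal sketch; IMG_DIR and the split ratio are assumptions
import glob
import os
import random

IMG_DIR = "/home/tec/darknet/mytrain/images"   # hypothetical folder holding the .jpg files
OUT_DIR = "/home/tec/darknet/mytrain"

images = sorted(glob.glob(os.path.join(IMG_DIR, "*.jpg")))
random.seed(0)                                 # fixed seed so the split is reproducible
random.shuffle(images)

split = int(len(images) * 0.9)                 # 90% train, 10% validation
with open(os.path.join(OUT_DIR, "train.txt"), "w") as f:
    f.write("\n".join(images[:split]) + "\n")
with open(os.path.join(OUT_DIR, "val.txt"), "w") as f:
    f.write("\n".join(images[split:]) + "\n")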

cfg/yolo-voc.cfg

Because I only have one class, I changed the classes value in the [region] section and the filters value in the last [convolutional] layer:

 

  classes = 1
  filters = 30   # filters = (classes + coords + 1) * num = (1 + 4 + 1) * 5 = 30

Here num is the number of bounding boxes predicted per grid cell: 2 in YOLOv1, 5 in YOLOv2, and 3 per scale in YOLOv3.
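For orientation, the relevant part of cfg/yolo-voc.cfg looks roughly like the snippet below (the anchors line and all other layers are omitted, and the exact values in your copy of the file may differ). The filters value to change is the one in the [convolutional] layer directly above [region]:

  [convolutional]
  size=1
  stride=1
  pad=1
  # changed from the 20-class value (125) to 30 for one class
  filters=30
  activation=linear

  [region]
  # anchors line kept as in the original file (omitted here)
  classes=1
  coords=4
  num=5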

 

Step 4. Download the pre-trained weights

You can download the pre-trained convolutional weights (darknet19_448.conv.23) here; the password is "ynhg". In my project I put the file under the darknet folder.

Step 5. Train the model

In this case, I used the following command to train; it will differ if you put voc.data, yolo-voc.cfg, or darknet19_448.conv.23 in another path:

 

  $ ./darknet detector train cfg/voc.data cfg/yolo-voc.cfg darknet19_448.conv.23

Once training has finished, you will have your own fully functional detector.
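Darknet periodically writes checkpoint files into the backup directory configured in voc.data (here /home/tec/darknet/backup); the checkpoint names follow the cfg file name, e.g. yolo-voc.backup and numbered .weights files. If training is interrupted, you should be able to continue from the most recent checkpoint instead of starting over, for example:

  $ ./darknet detector train cfg/voc.data cfg/yolo-voc.cfg backup/yolo-voc.backup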

Step 6. Test

Test your model with the following command:

 

  $ ./darknet detector test cfg/voc.data cfg/yolo-voc.cfg backup/yolo-voc_final.weights mytrain/beauty.jpg
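By default darknet only draws detections above a default confidence threshold; you can adjust it with the -thresh flag, which is handy when a freshly trained single-class model still produces low-confidence boxes, for example:

  $ ./darknet detector test cfg/voc.data cfg/yolo-voc.cfg backup/yolo-voc_final.weights mytrain/beauty.jpg -thresh 0.1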

 

References

1. https://blog.youkuaiyun.com/ch_liu23/article/details/53558549
2. https://blog.youkuaiyun.com/qq_27840681/article/details/63682694
3. https://blog.youkuaiyun.com/qq_34484472/article/details/73135354
4. https://blog.youkuaiyun.com/jozeeh/article/details/79087311
5. https://blog.youkuaiyun.com/qq_35608277/article/details/79468896


If you have any questions about this blog, you are more than welcome to raise them in the comments section.
