Hello everyone. Traditional fire detection systems mostly rely on sensitive electronic sensors, but these are easily disturbed by the environment and prone to false alarms, and in large spaces they are costly to install and tedious to maintain. In contrast, computer-vision-based fire detection offers fast recognition and response, works in changing environments and wide open spaces, requires no additional hardware, and provides intuitive, comprehensive fire information. A YOLOv5 model can detect flames quickly and accurately and raise an alarm at the very start of a fire, even when the first flames appear, buying valuable time for firefighting and greatly reducing losses. The system can continuously monitor a specific area in real time, day or night, and notice a fire as soon as it breaks out, offering better timeliness and reliability than manual patrols or sensor-based detection. This article introduces a flame detection system based on the YOLOv5 algorithm, covering preparation, model training, and testing.
Download: YOLOv5-based flame recognition system
1. Preparation
Set up the environment on a PC by installing the dependencies listed in requirements.txt as usual. Depending on your situation, you can crawl images yourself or download a relevant dataset from the internet; most publicly available datasets are unannotated. If the downloaded dataset has no annotations, label it with labelImg, saving the annotations in Pascal VOC format (xml label files), and then use a script to convert the labels to .txt files, adding the class index and normalizing the coordinates. The conversion script is as follows:
import os
import xml.etree.ElementTree as ET
from decimal import Decimal

dirpath = '/home/jiu/data_change/label_0'  # directory containing the original xml files
newdir = '/home/jiu/data_change/labels'    # output directory for the converted txt labels

if not os.path.exists(newdir):
    os.makedirs(newdir)

for fp in os.listdir(dirpath):
    root = ET.parse(os.path.join(dirpath, fp)).getroot()
    xmin, ymin, xmax, ymax = 0, 0, 0, 0
    sz = root.find('size')
    width = float(sz[0].text)
    height = float(sz[1].text)
    filename = root.find('filename').text
    print(fp)
    with open(os.path.join(newdir, fp.split('.')[0] + '.txt'), 'a+') as f:
        for child in root.findall('object'):  # iterate over all boxes in the image
            sub = child.find('bndbox')        # read the annotated box coordinates
            sub_label = child.find('name')
            xmin = float(sub[0].text)
            ymin = float(sub[1].text)
            xmax = float(sub[2].text)
            ymax = float(sub[3].text)
            try:  # convert to the YOLO label format; values must be normalized to the (0-1) range
                x_center = Decimal(str(round(float((xmin + xmax) / (2 * width)), 6))).quantize(Decimal('0.000000'))
                y_center = Decimal(str(round(float((ymin + ymax) / (2 * height)), 6))).quantize(Decimal('0.000000'))
                w = Decimal(str(round(float((xmax - xmin) / width), 6))).quantize(Decimal('0.000000'))
                h = Decimal(str(round(float((ymax - ymin) / height), 6))).quantize(Decimal('0.000000'))
                print(str(x_center) + ' ' + str(y_center) + ' ' + str(w) + ' ' + str(h))
                # keep only the classes we need
                if sub_label.text == 'fire':
                    f.write(' '.join([str(0), str(x_center), str(y_center), str(w), str(h) + '\n']))
            except ZeroDivisionError:
                print(filename, 'has an invalid width')
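Each line of a converted .txt label follows the YOLO format class x_center y_center width height, with all coordinates normalized by the image width and height. A single 'fire' box therefore looks like this (the numbers below are illustrative):

0 0.512345 0.433210 0.220000 0.180000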
The flame dataset used here contains 2059 images with flames in total, split into a training set of 1442 images and a test set of 617 images; the labels are split the same way, so each image has a matching label file.
For each flame image a corresponding txt label is generated; images and labels correspond one to one and share the same base name.
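The split can be automated with a short script. The following is a minimal sketch (the source and output paths and the roughly 7:3 ratio are assumptions; adjust them to your own directory layout):

import os
import random
import shutil

# Minimal sketch of a train/val split (assumed paths and a 7:3 ratio; adjust as needed)
img_dir = 'D:/a/fire_yolo_format/all/images'
lbl_dir = 'D:/a/fire_yolo_format/all/labels'
out_root = 'D:/a/fire_yolo_format'

images = [f for f in os.listdir(img_dir) if f.lower().endswith(('.jpg', '.png'))]
random.seed(0)
random.shuffle(images)
n_train = int(len(images) * 0.7)

for split, names in [('train', images[:n_train]), ('val', images[n_train:])]:
    os.makedirs(os.path.join(out_root, 'images', split), exist_ok=True)
    os.makedirs(os.path.join(out_root, 'labels', split), exist_ok=True)
    for name in names:
        base = os.path.splitext(name)[0]
        shutil.copy(os.path.join(img_dir, name), os.path.join(out_root, 'images', split, name))
        lbl = os.path.join(lbl_dir, base + '.txt')
        if os.path.exists(lbl):  # copy the matching label file
            shutil.copy(lbl, os.path.join(out_root, 'labels', split, base + '.txt'))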
2. Project Implementation
With the rise of deep learning, strong object detection algorithms keep emerging, and the YOLO family has been widely adopted because it is simple to implement, fast, and accurate. The flame detection system here is built on YOLOv5, which can be regarded as a one-stage object detector. One-stage detectors have a simpler architecture than two-stage detectors: instead of generating candidate regions, they extract features with a convolutional neural network and directly output the object class, confidence, and box coordinates, achieving end-to-end detection. One-stage detectors can be further divided into anchor-based and anchor-free methods.
SSD, YOLO, and RetinaNet are anchor-based one-stage detectors: they are fast to run, but their accuracy is comparatively limited. Anchor-based methods classify and regress objects directly from dense anchor boxes, which effectively improves the network's recall but produces many redundant boxes. Anchor-free detectors abandon anchor boxes and instead detect objects via keypoints; models such as CornerNet and CenterNet have achieved impressive results with this approach.
The image data were collected from the internet with a Python crawler using keywords; the crawled data cover flames in indoor scenes, burning office buildings and houses, forest fires, vehicle fires, and other scenarios. After filtering out low-quality images, the remaining images were organized into a Pascal VOC-format experimental dataset. To configure the dataset, create a new .yaml file and add the following (adjust the file paths to your own setup):
# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: D:\a\fire_yolo_format\images\train
val: D:\a\fire_yolo_format\images\val
# number of classes
nc: 2
# class names
names: ['fire', 'nofire']
Find the main entry point, which holds the model's main parameters: in the parse_opt() function of train.py, modify the weights, cfg, data, epochs, batch_size, imgsz, device, workers, and related parameters as needed. It looks like this:
def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default=ROOT / 'pretrained/yolov5s.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default=ROOT / 'models/yolov5s.yaml', help='model.yaml path')
    parser.add_argument('--data', type=str, default=ROOT / 'data/data.yaml', help='dataset.yaml path')
    parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch.yaml', help='hyperparameters path')
    parser.add_argument('--epochs', type=int, default=300)
    parser.add_argument('--batch-size', type=int, default=4, help='total batch size for all GPUs, -1 for autobatch')
    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--noval', action='store_true', help='only validate final epoch')
    parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
    parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    # parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    parser.add_argument('--multi-scale', default=True, help='vary img-size +/- 50%%')
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    parser.add_argument('--workers', type=int, default=0, help='max dataloader workers (per RANK in DDP mode)')
    parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name')
    parser.add_argument('--name', default='exp', help='save to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--quad', action='store_true', help='quad dataloader')
    parser.add_argument('--linear-lr', action='store_true', help='linear LR')
    parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
    parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')
    parser.add_argument('--freeze', type=int, default=0, help='Number of layers to freeze. backbone=10, all=24')
    parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
    # Weights & Biases arguments
    parser.add_argument('--entity', default=None, help='W&B: Entity')
    parser.add_argument('--upload_dataset', action='store_true', help='W&B: Upload dataset as artifact table')
    parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval')
    parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use')
    opt = parser.parse_known_args()[0] if known else parser.parse_args()
    return opt
'--weights': the YOLOv5 pretrained weight file; yolov5s.pt is used here. If you use a different pretrained weight file, update val.py accordingly as well.
'--data': the dataset configuration file, i.e. the .yaml file defined above.
'--imgsz': the input image size.
In the parse_opt() function of detect.py, modify parameters such as weights, source, imgsz, conf_thres, and device as needed. It looks like this:
def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path(s)')
    parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob, 0 for webcam')
    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
    parser.add_argument('--conf-thres', type=float, default=0.5, help='confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
    parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--view-img', action='store_true', help='show results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
    parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
    parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--visualize', action='store_true', help='visualize features')
    parser.add_argument('--update', action='store_true', help='update all models')
    parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name')
    parser.add_argument('--name', default='exp', help='save results to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
    parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
    parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
    parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
    parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
    opt = parser.parse_args()
    opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1  # expand
    print_args(FILE.stem, opt)
    return opt
'--weights': the trained weight file.
'--source': the path to the test image(s) or test video.
'--imgsz': the input image size.
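For example, a typical test invocation might look like the following (the weights path assumes the default training output directory; --source can point to an image, a folder, a video file, or 0 for a webcam):

python detect.py --weights runs/train/exp/weights/best.pt --source test_images/1.jpg --img 640 --conf-thres 0.5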
3. Training and Testing
If you run the project in PyCharm, you can run train.py directly; if you use a terminal, the command is:
python train.py --img 320 --batch 16 --epochs 300 --data data/data.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt --device '0'
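After training, the best weights can also be evaluated on the validation split with YOLOv5's val.py; a possible invocation, assuming the default output paths and the data.yaml defined above, is:

python val.py --weights runs/train/exp/weights/best.pt --data data/data.yaml --img 320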
Once training finishes and the best model is obtained, frames can be fed into the network for prediction. Run testPicture.py: it reads an image, preprocesses it, passes it to the model for prediction, then computes the positions of the bounding boxes and draws them on the image.
# -*- coding: UTF-8 -*-
import time
import cv2
import torch
import copy

from models.experimental import attempt_load
from utils.datasets import letterbox
from utils.general import check_img_size, non_max_suppression, scale_coords, xyxy2xywh


def load_model(weights, device):
    model = attempt_load(weights, map_location=device)  # load FP32 model
    return model


def show_results(img, xywh, conf, class_num):
    h, w, c = img.shape
    labels = ['fire']
    tl = 1 or round(0.002 * (h + w) / 2) + 1  # line/font thickness
    x1 = int(xywh[0] * w - 0.5 * xywh[2] * w)
    y1 = int(xywh[1] * h - 0.5 * xywh[3] * h)
    x2 = int(xywh[0] * w + 0.5 * xywh[2] * w)
    y2 = int(xywh[1] * h + 0.5 * xywh[3] * h)
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), thickness=tl, lineType=cv2.LINE_AA)
    tf = max(tl - 1, 1)  # font thickness
    label = str(labels[int(class_num)]) + ': ' + str(conf)[:5]
    cv2.putText(img, label, (x1, y1 - 2), 0, tl / 3, [0, 0, 255], thickness=tf, lineType=cv2.LINE_AA)
    return img


def detect_one(model, image_path, device):
    # Load model
    img_size = 320
    conf_thres = 0.3
    iou_thres = 0.2
    orgimg = cv2.imread(image_path)  # BGR
    # orgimg = image_path
    img0 = copy.deepcopy(orgimg)
    assert orgimg is not None, 'Image Not Found ' + image_path
    h0, w0 = orgimg.shape[:2]  # orig hw
    r = img_size / max(h0, w0)  # resize image to img_size
    if r != 1:  # always resize down, only resize up if training with augmentation
        interp = cv2.INTER_AREA if r < 1 else cv2.INTER_LINEAR
        img0 = cv2.resize(img0, (int(w0 * r), int(h0 * r)), interpolation=interp)
    imgsz = check_img_size(img_size, s=model.stride.max())  # check img_size
    img = letterbox(img0, new_shape=imgsz)[0]
    # Convert
    img = img[:, :, ::-1].transpose(2, 0, 1).copy()  # BGR to RGB, to 3xHxW
    # Run inference
    t0 = time.time()
    img = torch.from_numpy(img).to(device)
    img = img.float()  # uint8 to fp16/32
    img /= 255.0  # 0 - 255 to 0.0 - 1.0
    if img.ndimension() == 3:
        img = img.unsqueeze(0)
    # Inference
    pred = model(img)[0]
    # Apply NMS
    pred = non_max_suppression(pred, conf_thres, iou_thres)
    print('pred: ', pred)
    print('img.shape: ', img.shape)
    print('orgimg.shape: ', orgimg.shape)
    # Process detections
    for i, det in enumerate(pred):  # detections per image
        gn = torch.tensor(orgimg.shape)[[1, 0, 1, 0]].to(device)  # normalization gain whwh
        if len(det):
            # Rescale boxes from img_size to im0 size
            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], orgimg.shape).round()
            # Print results
            for c in det[:, -1].unique():
                n = (det[:, -1] == c).sum()  # detections per class
            for j in range(det.size()[0]):
                xywh = (xyxy2xywh(torch.tensor(det[j, :4]).view(1, 4)) / gn).view(-1).tolist()
                conf = det[j, 4].cpu().numpy()
                class_num = det[j, 5].cpu().numpy()  # class index is in column 5
                orgimg = show_results(orgimg, xywh, conf, class_num)
    # Stream results
    print(f'Done. ({time.time() - t0:.3f}s)')
    cv2.imshow('orgimg', orgimg)
    cv2.imwrite('filename.jpg', orgimg)
    if cv2.waitKey(0) == ord('q'):  # q to quit
        raise StopIteration


if __name__ == '__main__':
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    weights = '/home/jiu/project/fire_detect/runs/train/exp/weights/best.pt'
    model = load_model(weights, device)
    # using images
    image_path = '/home/jiu/project/fire_detect/test_images/1.jpg'
    detect_one(model, image_path, device)
    print('over')
The results are shown in the figure below: the flame location and the confidence value are both drawn on the image, and inference is fast. Based on this model, the detector can be wrapped in a system with a user interface that lets you choose an image, a video, or a camera and then runs the detection model.
You can also test directly with detect.py to obtain annotated result images. For a camera test, the following script reads webcam frames and runs detection on each:
# -*- coding: UTF-8 -*-
import time
import cv2
import torch
import copy

from models.experimental import attempt_load
from utils.datasets import letterbox
from utils.general import check_img_size, non_max_suppression, scale_coords, xyxy2xywh


def load_model(weights, device):
    model = attempt_load(weights, map_location=device)  # load FP32 model
    return model


def show_results(img, xywh, conf, class_num):
    h, w, c = img.shape
    labels = ['fire']
    tl = 1 or round(0.002 * (h + w) / 2) + 1  # line/font thickness
    x1 = int(xywh[0] * w - 0.5 * xywh[2] * w)
    y1 = int(xywh[1] * h - 0.5 * xywh[3] * h)
    x2 = int(xywh[0] * w + 0.5 * xywh[2] * w)
    y2 = int(xywh[1] * h + 0.5 * xywh[3] * h)
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), thickness=tl, lineType=cv2.LINE_AA)
    tf = max(tl - 1, 1)  # font thickness
    label = str(labels[int(class_num)]) + ': ' + str(conf)[:5]
    cv2.putText(img, label, (x1, y1 - 2), 0, tl / 3, [0, 0, 255], thickness=tf, lineType=cv2.LINE_AA)
    return img


def detect_one(model, image_path, device):
    # Load model
    img_size = 320
    conf_thres = 0.3
    iou_thres = 0.2
    orgimg = image_path  # here image_path is already a BGR frame (numpy array)
    img0 = copy.deepcopy(orgimg)
    assert orgimg is not None, 'Image Not Found'
    h0, w0 = orgimg.shape[:2]  # orig hw
    r = img_size / max(h0, w0)  # resize image to img_size
    if r != 1:  # always resize down, only resize up if training with augmentation
        interp = cv2.INTER_AREA if r < 1 else cv2.INTER_LINEAR
        img0 = cv2.resize(img0, (int(w0 * r), int(h0 * r)), interpolation=interp)
    imgsz = check_img_size(img_size, s=model.stride.max())  # check img_size
    img = letterbox(img0, new_shape=imgsz)[0]
    # Convert
    img = img[:, :, ::-1].transpose(2, 0, 1).copy()  # BGR to RGB, to 3xHxW
    # Run inference
    t0 = time.time()
    img = torch.from_numpy(img).to(device)
    img = img.float()  # uint8 to fp16/32
    img /= 255.0  # 0 - 255 to 0.0 - 1.0
    if img.ndimension() == 3:
        img = img.unsqueeze(0)
    # Inference
    pred = model(img)[0]
    # Apply NMS
    pred = non_max_suppression(pred, conf_thres, iou_thres)
    print('pred: ', pred)
    print('img.shape: ', img.shape)
    print('orgimg.shape: ', orgimg.shape)
    # Process detections
    for i, det in enumerate(pred):  # detections per image
        gn = torch.tensor(orgimg.shape)[[1, 0, 1, 0]].to(device)  # normalization gain whwh
        if len(det):
            # Rescale boxes from img_size to im0 size
            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], orgimg.shape).round()
            # Print results
            for c in det[:, -1].unique():
                n = (det[:, -1] == c).sum()  # detections per class
            for j in range(det.size()[0]):
                xywh = (xyxy2xywh(torch.tensor(det[j, :4]).view(1, 4)) / gn).view(-1).tolist()
                conf = det[j, 4].cpu().numpy()
                class_num = det[j, 5].cpu().numpy()  # class index is in column 5
                orgimg = show_results(orgimg, xywh, conf, class_num)
    # Stream results
    print(f'Done. ({time.time() - t0:.3f}s)')
    return orgimg
if __name__ == '__main__':
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    weights = '/home/jiu/project/fire_detect/runs/train/exp/weights/best.pt'
    model = load_model(weights, device)
    # using camera
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:  # stop if no frame could be read
            break
        frame = detect_one(model, frame, device)
        cv2.imshow("img", frame)
        if cv2.waitKey(1) == ord('q'):  # q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
    print('over')
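The interface mentioned above should also be able to handle video files. The camera loop can be adapted for this with minor changes; the following is a minimal sketch that could replace the __main__ section of the camera script above, reusing its load_model and detect_one functions (input.mp4 and result.mp4 are placeholder file names):

if __name__ == '__main__':
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    weights = '/home/jiu/project/fire_detect/runs/train/exp/weights/best.pt'
    model = load_model(weights, device)
    # read a video file instead of the camera and save the annotated result
    cap = cv2.VideoCapture('input.mp4')
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter('result.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        frame = detect_one(model, frame, device)  # draws boxes on the frame and returns it
        writer.write(frame)
        cv2.imshow("img", frame)
        if cv2.waitKey(1) == ord('q'):  # q to quit early
            break
    cap.release()
    writer.release()
    cv2.destroyAllWindows()
    print('over')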