Automating YOLOv6 Model Deployment: CI/CD Pipeline Design and Implementation

Introduction: Four Pain Points in Productionizing Object Detection

Are you still wrestling with the following problems when deploying YOLOv6 models?

  • Version fragmentation: dependency conflicts between training and deployment environments, and chaotic model version management
  • Tedious manual steps: converting a PyTorch model to ONNX/TensorRT/NCNN formats requires multiple rounds of human intervention
  • Missing quality gates: no automated accuracy or performance testing before deployment, leading to frequent production incidents
  • Cross-platform friction: cloud TensorRT, edge NCNN, and mobile Android each demand a different deployment workflow

This article explains, step by step, how to build a complete CI/CD pipeline for YOLOv6 model deployment that automates everything from a finished training run to multi-platform rollout. By the end, you will have:

  • A directly reusable YOLOv6 deployment automation solution
  • Automated multi-format model conversion (ONNX/TensorRT/NCNN)
  • Automated validation strategies for model accuracy and performance
  • A complete CI/CD configuration example built on GitHub Actions

Architecture: Overall Design of the YOLOv6 Deployment Pipeline

Pipeline Architecture Overview

(Pipeline architecture diagram, rendered with Mermaid in the original article; not reproduced here.)

Core Technical Components

Component            | Function                                         | Technology
Source control       | Version control for model code and configuration | Git
CI/CD engine         | Orchestration and execution of automated flows   | GitHub Actions
Model conversion     | Automated multi-format model conversion          | ONNX-TensorRT/NCNN toolchain
Quality validation   | Accuracy evaluation and performance testing      | COCO evaluation metrics / benchmark scripts
Deployment targets   | Multi-platform deployment hosts                  | Kubernetes / edge devices / Android
Monitoring & alerts  | Pipeline status monitoring                       | Prometheus / Grafana

Environment Setup: Building a Consistent Automation Environment

Base Environment Configuration

Create a Dockerfile that defines the standardized execution environment:

FROM nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04

# Install base dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    git \
    wget \
    curl \
    ca-certificates \
    libopencv-dev \
    && rm -rf /var/lib/apt/lists/*

# Install the Python environment
RUN curl -fsSL https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -o miniconda.sh && \
    bash miniconda.sh -b -p /opt/conda && \
    rm miniconda.sh && \
    /opt/conda/bin/conda create -n yolov6 python=3.8 -y && \
    /opt/conda/bin/conda clean -afy

# Set environment variables
ENV PATH=/opt/conda/envs/yolov6/bin:$PATH

# Install YOLOv6 dependencies
WORKDIR /workspace
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Install TensorRT (note: NVIDIA's "secure" download URL requires a logged-in
# developer account; for CI, pre-download the tarball and COPY it in instead)
RUN wget https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.2.3.0/tars/TensorRT-8.2.3.0.Ubuntu-20.04.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz && \
    tar -xzf TensorRT-8.2.3.0.Ubuntu-20.04.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz && \
    rm TensorRT-8.2.3.0.Ubuntu-20.04.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz && \
    mv TensorRT-8.2.3.0 /opt/tensorrt && \
    pip install /opt/tensorrt/python/tensorrt-8.2.3.0-cp38-none-linux_x86_64.whl
# Use ENV rather than ~/.bashrc, which non-interactive container processes never source
ENV LD_LIBRARY_PATH=/opt/tensorrt/lib:$LD_LIBRARY_PATH

# Build and install NCNN from source
RUN git clone https://gitcode.com/nihui/ncnn.git && \
    cd ncnn && \
    mkdir build && cd build && \
    cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=OFF .. && \
    make -j$(nproc) && \
    make install && \
    cd ../.. && rm -rf ncnn
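
With the Dockerfile in place, the image can be built and smoke-tested locally before wiring it into CI; a minimal usage sketch (the yolov6-deploy:latest tag matches what the workflow below expects):

# Build the image and confirm the container sees the GPU
docker build -t yolov6-deploy:latest .
docker run --rm --gpus all yolov6-deploy:latest \
    python -c "import torch; print('CUDA available:', torch.cuda.is_available())"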

Dependency Management Strategy

Create a requirements.txt to manage the Python dependencies:

torch>=1.8.0
torchvision>=0.9.0
onnx>=1.10.0
onnx-simplifier>=0.4.13
opencv-python>=4.5.3
pycocotools>=2.0.2
numpy>=1.19.5
scipy>=1.7.3
tqdm>=4.62.3
seaborn>=0.11.2
matplotlib>=3.5.1
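
The floating (>=) constraints keep the image easy to rebuild but allow drift between environments. A small sanity-check script run at container start catches mismatches early; a minimal sketch (the version floors mirror requirements.txt, and packaging ships alongside pip):

# check_env.py - fail fast if the runtime environment drifts from expectations
import sys

import onnx
import torch
from packaging import version

# Version floors mirror requirements.txt; tighten to exact pins if needed.
assert version.parse(torch.__version__.split("+")[0]) >= version.parse("1.8.0")
assert version.parse(onnx.__version__) >= version.parse("1.10.0")

if not torch.cuda.is_available():
    sys.exit("CUDA is not available inside the container")
print("Environment OK: torch", torch.__version__, "/ onnx", onnx.__version__)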

Automated Model Conversion: From PyTorch to Multiple Deployment Formats

ONNX Conversion

Create scripts/convert_onnx.sh to automate the ONNX export:

#!/bin/bash
set -e

# Parse arguments
WEIGHTS_PATH=$1
OUTPUT_DIR=$2
IMG_SIZE=$3
BATCH_SIZE=$4

# Create the output directory
mkdir -p ${OUTPUT_DIR}/onnx

# Export the base model
python deploy/ONNX/export_onnx.py \
    --weights ${WEIGHTS_PATH} \
    --img ${IMG_SIZE} \
    --batch ${BATCH_SIZE} \
    --simplify \
    --device 0

# Export the end-to-end model (for TensorRT)
python deploy/ONNX/export_onnx.py \
    --weights ${WEIGHTS_PATH} \
    --img ${IMG_SIZE} \
    --batch ${BATCH_SIZE} \
    --simplify \
    --end2end \
    --trt-version 8 \
    --device 0

# Move the exported files to the output directory
mv *.onnx ${OUTPUT_DIR}/onnx/

# Generate a conversion report
echo "ONNX conversion finished: $(date)" > ${OUTPUT_DIR}/onnx/convert_report.txt
echo "Model path: ${WEIGHTS_PATH}" >> ${OUTPUT_DIR}/onnx/convert_report.txt
echo "Input size: ${IMG_SIZE}" >> ${OUTPUT_DIR}/onnx/convert_report.txt
echo "Batch size: ${BATCH_SIZE}" >> ${OUTPUT_DIR}/onnx/convert_report.txt
echo "Output files:" >> ${OUTPUT_DIR}/onnx/convert_report.txt
ls ${OUTPUT_DIR}/onnx/*.onnx >> ${OUTPUT_DIR}/onnx/convert_report.txt
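
Before handing the export to downstream stages, it is worth checking the graph and running a single dummy inference. A minimal sketch using onnx plus onnxruntime (onnxruntime is assumed to be installed; the yolov6s.onnx file name matches what the CI workflow below expects):

# verify_onnx.py - structural check plus one dummy forward pass
import numpy as np
import onnx
import onnxruntime as ort

MODEL = "output/models/onnx/yolov6s.onnx"  # assumed export location

onnx.checker.check_model(onnx.load(MODEL))  # raises if the graph is malformed

sess = ort.InferenceSession(MODEL, providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
# The export above uses fixed shapes, so the declared input shape is concrete.
dummy = np.random.rand(*inp.shape).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
print("Inference OK, output shapes:", [o.shape for o in outputs])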

Building the TensorRT Engine

Create scripts/convert_tensorrt.sh to automate TensorRT engine building:

#!/bin/bash
set -e

# Parse arguments
ONNX_PATH=$1
OUTPUT_DIR=$2
PRECISION=$3

# Create the output directory
mkdir -p ${OUTPUT_DIR}/tensorrt

# Path to the trtexec tool
TRT_BIN="/opt/tensorrt/bin/trtexec"

# Select flags for the requested precision
if [ "${PRECISION}" = "fp16" ]; then
    PRECISION_FLAGS="--fp16"
elif [ "${PRECISION}" = "int8" ]; then
    # Note: trtexec's --calib flag expects a pre-generated INT8 calibration
    # cache file, not a directory of raw images
    PRECISION_FLAGS="--int8 --calib=./data/calibration.cache"
else
    PRECISION_FLAGS=""
fi

# Build the TensorRT engine
${TRT_BIN} \
    --onnx=${ONNX_PATH} \
    --saveEngine=${OUTPUT_DIR}/tensorrt/yolov6.engine \
    --workspace=8192 \
    ${PRECISION_FLAGS} \
    --verbose

# Generate a conversion report
echo "TensorRT conversion finished: $(date)" > ${OUTPUT_DIR}/tensorrt/convert_report.txt
echo "ONNX path: ${ONNX_PATH}" >> ${OUTPUT_DIR}/tensorrt/convert_report.txt
echo "Precision mode: ${PRECISION}" >> ${OUTPUT_DIR}/tensorrt/convert_report.txt
echo "Output engine: ${OUTPUT_DIR}/tensorrt/yolov6.engine" >> ${OUTPUT_DIR}/tensorrt/convert_report.txt

NCNN Conversion

Create scripts/convert_ncnn.sh to automate the NCNN conversion:

#!/bin/bash
set -e

# Parse arguments
WEIGHTS_PATH=$1
OUTPUT_DIR=$2
IMG_SIZE=$3

# Create the output directory
mkdir -p ${OUTPUT_DIR}/ncnn

# Export TorchScript
python deploy/NCNN/export_torchscript.py \
    --weights ${WEIGHTS_PATH} \
    --img ${IMG_SIZE} \
    --batch 1 \
    --device cpu

# Convert to NCNN format
TORCHSCRIPT_FILE=$(basename ${WEIGHTS_PATH%.pt}.torchscript)
mv ${TORCHSCRIPT_FILE} ${OUTPUT_DIR}/ncnn/
cd ${OUTPUT_DIR}/ncnn/

# Convert with PNNX (inputshape needs all four NCHW dimensions)
pnnx ${TORCHSCRIPT_FILE} inputshape=[1,3,${IMG_SIZE},${IMG_SIZE}]f32

# Rename the output files
PARAM_FILE=${TORCHSCRIPT_FILE%.torchscript}.ncnn.param
BIN_FILE=${TORCHSCRIPT_FILE%.torchscript}.ncnn.bin
mv ${PARAM_FILE} yolov6.ncnn.param
mv ${BIN_FILE} yolov6.ncnn.bin

# Clean up temporary files
rm ${TORCHSCRIPT_FILE}

# Generate a conversion report
cd -
echo "NCNN conversion finished: $(date)" > ${OUTPUT_DIR}/ncnn/convert_report.txt
echo "Model path: ${WEIGHTS_PATH}" >> ${OUTPUT_DIR}/ncnn/convert_report.txt
echo "Input size: ${IMG_SIZE}" >> ${OUTPUT_DIR}/ncnn/convert_report.txt
echo "Output files: yolov6.ncnn.param, yolov6.ncnn.bin" >> ${OUTPUT_DIR}/ncnn/convert_report.txt

Automated Quality Validation: Ensuring Deployed Models Are Reliable

Accuracy Validation

Create scripts/validate_accuracy.sh to automate accuracy validation:

#!/bin/bash
set -e

# Parse arguments
MODEL_PATH=$1
MODEL_TYPE=$2
DATASET_PATH=$3
OUTPUT_DIR=$4

# Create the output directory
mkdir -p ${OUTPUT_DIR}/accuracy

# Pick the validation script for the given model type
if [ "${MODEL_TYPE}" = "onnx" ]; then
    python deploy/ONNX/eval_trt.py \
        --weights ${MODEL_PATH} \
        --batch-size=1 \
        --data ${DATASET_PATH} \
        --output ${OUTPUT_DIR}/accuracy/results.json
elif [ "${MODEL_TYPE}" = "tensorrt" ]; then
    python deploy/TensorRT/eval_yolo_trt.py \
        --imgs_dir ${DATASET_PATH}/images/val \
        --labels_dir ${DATASET_PATH}/labels/val \
        --annotations ${DATASET_PATH}/annotations/instances_val.json \
        --batch 1 \
        --img_size 640 \
        --model ${MODEL_PATH} \
        --do_pr_metric \
        --is_coco \
        --output ${OUTPUT_DIR}/accuracy/results.json
elif [ "${MODEL_TYPE}" = "ncnn" ]; then
    # NCNN accuracy validation
    python deploy/NCNN/infer-ncnn-model.py \
        --eval \
        --data ${DATASET_PATH} \
        --param ${MODEL_PATH%.bin}.param \
        --bin ${MODEL_PATH} \
        --img-size 640 \
        --output ${OUTPUT_DIR}/accuracy/results.json
else
    echo "不支持的模型类型: ${MODEL_TYPE}"
    exit 1
fi

# Gate on accuracy: mAP@0.5:0.95 must be at least 95% of the baseline
BASELINE_MAP=$(jq -r '.baseline_map' ${OUTPUT_DIR}/accuracy/results.json)
CURRENT_MAP=$(jq -r '.current_map' ${OUTPUT_DIR}/accuracy/results.json)
MIN_ACCEPTABLE=$(echo "$BASELINE_MAP * 0.95" | bc -l)

echo "基准mAP: ${BASELINE_MAP}"
echo "当前mAP: ${CURRENT_MAP}"
echo "最小可接受mAP: ${MIN_ACCEPTABLE}"

# Compare against the threshold
if (( $(echo "$CURRENT_MAP >= $MIN_ACCEPTABLE" | bc -l) )); then
    echo "精度验证通过"
    echo "{\"status\": \"pass\", \"baseline_map\": ${BASELINE_MAP}, \"current_map\": ${CURRENT_MAP}, \"min_acceptable\": ${MIN_ACCEPTABLE}}" > ${OUTPUT_DIR}/accuracy/validation_summary.json
else
    echo "精度验证失败"
    echo "{\"status\": \"fail\", \"baseline_map\": ${BASELINE_MAP}, \"current_map\": ${CURRENT_MAP}, \"min_acceptable\": ${MIN_ACCEPTABLE}}" > ${OUTPUT_DIR}/accuracy/validation_summary.json
    exit 1
fi
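
The jq lookups above assume a specific results.json layout: the evaluation scripts must write baseline_map and current_map at the top level. A hypothetical example of the expected shape (the extra fields are purely illustrative):

{
    "baseline_map": 0.432,
    "current_map": 0.427,
    "model_type": "tensorrt",
    "num_images": 5000
}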

Performance Benchmarking

Create scripts/benchmark_performance.sh to automate performance testing:

#!/bin/bash
set -e

# Parse arguments
MODEL_PATH=$1
MODEL_TYPE=$2
OUTPUT_DIR=$3
REPEAT=100  # number of benchmark iterations

# Create the output directory
mkdir -p ${OUTPUT_DIR}/performance

# Record the start time
START_TIME=$(date +%s)

# Pick the benchmark script for the given model type
if [ "${MODEL_TYPE}" = "onnx" ]; then
    python tools/benchmark_onnx.py \
        --model ${MODEL_PATH} \
        --repeat ${REPEAT} \
        --output ${OUTPUT_DIR}/performance/results.json
elif [ "${MODEL_TYPE}" = "tensorrt" ]; then
    python tools/benchmark_tensorrt.py \
        --model ${MODEL_PATH} \
        --repeat ${REPEAT} \
        --output ${OUTPUT_DIR}/performance/results.json
elif [ "${MODEL_TYPE}" = "ncnn" ]; then
    python tools/benchmark_ncnn.py \
        --param ${MODEL_PATH%.bin}.param \
        --bin ${MODEL_PATH} \
        --repeat ${REPEAT} \
        --output ${OUTPUT_DIR}/performance/results.json
else
    echo "不支持的模型类型: ${MODEL_TYPE}"
    exit 1
fi

# Record the end time
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

# Check performance against thresholds
AVG_LATENCY=$(jq -r '.average_latency' ${OUTPUT_DIR}/performance/results.json)
FPS=$(jq -r '.fps' ${OUTPUT_DIR}/performance/results.json)
MAX_LATENCY_THRESHOLD=50  # maximum allowed latency (ms)
MIN_FPS_THRESHOLD=20      # minimum allowed FPS

echo "平均延迟: ${AVG_LATENCY} ms"
echo "FPS: ${FPS}"
echo "测试时长: ${DURATION} s"

# Generate the performance report
echo "{
    \"model_type\": \"${MODEL_TYPE}\",
    \"average_latency_ms\": ${AVG_LATENCY},
    \"fps\": ${FPS},
    \"max_latency_threshold_ms\": ${MAX_LATENCY_THRESHOLD},
    \"min_fps_threshold\": ${MIN_FPS_THRESHOLD},
    \"test_duration_s\": ${DURATION},
    \"test_repeat_count\": ${REPEAT},
    \"timestamp\": \"$(date +%Y-%m-%dT%H:%M:%S)\"
}" > ${OUTPUT_DIR}/performance/summary.json

# Threshold checks
if (( $(echo "${AVG_LATENCY} > ${MAX_LATENCY_THRESHOLD}" | bc -l) )) || (( $(echo "${FPS} < ${MIN_FPS_THRESHOLD}" | bc -l) )); then
    echo "性能未达标"
    exit 1
else
    echo "性能验证通过"
fi
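
The tools/benchmark_*.py scripts referenced above are not part of the YOLOv6 repository and have to be supplied; a minimal sketch of the ONNX variant, emitting the average_latency and fps fields that the shell script reads (onnxruntime assumed installed, with CPU fallback when no GPU provider is available):

# tools/benchmark_onnx.py - measure average latency and FPS for an ONNX model
import argparse
import json
import time

import numpy as np
import onnxruntime as ort

parser = argparse.ArgumentParser()
parser.add_argument("--model", required=True)
parser.add_argument("--repeat", type=int, default=100)
parser.add_argument("--output", required=True)
args = parser.parse_args()

# Prefer the GPU provider when onnxruntime-gpu is installed; fall back to CPU.
sess = ort.InferenceSession(
    args.model, providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
inp = sess.get_inputs()[0]
dummy = np.random.rand(*inp.shape).astype(np.float32)  # assumes a fixed-shape export

# Warm up before timing to exclude first-run initialization costs.
for _ in range(10):
    sess.run(None, {inp.name: dummy})

start = time.perf_counter()
for _ in range(args.repeat):
    sess.run(None, {inp.name: dummy})
elapsed = time.perf_counter() - start

avg_ms = elapsed / args.repeat * 1000
report = {"average_latency": round(avg_ms, 3), "fps": round(args.repeat / elapsed, 2)}
with open(args.output, "w") as f:
    json.dump(report, f)
print(f"avg {avg_ms:.2f} ms, {report['fps']} FPS")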

Deployment Automation: Multi-Platform Workflows

Cloud Deployment (Kubernetes)

Create kubernetes/yolov6-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: yolov6-inference
  namespace: computer-vision
spec:
  replicas: 3
  selector:
    matchLabels:
      app: yolov6
  template:
    metadata:
      labels:
        app: yolov6
    spec:
      containers:
      - name: yolov6-tensorrt
        image: yolov6-tensorrt:latest
        resources:
          limits:
            nvidia.com/gpu: 1
            cpu: "2"
            memory: "4Gi"
          requests:
            cpu: "1"
            memory: "2Gi"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: model-volume
          mountPath: /models
        env:
        - name: MODEL_PATH
          value: MODEL_PATH_VALUE        # replaced by sed in the CI workflow
        - name: INPUT_SIZE
          value: "INPUT_SIZE_VALUE"      # replaced by sed in the CI workflow
        - name: BATCH_SIZE
          value: "BATCH_SIZE_VALUE"      # replaced by sed in the CI workflow
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
      volumes:
      - name: model-volume
        persistentVolumeClaim:
          claimName: yolov6-model-storage
---
apiVersion: v1
kind: Service
metadata:
  name: yolov6-service
  namespace: computer-vision
spec:
  selector:
    app: yolov6
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
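
The deploy_cloud job below also applies a kubernetes/pvc.yaml for the yolov6-model-storage claim mounted above; that file is not shown in this article, so here is a minimal sketch (the access mode and size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: yolov6-model-storage
  namespace: computer-vision
spec:
  accessModes:
    - ReadOnlyMany      # assumed: several replicas read the same model files
  resources:
    requests:
      storage: 5Gi      # assumed size; adjust to your model artifacts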

Edge Deployment

Create scripts/deploy_edge.sh for edge device deployment:

#!/bin/bash
set -e

# Parse arguments
MODEL_PATH=$1
DEVICE_IP=$2
DEVICE_USER=$3
REMOTE_DIR=$4

# Check device reachability
ping -c 3 ${DEVICE_IP} > /dev/null || { echo "Device unreachable: ${DEVICE_IP}"; exit 1; }

# Create the remote directory
ssh ${DEVICE_USER}@${DEVICE_IP} "mkdir -p ${REMOTE_DIR}"

# Copy the model files (NCNN needs the .param and .bin pair)
scp ${MODEL_PATH%.bin}.param ${MODEL_PATH} ${DEVICE_USER}@${DEVICE_IP}:${REMOTE_DIR}/

# Copy the NCNN inference code
scp deploy/NCNN/infer-ncnn-model.py ${DEVICE_USER}@${DEVICE_IP}:${REMOTE_DIR}/

# Copy the systemd service unit
scp configs/edge/yolov6.service ${DEVICE_USER}@${DEVICE_IP}:/tmp/

# Configure the service remotely
ssh ${DEVICE_USER}@${DEVICE_IP} "sudo mv /tmp/yolov6.service /etc/systemd/system/ && \
    sudo systemctl daemon-reload && \
    sudo systemctl enable yolov6 && \
    sudo systemctl restart yolov6"

# Verify the deployment
ssh ${DEVICE_USER}@${DEVICE_IP} "systemctl status yolov6 | grep 'active (running)'" || { echo "Service failed to start"; exit 1; }

echo "边缘设备部署成功: ${DEVICE_IP}"

Android Deployment

Create deploy/NCNN/Android/app/src/main/jni/CMakeLists.txt to configure the Android NDK build:

cmake_minimum_required(VERSION 3.4.1)

# Import the prebuilt NCNN library
set(ncnn_DIR ${CMAKE_SOURCE_DIR}/ncnn-20220420-android-vulkan/${ANDROID_ABI}/lib/cmake/ncnn)
find_package(ncnn REQUIRED)

# Application sources
add_library(yolov6ncnn SHARED
    yolov6ncnn.cpp
    ndkcamera.cpp
    yolo.cpp)

# Link the required libraries
target_link_libraries(yolov6ncnn
    ncnn
    android
    jnigraphics
    mediandk
    camera2ndk
    log)

CI/CD Configuration: GitHub Actions Workflow

Create .github/workflows/deploy.yml to define the complete pipeline:

name: YOLOv6 model deployment pipeline

on:
  workflow_dispatch:
    inputs:
      weights_path:
        description: 'Path to the model weights'
        required: true
        default: 'runs/train/exp/weights/best.pt'
      img_size:
        description: 'Model input size'
        required: true
        default: '640'
      batch_size:
        description: 'Batch size'
        required: true
        default: '1'
      precision:
        description: 'Model precision (fp32/fp16/int8)'
        required: true
        default: 'fp16'
        type: choice
        options:
          - fp32
          - fp16
          - int8
      deploy_target:
        description: 'Deployment target'
        required: true
        default: 'all'
        type: choice
        options:
          - cloud
          - edge
          - mobile
          - all

jobs:
  prepare:
    name: Prepare environment
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v3
      
      - name: Set up Docker buildx
        run: |
          docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
          docker buildx create --name yolov6-builder --use
      
      - name: Build the Docker image
        run: |
          docker buildx build \
            --platform linux/amd64 \
            -t yolov6-deploy:latest \
            -f Dockerfile . \
            --load  # export to the local daemon so "docker save" below works
      
      - name: Save the Docker image
        run: |
          docker save yolov6-deploy:latest > yolov6-deploy.tar
      
      - name: Cache the Docker image
        uses: actions/cache@v3
        with:
          path: yolov6-deploy.tar
          key: docker-image-${{ github.sha }}

  convert_models:
    name: Model conversion
    needs: prepare
    # NOTE: the --gpus steps below require a self-hosted runner with an NVIDIA GPU
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v3
      
      - name: Restore the Docker image cache
        uses: actions/cache@v3
        with:
          path: yolov6-deploy.tar
          key: docker-image-${{ github.sha }}
      
      - name: Load the Docker image
        run: docker load < yolov6-deploy.tar
      
      - name: Create output directories
        run: mkdir -p output/models
      
      - name: Convert to ONNX
        run: |
          docker run --rm -v $(pwd):/workspace -w /workspace \
            --gpus all yolov6-deploy:latest \
            bash scripts/convert_onnx.sh \
              ${{ github.event.inputs.weights_path }} \
              output/models \
              ${{ github.event.inputs.img_size }} \
              ${{ github.event.inputs.batch_size }}
      
      - name: Convert to TensorRT
        run: |
          docker run --rm -v $(pwd):/workspace -w /workspace \
            --gpus all yolov6-deploy:latest \
            bash scripts/convert_tensorrt.sh \
              output/models/onnx/yolov6s.onnx \
              output/models \
              ${{ github.event.inputs.precision }}
      
      - name: Convert to NCNN
        run: |
          docker run --rm -v $(pwd):/workspace -w /workspace \
            yolov6-deploy:latest \
            bash scripts/convert_ncnn.sh \
              ${{ github.event.inputs.weights_path }} \
              output/models \
              ${{ github.event.inputs.img_size }}
      
      - name: Upload converted models
        uses: actions/upload-artifact@v3
        with:
          name: converted-models
          path: output/models/

  validate:
    name: Model validation
    needs: convert_models
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v3
      
      - name: Restore the Docker image cache
        uses: actions/cache@v3
        with:
          path: yolov6-deploy.tar
          key: docker-image-${{ github.sha }}
      
      - name: Load the Docker image
        run: docker load < yolov6-deploy.tar
      
      - name: Download converted models
        uses: actions/download-artifact@v3
        with:
          name: converted-models
          path: output/models/
      
      - name: Prepare the validation dataset
        run: |
          mkdir -p data/val
          # Sample validation images (replace with the full dataset in real use)
          wget https://gitcode.com/gh_mirrors/yo/YOLOv6/raw/master/data/images/image1.jpg -O data/val/image1.jpg
          wget https://gitcode.com/gh_mirrors/yo/YOLOv6/raw/master/data/images/image2.jpg -O data/val/image2.jpg
      
      - name: Validate ONNX model accuracy
        run: |
          docker run --rm -v $(pwd):/workspace -w /workspace \
            --gpus all yolov6-deploy:latest \
            bash scripts/validate_accuracy.sh \
              output/models/onnx/yolov6s.onnx \
              onnx \
              data \
              output/validation
      
      - name: Benchmark TensorRT model performance
        run: |
          docker run --rm -v $(pwd):/workspace -w /workspace \
            --gpus all yolov6-deploy:latest \
            bash scripts/benchmark_performance.sh \
              output/models/tensorrt/yolov6.engine \
              tensorrt \
              output/performance
      
      - name: Upload validation reports
        uses: actions/upload-artifact@v3
        with:
          name: validation-reports
          path: output/validation/

  deploy_cloud:
    name: Cloud deployment
    if: github.event.inputs.deploy_target == 'cloud' || github.event.inputs.deploy_target == 'all'
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v3
      
      - name: Download converted models
        uses: actions/download-artifact@v3
        with:
          name: converted-models
          path: output/models/
      
      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
      
      - name: Set the Kubernetes context
        uses: azure/k8s-set-context@v3
        with:
          kubeconfig: ${{ secrets.KUBE_CONFIG }}
      
      - name: Create the model storage PVC
        run: |
          kubectl apply -f kubernetes/pvc.yaml
      
      - name: Deploy the model to Kubernetes
        run: |
          # Substitute the model path and parameters into the deployment manifest
          sed -i "s|MODEL_PATH_VALUE|/models/tensorrt/yolov6.engine|g" kubernetes/yolov6-deployment.yaml
          sed -i "s|INPUT_SIZE_VALUE|${{ github.event.inputs.img_size }}|g" kubernetes/yolov6-deployment.yaml
          sed -i "s|BATCH_SIZE_VALUE|${{ github.event.inputs.batch_size }}|g" kubernetes/yolov6-deployment.yaml
          
          kubectl apply -f kubernetes/yolov6-deployment.yaml
      
      - name: Verify the rollout
        run: |
          kubectl rollout status deployment/yolov6-inference -n computer-vision
          kubectl get pods -n computer-vision
      
      - name: Run a smoke test
        run: |
          SERVICE_IP=$(kubectl get svc yolov6-service -n computer-vision -o jsonpath='{.spec.clusterIP}')
          curl -X POST http://${SERVICE_IP}/infer \
            -H "Content-Type: application/json" \
            -d '{"image_url": "https://gitcode.com/gh_mirrors/yo/YOLOv6/raw/master/data/images/image1.jpg"}'

  deploy_edge:
    name: Edge deployment
    if: github.event.inputs.deploy_target == 'edge' || github.event.inputs.deploy_target == 'all'
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v3
      
      - name: Download converted models
        uses: actions/download-artifact@v3
        with:
          name: converted-models
          path: output/models/
      
      - name: Deploy to the edge device
        run: |
          bash scripts/deploy_edge.sh \
            output/models/ncnn/yolov6.ncnn.bin \
            ${{ secrets.EDGE_DEVICE_IP }} \
            ${{ secrets.EDGE_DEVICE_USER }} \
            /opt/yolov6/models

  deploy_mobile:
    name: Mobile deployment
    if: github.event.inputs.deploy_target == 'mobile' || github.event.inputs.deploy_target == 'all'
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v3
      
      - name: Download converted models
        uses: actions/download-artifact@v3
        with:
          name: converted-models
          path: output/models/
      
      - name: Set up the Android SDK
        uses: android-actions/setup-android@v3
      
      - name: Copy NCNN models into the Android project
        run: |
          mkdir -p deploy/NCNN/Android/app/src/main/assets/
          cp output/models/ncnn/yolov6.ncnn.param deploy/NCNN/Android/app/src/main/assets/
          cp output/models/ncnn/yolov6.ncnn.bin deploy/NCNN/Android/app/src/main/assets/
      
      - name: Build the Android app
        run: |
          cd deploy/NCNN/Android
          ./gradlew assembleRelease
      
      - name: Upload the APK
        uses: actions/upload-artifact@v3
        with:
          name: yolov6-android-app
          path: deploy/NCNN/Android/app/build/outputs/apk/release/app-release.apk

  finalize:
    name: Pipeline summary
    needs: [deploy_cloud, deploy_edge, deploy_mobile]
    if: always()
    runs-on: ubuntu-latest
    steps:
      - name: Summarize results
        run: |
          echo "YOLOv6 model deployment pipeline finished"
          echo "Deployment target: ${{ github.event.inputs.deploy_target }}"
          echo "Model precision: ${{ github.event.inputs.precision }}"
          echo "Input size: ${{ github.event.inputs.img_size }}"
      
      - name: Send a notification
        uses: 8398a7/action-slack@v3
        with:
          status: ${{ job.status }}
          fields: repo,message,commit,author,action,eventName,ref,workflow
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
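
Because the pipeline is triggered by workflow_dispatch, it can be launched from the Actions tab or from the GitHub CLI; a usage sketch (assuming the file is saved as deploy.yml):

gh workflow run deploy.yml \
  -f weights_path=runs/train/exp/weights/best.pt \
  -f img_size=640 \
  -f batch_size=1 \
  -f precision=fp16 \
  -f deploy_target=cloud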

Pipeline Optimization and Best Practices

Performance Optimization Strategies

  1. Parallelize model conversion: convert the different formats concurrently to shorten overall build time

    # Jobs with no "needs" dependency between them run in parallel in GitHub Actions
    jobs:
      convert_onnx:
        # ONNX conversion job definition
      convert_tensorrt:
        # TensorRT conversion job definition
      convert_ncnn:
        # NCNN conversion job definition
    
  2. Apply caching: cache Docker images, dependency libraries, and intermediate results

    - name: Cache Python dependencies
      uses: actions/cache@v3
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
        restore-keys: |
          ${{ runner.os }}-pip-
    
  3. Incremental builds: only re-process models and code that actually changed

    # Add an incremental check to the conversion script
    if [ ! -f "${OUTPUT_DIR}/onnx/yolov6s.onnx" ] || [ "${WEIGHTS_PATH}" -nt "${OUTPUT_DIR}/onnx/yolov6s.onnx" ]; then
        :  # run the model conversion here
    else
        echo "ONNX model is up to date, skipping conversion"
    fi
    

Error Handling and Recovery

  1. Retry critical steps: add automatic retries to failure-prone steps. GitHub Actions has no built-in retry key for steps, so use a retry action such as nick-fields/retry:

    - name: Model conversion (with retries)
      uses: nick-fields/retry@v2
      with:
        timeout_minutes: 30
        max_attempts: 3
        command: bash scripts/convert_onnx.sh ...
    
  2. Fail fast: catch errors early in the pipeline to avoid wasted execution

    # Check model file integrity before converting
    if ! python -c "import torch; torch.load('${WEIGHTS_PATH}')"; then
        echo "Model file is corrupt or invalid"
        exit 1
    fi
    
  3. Rollback strategy: automatically roll back to the last stable version when a deployment fails

    # Example deployment rollback
    if ! kubectl rollout status deployment/yolov6-inference; then
        echo "Deployment failed, rolling back to the previous version"
        kubectl rollout undo deployment/yolov6-inference
        exit 1
    fi
    

Conclusion and Outlook

This article covered the design and implementation of a CI/CD pipeline for YOLOv6 model deployment: a fully automated flow spanning environment setup, model conversion, quality validation, and multi-platform deployment. The pipeline markedly lowers the barrier to deploying models, speeds up releases, and safeguards the quality and reliability of what ships.

Several directions remain for future work:

  1. Integrated model compression: fold pruning, knowledge distillation, and similar optimizations into the pipeline
  2. Adaptive deployment: automatically choose the best model precision and input size for the target device's capabilities
  3. A/B testing framework: deploy multiple model versions in parallel and compare their results
  4. Monitoring feedback loop: collect online inference data to automatically trigger retraining and model updates

With continued refinement and extension, this pipeline can support more complex business scenarios and give YOLOv6 models a more complete path into production.

Appendix: Command Reference

Command | Purpose
bash scripts/convert_onnx.sh weights/best.pt output 640 1 | Convert to ONNX
bash scripts/convert_tensorrt.sh output/onnx/yolov6s.onnx output fp16 | Convert to TensorRT
bash scripts/convert_ncnn.sh weights/best.pt output 640 | Convert to NCNN
bash scripts/validate_accuracy.sh output/models/tensorrt/yolov6.engine tensorrt data/coco output/validation | Validate model accuracy
bash scripts/benchmark_performance.sh output/models/ncnn/yolov6.ncnn.bin ncnn output/performance | Benchmark model performance
kubectl apply -f kubernetes/yolov6-deployment.yaml | Deploy to Kubernetes

Disclosure: parts of this article were produced with AI assistance (AIGC) and are provided for reference only.
