Training Your Own COCO-Format Dataset with Detectron2 and the EVA-02 Pretrained Model

EVA sources and related materials

  1. EVA GitHub code
  2. EVA: Exploring the Limits of Masked Visual Representation Learning at Scale (paperswithcode.com)
  3. Detectron2 quick start: testing with a webcam
  4. detectron2/MODEL_ZOO.md at main · facebookresearch/detectron2 (github.com): download the model model_final_f10217.pkl
  5. Object detection | Common dataset annotation formats and conversion code (JUST LOVE SMILE's blog, CSDN)
  6. Deep learning | Detectron2 usage guide (JUST LOVE SMILE's blog, CSDN)
  7. Using custom datasets — detectron2 0.6 documentation
  8. Training your own dataset with Detectron2

Environment setup

Training environment info

[11/08 08:30:16] detectron2 INFO: Environment info:
----------------------  -------------------------------------------------------------------------
sys.platform            linux
Python                  3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0]
numpy                   1.23.2
detectron2              0.6 @/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2
Compiler                GCC 9.4
CUDA compiler           CUDA 11.8
detectron2 arch flags   3.5, 3.7, 5.0, 5.2, 5.3, 6.0, 6.1, 7.0, 7.5
DETECTRON2_ENV_MODULE   <not set>
PyTorch                 2.0.1+cu118 @/usr/local/lib/python3.8/dist-packages/torch
PyTorch debug build     False
GPU available           Yes
GPU 0,1,2,3             NVIDIA GeForce RTX 3090 (arch=8.6)
Driver version          525.105.17
CUDA_HOME               /usr/local/cuda
TORCH_CUDA_ARCH_LIST    Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing
Pillow                  8.4.0
torchvision             0.15.2+cu118 @/usr/local/lib/python3.8/dist-packages/torchvision
torchvision arch flags  3.5, 5.0, 6.0, 7.0, 7.5, 8.0, 8.6
fvcore                  0.1.5.post20221221
iopath                  0.1.9
cv2                     4.8.1

Generating requirements.txt for the environment

pip freeze > requirements.txt
absl-py==2.0.0
addict==2.4.0
antlr4-python3-runtime==4.9.3
appdirs==1.4.4
black==22.3.0
cachetools==5.3.2
certifi==2023.7.22
charset-normalizer==3.3.1
click==8.1.7
cloudpickle==3.0.0
cmake==3.27.7
contourpy==1.1.1
cryptography==2.8
cycler==0.12.1
dbus-python==1.2.16
# Editable install with no version control (detectron2==0.6)
-e /mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det
distro==1.8.0
docker-pycreds==0.4.0
einops==0.7.0
fairscale==0.4.13
filelock==3.13.0
fonttools==4.43.1
fsspec==2023.10.0
future==0.18.3
fvcore==0.1.5.post20221221
gitdb==4.0.11
GitPython==3.1.40
google-auth==2.23.3
google-auth-oauthlib==1.0.0
grpcio==1.59.0
huggingface-hub==0.18.0
hydra-core==1.3.2
idna==3.4
importlib-metadata==6.8.0
importlib-resources==6.1.0
iopath==0.1.9
Jinja2==3.1.2
kiwisolver==1.4.5
lit==15.0.7
Markdown==3.5
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib==3.7.3
mdurl==0.1.2
mmcv==2.1.0
mmengine==0.9.1
mpmath==1.3.0
mypy-extensions==1.0.0
networkx==3.1
ninja==1.11.1.1
numpy==1.23.2
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.3.52
nvidia-nvtx-cu12==12.1.105
oauthlib==3.2.2
omegaconf==2.3.0
onnx==1.15.0
opencv-python==4.8.1.78
packaging==23.2
pathspec==0.11.2
pathtools==0.1.2
Pillow==8.4.0
platformdirs==3.11.0
portalocker==2.8.2
protobuf==4.24.4
psutil==5.9.6
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycocotools==2.0.7
pydot==1.4.2
Pygments==2.16.1
PyGObject==3.36.0
pyOpenSSL==19.0.0
pyparsing==3.1.1
pyre-extensions==0.0.29
python-dateutil==2.8.2
PyYAML==6.0.1
requests==2.31.0
requests-oauthlib==1.3.1
rich==13.6.0
rsa==4.9
safetensors==0.4.0
scikit-build==0.17.6
scipy==1.10.1
sentry-sdk==1.32.0
setproctitle==1.3.3
setuptools-scm==8.0.4
shapely==2.0.2
six==1.14.0
smmap==5.0.1
sympy==1.12
tabulate==0.9.0
tensorboard==2.14.0
tensorboard-data-server==0.7.2
termcolor==2.3.0
timm==0.9.8
tomli==2.0.1
torch==2.0.1+cu118
torchaudio==2.0.2+cu118
torchvision==0.15.2+cu118
tqdm==4.66.1
triton==2.0.0
typing-inspect==0.9.0
typing_extensions==4.8.0
urllib3==2.0.7
wandb==0.15.12
Werkzeug==3.0.1
xformers==0.0.20
xops==0.0.1
yacs==0.1.8
yapf==0.40.2
zipp==3.17.0

1. docker build (see EVA/EVA-02/det/docker/README.md at master · baaivision/EVA, github.com)

#cd /mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/docker
cd docker/
# Build (you can query your UID with `id -u`):
#docker build --build-arg USER_ID=$UID -t detectron2:v0 .
docker build --build-arg USER_ID=1000 -t detectron2:v0 .

# Launch (require GPUs):
#Create the container
docker run --gpus all -it \
  --shm-size=156gb --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /mnt/data:/mnt/data -v /mnt/data1:/mnt/data1 -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video --name=detectron2 detectron2:v0

# Grant docker access to host X server to show images
xhost +local:`docker inspect --format='{{ .Config.Hostname }}' detectron2`
#Install missing packages
pip install matplotlib pip -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com

apt-get install g++
#After entering the container, build and install
cd EVA-master-project/EVA-02/det
python -m pip install -e . -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
#Install missing packages
pip install Pillow==8.4.0 scipy numpy einops opencv-python -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com

pip install shapely -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com

pip install numpy==1.23.2 xformers -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com


#pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117 -i http://pypi.tuna.tsinghua.edu.cn/simple/ --trusted-host pypi.tuna.tsinghua.edu.cn

2. Dockerfile

#FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04
FROM nvidia/cuda:11.8.0-devel-ubuntu18.04
# use an older system (18.04) to avoid opencv incompatibility (issue#3524)

ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update --fix-missing && apt-get install -y \
	python3-opencv ca-certificates python3-dev git wget sudo ninja-build
RUN ln -sv /usr/bin/python3 /usr/bin/python

# create a non-root user
ARG USER_ID=1000
RUN useradd -m --no-log-init --system  --uid ${USER_ID} appuser -g sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER appuser
WORKDIR /home/appuser

ENV PATH="/home/appuser/.local/bin:${PATH}"
#RUN wget https://bootstrap.pypa.io/pip/3.6/get-pip.py && \
	#python3 get-pip.py --user && \
	#rm get-pip.py
RUN sudo apt update && sudo apt install -y python3-pip
RUN pip3 install setuptools_scm  -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
RUN pip3 install scikit-build  -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
#RUN pip3 install tokenize -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
RUN pip3 install setuptools -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com

RUN pip3 install --upgrade pip -i http://pypi.tuna.tsinghua.edu.cn/simple/ --trusted-host pypi.tuna.tsinghua.edu.cn
RUN pip3 install cmake  -i http://pypi.tuna.tsinghua.edu.cn/simple/ --trusted-host pypi.tuna.tsinghua.edu.cn

# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
RUN pip3 install --user  tensorboard  onnx -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
  # cmake from apt-get is too old
#RUN pip3 install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html -i http://pypi.tuna.tsinghua.edu.cn/simple/ --trusted-host pypi.tuna.tsinghua.edu.cn
RUN pip3 install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118 -i http://pypi.tuna.tsinghua.edu.cn/simple/ --trusted-host pypi.tuna.tsinghua.edu.cn
#RUN pip3 install --user 'git+https://github.com/facebookresearch/fvcore'
# install detectron2
#RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo
# set FORCE_CUDA because during `docker build` cuda is not accessible
ENV FORCE_CUDA="1"
# This will by default build detectron2 for all common cuda architectures and take a lot more time,
# because inside `docker build`, there is no way to tell which architecture will be used.
ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing"
ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}"

#RUN pip3 install --user -e detectron2_repo

# Set a fixed model cache directory.
ENV FVCORE_CACHE="/tmp"
WORKDIR /home/appuser/detectron2_repo

# run detectron2 under user "appuser":
# wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg
# python3 demo/demo.py  \
	#--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
	#--input input.jpg --output outputs/ \
	#--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl

Inference test

 wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg 
 
python3 demo/demo.py \
  --config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
  --input input.jpg --output outputs/ \
  --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl

#my_detect
python demo/demo.py --config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --input demo/input.jpg --output demo/out --opts MODEL.WEIGHTS demo/demo-model/model_final_f10217.pkl

Resolving an error

Traceback (most recent call last):
  File "demo/demo.py", line 112, in <module>
    predictions, visualized_output = demo.run_on_image(img)
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/demo/predictor.py", line 48, in run_on_image
    predictions = self.predictor(image)
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/engine/defaults.py", line 328, in __call__
    predictions = self.model([inputs])[0]
  File "/root/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/modeling/meta_arch/rcnn.py", line 157, in forward
    return self.inference(batched_inputs)
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/modeling/meta_arch/rcnn.py", line 230, in inference
    results, _ = self.roi_heads(images, features, proposals, None, keep_all_before_merge=keep_all_before_merge)
  File "/root/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'keep_all_before_merge'
#Fix: remove keep_all_before_merge=keep_all_before_merge
#results, _ = self.roi_heads(images, features, proposals, None, keep_all_before_merge=keep_all_before_merge)
results, _ = self.roi_heads(images, features, proposals, None)

Detection result

(detection result image)

Model selection

  • model

  • eva02_L_coco_det_sys_o365.pth

  • config
    EVA-02/det/projects/ViTDet/configs/eva2_o365_to_coco/eva2_o365_to_coco_cascade_mask_rcnn_vitdet_l_8attn_1536_lrd0p8.py

Training script

All configs can be trained with:

python tools/lazyconfig_train_net.py \
    --num-gpus 8  --num-machines ${WORLD_SIZE} --machine-rank ${RANK} --dist-url "tcp://$MASTER_ADDR:60900" \
    --config-file /path/to/config.py \
    train.init_checkpoint=/path/to/init_checkpoint.pth \
    train.output_dir=/path/to/output
# my training command
python tools/my_lazyconfig_train_net.py --num-gpus 4  --num-machines 1 --machine-rank 0   --config-file projects/ViTDet/configs/eva2_o365_to_coco/eva2_o365_to_coco_cascade_mask_rcnn_vitdet_l_8attn_1536_lrd0p8.py train.init_checkpoint=/mnt/data1/download_new/EVA/models-EVA-02/model-select/eva02_L_coco_det_sys_o365.pth train.output_dir=/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/work-dir
# Training can resume from a checkpoint via --resume (see EVA-master-project/EVA-02/det/detectron2/engine/defaults.py), as shown below
--resume
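For example (a sketch that simply reuses the command above), resuming picks up from the last checkpoint found in train.output_dir:

python tools/my_lazyconfig_train_net.py --resume --num-gpus 4 --num-machines 1 --machine-rank 0 --config-file projects/ViTDet/configs/eva2_o365_to_coco/eva2_o365_to_coco_cascade_mask_rcnn_vitdet_l_8attn_1536_lrd0p8.py train.init_checkpoint=/mnt/data1/download_new/EVA/models-EVA-02/model-select/eva02_L_coco_det_sys_o365.pth train.output_dir=/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/work-dir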

Training your own dataset with Detectron2

  1. Convert your dataset to COCO or VOC format.
  2. Register the dataset with Detectron2: put the dataset-loading code into the training script (a minimal sketch follows this list).
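As mentioned in step 2, a minimal registration sketch (the "/path/to/..." paths are hypothetical) using detectron2's built-in register_coco_instances helper looks like this; the full registration used in this post appears later in my_lazyconfig_train_net.py:

from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "mydata_train",                                   # dataset name used in configs
    {},                                               # extra metadata (optional)
    "/path/to/annotations/instances_train2017.json",  # COCO annotation json (hypothetical path)
    "/path/to/train2017",                             # image folder (hypothetical path)
)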

Dataset conversion

Converting XML to COCO format in mmdetection - Zhihu (zhihu.com)

Some models (e.g. mask_rcnn) need segmentation data for training.

#A script can be used to add the mask (segmentation) data:
import json

def convert_bbox_to_polygon(bbox):
    # COCO bbox is [x, y, w, h]; build a rectangular polygon from its corners
    x = bbox[0]
    y = bbox[1]
    w = bbox[2]
    h = bbox[3]
    polygon = [x, y, (x + w), y, (x + w), (y + h), x, (y + h)]
    return [polygon]

def main():
    file_path = "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/annotations/instances_val2017.json"
    with open(file_path) as f:
        data = json.load(f)
    # Derive a rectangular segmentation polygon from each annotation's bbox
    for line in data["annotations"]:
        line["segmentation"] = convert_bbox_to_polygon(line["bbox"])
    with open("/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/annotations/instances_val2017_1.json", 'w') as f:
        f.write(json.dumps(data))
    print('DONE')

main()
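A quick sanity check on the rewritten file (a sketch, assuming pycocotools is installed):

# Load the rewritten json and confirm every annotation now carries a polygon.
from pycocotools.coco import COCO

coco = COCO("/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/annotations/instances_val2017_1.json")
assert all(len(ann["segmentation"]) > 0 for ann in coco.anns.values())
print(len(coco.anns), "annotations, all with segmentation polygons")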

The CLASS_NAMES list

Pay close attention to the CLASS_NAMES list: its order must match the category ID order in your COCO-format file, because the program maps CLASS_NAMES onto [0, len(CLASS_NAMES)). If your COCO-format dataset's category_id starts at 1, it is best to add a category_id: 0, name: background entry to your json. The background class needs no annotation records (region, width, height, etc.), but the category entry itself must exist, otherwise evaluation after training will break. Counting a background class has drawbacks, however: it adds feature maps to the classification and regression heads, and it can lower the computed mAP, because an extra background class is included. The recommendation is therefore to have category_id start directly at 0 in your json, so no background entry is needed; for non-COCO datasets, use indices 0 to n-1 (n being the number of classes), e.g. category_id: 0, name: name1; category_id: 1, name: name2; and so on. Use English class names: Chinese names may be garbled.
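For example, the recommended 0-indexed "categories" entry for a hypothetical three-class dataset (name1/name2/name3 are placeholders), written as a Python list of dicts:

categories = [
    {"id": 0, "name": "name1"},
    {"id": 1, "name": "name2"},
    {"id": 2, "name": "name3"},
]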

By default, the VOC2COCO.py from "Converting XML to COCO format in mmdetection - Zhihu (zhihu.com)" starts category_id at 1.

=Note=

#VOC2COCO.py can be modified as follows:
'''Change
    label_ids = {name: i + 1 for i, name in enumerate(category_list)}
to:
    label_ids = {name: i  for i, name in enumerate(category_list)}'''
import shutil
import cv2
from tqdm import tqdm
import sys, os, json, glob
import xml.etree.ElementTree as ET

category_list = ['Illegal_ad_poster', 'garbage_paper_type', 'garbage_plastic_bag', 'advertisement_hanging', 'Illegal_ad_paper',
        'street_merchant_booth', 'foam_box', 'plastic_crate', 'paper_box', 'paper_box_pile', 'packed_goods',
        'outdoor_umbrella_open', 'street_merchant_general_goods', 'hang_clothes', 'instant_canopy_open',
        'street_merchant_clothes', 'street_merchant_fruit', 'advertisement_light_box', 'street_merchant_basket',
        'stool', 'banner_horizontal', 'tarpaulin', 'wood_plank', 'garbage_pop_can', 'other_waste', 'float_garbage',
        'bottle', 'street_merchant_meat_stall', 'foam_box_pile', 'chair', 'traffic_cone', 'onetime_dinnerware',
        'isolation_guardrail', 'street_merchant_vegetable', 'advertisement_roll', 'float_bucket',
        'street_merchant_auto', 'dumps', 'construction_enclosure', 'plastic_road_barrier_down', 'sandpile', 'pvc_stack',
        'bucket', 'used_door_window', 'garbage_tree_leaves_pile', 'street_merchant_tricycle', 'commercial_trash_can',
        'garbage_can_without_cover', 'warning_column', 'traffic_cone_down', 'table', 'garbage_can_close',
        'plastic_crate_pile', 'outdoor_umbrella_close', 'commercial_trash_can_overflow', 'packed_refuse_stack',
        'road_cracking', 'freezer', 'packed_refuse', 'tie_paper_box', 'wood_working_ladder', 'instant_canopy_close',
        'material_bag', 'doorway_tyre_pile', 'float_watermilfoil', 'wood_crate_pile', 'steel_tube', 'scaffold',
        'garbage_construction_waste', 'cigarette_end', 'garbage_can_open', 'plastic_road_barrier',
        'garbage_can_overflow', 'pothole', 'sewage', 'plastic_bull_barrel', 'construction_sign', 'garbage_book',
        'Illegal_ad_painting', 'bricks_red', 'advertisement_column', 'wood_crate', 'garbage_dirt_mound',
        'toilet_paper_pile', 'bricks_grey', 'used_sofa', 'street_merchant_toy', 'used_cabinet', 'doorway_tyre',
        'advertisement_flag', 'garbage_dixie_cup', 'plastic_bull_barrel_damage', 'warning_column_damage', 'tire_dirty',
        'plastic_lane_divider_down', 'float_duckweed', 'used_bed', 'warning_column_down', 'loess_space', 'float_foam',
        'road_waterlogging', 'hang_beddings', 'plastic_lane_divider', 'iron_horse_guardrail', 'garbage_luggage',
        'advertisement_truss', 'concrete_mixer']

def convert_to_cocodetection(dir, datasets_name, output_dir):
    """
    input:
        dir:the path to DIOR dataset
        output_dir:the path write the coco form json file
    """
    annotations_path = os.path.join(dir, "Annotations")
    namelist_path = os.path.join(dir, "ImageSets")
    train_images_path = os.path.join(dir, "train2017")
    val_images_path = os.path.join(dir, "val2017")
    id_num = 0

    # Store the dataset's category info in a dict (ids originally started at 1; mmdet ids start at 0)
    #label_ids = {name: i + 1 for i, name in enumerate(category_list)}
    label_ids = {name: i  for i, name in enumerate(category_list)}
    categories = []
    for k, v in label_ids.items():
        categories.append({"name": k, "id": v})

    # Read the xml files and convert them to json
    for mode in ["train", "val"]:
        images = []
        annotations = []
        print(f"start loading {mode} data...")
        if mode == "train":
            f = open(namelist_path + "/" + "train.txt", "r")
            images_path = train_images_path
        else:
            f = open(namelist_path + "/" + "val.txt", "r")
            images_path = val_images_path

        # Read each image name in the train or val split
        for name in tqdm(f.readlines()):
            # Basic image info
            image = {}
            name = name.replace("\n", "")
            image_name = name + ".jpeg"
            annotation_name = name + ".xml"
            # Get the image height and width
            height, width = cv2.imread(images_path + "/" + image_name).shape[:2]
            # Fill in the image dict
            image["file_name"] = image_name
            image["height"] = height
            image["width"] = width
            image["id"] = name
            images.append(image)

            # Parse the xml annotation file
            tree = ET.parse(annotations_path + "/" + annotation_name)
            root = tree.getroot()
            for obj in root.iter('object'):
                annotation = {}
                # Get the category name (a string)
                category = obj.find('name').text
                # Skip categories that are not in our predefined class list
                if category not in category_list:
                    continue
                # Find the bndbox element
                xmlbox = obj.find('bndbox')
                # Read the bndbox coordinates: xmin, ymin, xmax, ymax
                xmin=float(xmlbox.find('xmin').text)
                ymin=float(xmlbox.find('ymin').text)
                xmax=float(xmlbox.find('xmax').text)
                ymax=float(xmlbox.find('ymax').text)
                # Clip the bounding-box coordinates to the image
                xmin = float(xmin) if float(xmin) > 0 else 0
                ymin = float(ymin) if float(ymin) > 0 else 0
                xmax = float(xmax) if float(xmax) < width else width - 1
                ymax = float(ymax) if float(ymax) < height else height - 1

                #bbox = (float(xmlbox.find('xmin').text), float(xmlbox.find('ymin').text),
                        #float(xmlbox.find('xmax').text), float(xmlbox.find('ymax').text))
                bbox = (xmin, ymin,
                        xmax, ymax)
                # Convert to integers
                bbox = [int(i) for i in bbox]
                # Convert VOC xyxy format to COCO xywh format
                bbox = xyxy_to_xywh(bbox)
                # Store the xml info into the annotations list
                annotation["image_id"] = name
                annotation["bbox"] = bbox
                annotation["category_id"] = category_list.index(category)
                annotation["id"] = id_num
                annotation["iscrowd"] = 0
                annotation["segmentation"] = []
                annotation["area"] = bbox[2] * bbox[3]
                id_num += 1
                annotations.append(annotation)

        # Gather everything into a dict and save it
        dataset_dict = {}
        dataset_dict["images"] = images
        dataset_dict["annotations"] = annotations
        dataset_dict["categories"] = categories
        json_str = json.dumps(dataset_dict)
        save_file = f'{output_dir}/{datasets_name}_{mode}.json'
        with open(save_file, 'w') as json_file:
            json_file.write(json_str)
    print("json file write done...")


def xyxy_to_xywh(boxes):
    width = boxes[2] - boxes[0]
    height = boxes[3] - boxes[1]
    return [boxes[0], boxes[1], width, height]


if __name__ == '__main__':
    # Dataset root path
    DATASET_ROOT = '/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/'
    # Dataset name
    DATASET_NAME = 'dataset-city-2023.9.14-puti-coco-json-2023.11.07'
    # Output path for the coco-format annotations
    JSON_ROOT = '/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/annotations/'
    # Recursively delete the previous output folder, then recreate it
    try:
        shutil.rmtree(JSON_ROOT)
    except OSError:
        pass
    os.mkdir(JSON_ROOT)
    convert_to_cocodetection(dir=DATASET_ROOT, datasets_name=DATASET_NAME, output_dir=JSON_ROOT)
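A quick check on the generated files (a sketch, assuming pycocotools is installed; JSON_ROOT and DATASET_NAME are the variables defined in the script above):

from pycocotools.coco import COCO

coco = COCO(f'{JSON_ROOT}/{DATASET_NAME}_val.json')
print(len(coco.imgs), "images,", len(coco.anns), "annotations")
cat_ids = sorted(coco.getCatIds())
assert cat_ids[0] == 0 and cat_ids[-1] == len(cat_ids) - 1  # 0-indexed, contiguous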

eva2_o365_to_coco_cascade_mask_rcnn_vitdet_l_8attn_1536_lrd0p8.py

from functools import partial  
  
from ..common.coco_loader_lsj_1536 import dataloader  
from .cascade_mask_rcnn_vitdet_b_100ep import (  
    lr_multiplier,  
    model,  
    train,  
    optimizer,  
    get_vit_lr_decay_rate,  
)  
  
from detectron2.config import LazyCall as L  
from fvcore.common.param_scheduler import *  
from detectron2.solver import WarmupParamScheduler  
# Old-style (yacs) keys like DATASETS/DATALOADER/MODEL/SOLVER do not apply to
# LazyConfig files; kept here only as commented-out reminders:
#DATASETS = dict(TRAIN="mydata_train", TEST="mydata_val")
#DATALOADER = {"NUM_WORKERS": 2}
#MODEL = {"WEIGHTS": r"/mnt/data1/download_new/EVA/models-EVA-02/model-select/eva02_L_coco_det_sys_o365.pth"}
#SOLVER = {"IMS_PER_BATCH": 2, "REFERENCE_WORLD_SIZE": 4}
  
  
model.backbone.net.img_size = 1536
model.backbone.square_pad = 1536
model.backbone.net.patch_size = 16
model.backbone.net.window_size = 16
model.backbone.net.embed_dim = 1024  
model.backbone.net.depth = 24  
model.backbone.net.num_heads = 16  
model.backbone.net.mlp_ratio = 4*2/3  
model.backbone.net.use_act_checkpoint = True  
model.backbone.net.drop_path_rate = 0.3  
# Set num_classes to the number of classes in the dataset
model.roi_heads.num_classes=107  
#model.roi_heads.box_predictors=107  
#model.roi_heads.mask_head=107  
  
# 2, 5, 8, 11, 14, 17, 20, 23 for global attention  
model.backbone.net.window_block_indexes = (  
    list(range(0, 2)) + list(range(3, 5)) + list(range(6, 8)) + list(range(9, 11)) + list(range(12, 14)) + list(range(15, 17)) + list(range(18, 20)) + list(range(21, 23))  
)  
  
optimizer.lr=4e-5  
optimizer.params.lr_factor_func = partial(get_vit_lr_decay_rate, lr_decay_rate=0.8, num_layers=24)  
optimizer.params.overrides = {}  
optimizer.params.weight_decay_norm = None  
# Maximum number of training iterations
train.max_iter = 100  
  
train.model_ema.enabled=True  
train.model_ema.device="cuda"  
train.model_ema.decay=0.9999  
  
lr_multiplier = L(WarmupParamScheduler)(  
    scheduler=L(CosineParamScheduler)(  
        start_value=1,  
        end_value=1,  
    ),  
    warmup_length=0.01,  
    warmup_factor=0.001,  
)  
  
dataloader.test.num_workers=0  
dataloader.train.total_batch_size=4  
# Point the dataloader dataset names and the evaluator at the registered custom datasets
dataloader.test.dataset.names="mydata_val"  
dataloader.train.dataset.names="mydata_train"  
dataloader.evaluator.dataset_name="mydata_val"
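As a cross-check on the attention layout (a sketch; the global-attention indices 2, 5, 8, ..., 23 come from the comment in the config above), the same window_block_indexes list can be derived programmatically:

# Blocks 2, 5, 8, ..., 23 use global attention; all remaining blocks of the
# 24-layer ViT-L use windowed attention.
global_attn = set(range(2, 24, 3))
window_block_indexes = [i for i in range(24) if i not in global_attn]
print(window_block_indexes)  # [0, 1, 3, 4, 6, 7, 9, 10, 12, 13, 15, 16, 18, 19, 21, 22]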

EVA-master-project/EVA-02/det/tools/my_lazyconfig_train_net.py

#!/usr/bin/env python  
# Copyright (c) Facebook, Inc. and its affiliates.  
"""  
Training script using the new "LazyConfig" python config files.  
  
This scripts reads a given python config file and runs the training or evaluation.  
It can be used to train any models or dataset as long as they can be  
instantiated by the recursive construction defined in the given config file.  
  
Besides lazy construction of models, dataloader, etc., this scripts expects a  
few common configuration parameters currently defined in "configs/common/train.py".  
To add more complicated training logic, you can easily add other configs  
in the config file and implement a new train_net.py to handle them.  
"""  
import logging  
import os  
  
from detectron2.checkpoint import DetectionCheckpointer  
from detectron2.config import LazyConfig, instantiate  
from detectron2.engine import (  
    AMPTrainer,  
    SimpleTrainer,  
    default_argument_parser,  
    default_setup,  
    default_writers,  
    hooks,  
    launch,  
)  
from detectron2.engine.defaults import create_ddp_model  
from detectron2.evaluation import inference_on_dataset, print_csv_format  
from detectron2.utils import comm  
  
from detectron2.modeling import GeneralizedRCNNWithTTA, ema  
  
  
logger = logging.getLogger("detectron2")  
"""添加代码"""  
from detectron2.data.datasets import register_coco_instances  
from detectron2.data import MetadataCatalog  
# Custom class list containing the dataset's object category names
classnames=['Illegal_ad_poster', 'garbage_paper_type', 'garbage_plastic_bag', 'advertisement_hanging', 'Illegal_ad_paper',  
        'street_merchant_booth', 'foam_box', 'plastic_crate', 'paper_box', 'paper_box_pile', 'packed_goods',  
        'outdoor_umbrella_open', 'street_merchant_general_goods', 'hang_clothes', 'instant_canopy_open',  
        'street_merchant_clothes', 'street_merchant_fruit', 'advertisement_light_box', 'street_merchant_basket',  
        'stool', 'banner_horizontal', 'tarpaulin', 'wood_plank', 'garbage_pop_can', 'other_waste', 'float_garbage',  
        'bottle', 'street_merchant_meat_stall', 'foam_box_pile', 'chair', 'traffic_cone', 'onetime_dinnerware',  
        'isolation_guardrail', 'street_merchant_vegetable', 'advertisement_roll', 'float_bucket',  
        'street_merchant_auto', 'dumps', 'construction_enclosure', 'plastic_road_barrier_down', 'sandpile', 'pvc_stack',  
        'bucket', 'used_door_window', 'garbage_tree_leaves_pile', 'street_merchant_tricycle', 'commercial_trash_can',  
        'garbage_can_without_cover', 'warning_column', 'traffic_cone_down', 'table', 'garbage_can_close',  
        'plastic_crate_pile', 'outdoor_umbrella_close', 'commercial_trash_can_overflow', 'packed_refuse_stack',  
        'road_cracking', 'freezer', 'packed_refuse', 'tie_paper_box', 'wood_working_ladder', 'instant_canopy_close',  
        'material_bag', 'doorway_tyre_pile', 'float_watermilfoil', 'wood_crate_pile', 'steel_tube', 'scaffold',  
        'garbage_construction_waste', 'cigarette_end', 'garbage_can_open', 'plastic_road_barrier',  
        'garbage_can_overflow', 'pothole', 'sewage', 'plastic_bull_barrel', 'construction_sign', 'garbage_book',  
        'Illegal_ad_painting', 'bricks_red', 'advertisement_column', 'wood_crate', 'garbage_dirt_mound',  
        'toilet_paper_pile', 'bricks_grey', 'used_sofa', 'street_merchant_toy', 'used_cabinet', 'doorway_tyre',  
        'advertisement_flag', 'garbage_dixie_cup', 'plastic_bull_barrel_damage', 'warning_column_damage', 'tire_dirty',  
        'plastic_lane_divider_down', 'float_duckweed', 'used_bed', 'warning_column_down', 'loess_space', 'float_foam',  
        'road_waterlogging', 'hang_beddings', 'plastic_lane_divider', 'iron_horse_guardrail', 'garbage_luggage',  
        'advertisement_truss', 'concrete_mixer']  
# Image and annotation paths for the train and val splits
mydata_train_images='/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/train2017' # absolute paths are fine
mydata_train_labels='/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/annotations/instances_train2017.json' # absolute paths are fine
mydata_val_images='/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/val2017' # absolute paths are fine
mydata_val_labels='/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/annotations/instances_val2017.json' # absolute paths are fine
# Register the custom training dataset "mydata_train"
#register_coco_instances("mydata_train", {}, mydata_train_labels, mydata_train_images)
# Get the metadata of "mydata_train" and set its class info
MetadataCatalog.get("mydata_train").set(thing_classes=classnames, evaluator_type='coco',  # specify the evaluator type
                                                    json_file=mydata_train_labels,
                                                    image_root=mydata_train_images)
#print(MetadataCatalog.get("mydata_train").thing_classes)
# Register the custom validation dataset "mydata_val"
#register_coco_instances("mydata_val", {}, mydata_val_labels, mydata_val_images)
# Get the metadata of "mydata_val" and set its class info
MetadataCatalog.get("mydata_val").set(thing_classes=classnames, evaluator_type='coco',  # specify the evaluator type
                                                    json_file=mydata_val_labels,
                                                    image_root=mydata_val_images)
  
  
  
"""到这里,再后面"""  
def do_test_with_tta(cfg, model):  
    # may add normal test results for comparison  
    if "evaluator" in cfg.dataloader:  
        model = GeneralizedRCNNWithTTA(cfg, model, batch_size=1)  
        ret = inference_on_dataset(  
            model, instantiate(cfg.dataloader.test), instantiate(cfg.dataloader.evaluator)  
        )  
        print_csv_format(ret)  
        return ret  
  
  
def do_test(cfg, model, eval_only=False):
    # Runs evaluation according to the given config and model.
    logger = logging.getLogger("detectron2")  # get a logger for log messages

    if eval_only:  # eval-only mode
        logger.info("Run evaluation under eval-only mode")
        if cfg.train.model_ema.enabled and cfg.train.model_ema.use_ema_weights_for_eval_only:
            logger.info("Run evaluation with EMA.")
        else:
            logger.info("Run evaluation without EMA.")
        # Check whether the config defines an "evaluator" dataloader entry.
        if "evaluator" in cfg.dataloader:
            # inference_on_dataset runs inference and returns the evaluation
            # metrics (accuracy, precision, recall, etc.).
            ret = inference_on_dataset(
                model, instantiate(cfg.dataloader.test), instantiate(cfg.dataloader.evaluator)
            )
            # Print the evaluation results in CSV format.
            print_csv_format(ret)
        return ret
    # Not in eval-only mode; continue below.
    logger.info("Run evaluation without EMA.")
    if "evaluator" in cfg.dataloader:
        # Run inference and collect the evaluation results.
        ret = inference_on_dataset(
            model, instantiate(cfg.dataloader.test), instantiate(cfg.dataloader.evaluator)
        )
        # Print the evaluation results in CSV format.
        print_csv_format(ret)

        if cfg.train.model_ema.enabled:
            # Model exponential moving average (EMA) is enabled.
            logger.info("Run evaluation with EMA.")
            # Temporarily apply the EMA weights to the model for evaluation.
            with ema.apply_model_ema_and_restore(model):
                # Evaluate again, this time with the EMA weights applied.
                if "evaluator" in cfg.dataloader:
                    # Collect the EMA model's evaluation results.
                    ema_ret = inference_on_dataset(
                        model, instantiate(cfg.dataloader.test), instantiate(cfg.dataloader.evaluator)
                    )
                    # Print the EMA model's evaluation results.
                    print_csv_format(ema_ret)
                    # Merge the EMA results into the original results.
                    ret.update(ema_ret)
        # Return the merged results.
        return ret
  
  
def do_train(args, cfg):
    """
    Args:
        cfg: an object with the following attributes:
            model: instantiate to a module
            dataloader.{train,test}: instantiate to dataloaders
            dataloader.evaluator: instantiate to evaluator for test set
            optimizer: instantiate to an optimizer
            lr_multiplier: instantiate to a fvcore scheduler
            train: other misc config defined in `configs/common/train.py`, including:
                output_dir (str)
                init_checkpoint (str)
                amp.enabled (bool)
                max_iter (int)
                eval_period, log_period (int)
                device (str)
                checkpointer (dict)
                ddp (dict)
    """
    model = instantiate(cfg.model)  # build the detection model; cfg.model holds its construction config
    logger = logging.getLogger("detectron2")
    logger.info("Model:\n{}".format(model))
    model.to(cfg.train.device)  # move the model to the configured device (usually a GPU)
    cfg.optimizer.params.model = model  # attach the model to the optimizer params
    optim = instantiate(cfg.optimizer)  # build the optimizer from cfg.optimizer

    train_loader = instantiate(cfg.dataloader.train)  # build the training dataloader from cfg.dataloader.train
    print("train_loader:", train_loader)
    model = create_ddp_model(model, **cfg.train.ddp)  # wrap the model with DistributedDataParallel per cfg.train.ddp
    # build model ema
    ema.may_build_model_ema(cfg, model)
    # Create a trainer: AMPTrainer if mixed-precision training is enabled, otherwise SimpleTrainer.
    trainer = (AMPTrainer if cfg.train.amp.enabled else SimpleTrainer)(model, train_loader, optim)
    # Checkpointer for saving and loading model checkpoints.
    checkpointer = DetectionCheckpointer(
        model,
        cfg.train.output_dir,
        trainer=trainer,
        # save model ema
        **ema.may_get_ema_checkpointer(cfg, model)
    )
    # Register training hooks: iteration timer, EMA, LR scheduler, periodic checkpointing, evaluation, writers.
    trainer.register_hooks(
        [
            hooks.IterationTimer(),
            ema.EMAHook(cfg, model) if cfg.train.model_ema.enabled else None,
            hooks.LRScheduler(scheduler=instantiate(cfg.lr_multiplier)),
            hooks.PeriodicCheckpointer(checkpointer, **cfg.train.checkpointer)
            if comm.is_main_process()
            else None,
            hooks.EvalHook(cfg.train.eval_period, lambda: do_test(cfg, model)),
            hooks.PeriodicWriter(
                default_writers(cfg.train.output_dir, cfg.train.max_iter,
                                use_wandb=args.wandb),
                period=cfg.train.log_period,
            )
            if comm.is_main_process()
            else None,
        ]
    )
    # Resume or load from cfg.train.init_checkpoint depending on args.resume.
    checkpointer.resume_or_load(cfg.train.init_checkpoint, resume=args.resume)
    if args.resume and checkpointer.has_checkpoint():
        # The checkpoint stores the training iteration that just finished, thus we start
        # at the next iteration
        start_iter = trainer.iter + 1
    else:
        start_iter = 0
    # Train from start_iter up to cfg.train.max_iter.
    trainer.train(start_iter, cfg.train.max_iter)
  
  
def main(args):
    # Main entry point; args carries the config file and other training options.
    cfg = LazyConfig.load(args.config_file)  # load the training config from args.config_file
    # Apply command-line overrides to the config, e.g. to change the learning rate or batch size.
    cfg = LazyConfig.apply_overrides(cfg, args.opts)
    # ---- start of (commented-out) old-style overrides ----
    #cfg.DATASETS.TRAIN = instantiate(cfg.DATASETS.TRAIN)
    #print("cfg.DATASETS.TRAIN:", cfg.DATASETS.TRAIN)
    #cfg.DATASETS.TEST = instantiate(cfg.DATASETS.TEST)
    #print("cfg.DATASETS.TEST:", cfg.DATASETS.TEST)
    #assert cfg.DATASETS.TRAIN == "mydata_train"
    #assert cfg.DATASETS.TEST == "mydata_val"
    #cfg.DATASETS.TRAIN = ("mydata_train")
    #cfg.DATASETS.TEST = ("mydata_val",)  # optional if unused
    #cfg.DATALOADER.NUM_WORKERS = 2
    # Pretrained model file (can be downloaded in advance):
    #cfg.MODEL.WEIGHTS = r"/mnt/data1/download_new/EVA/models-EVA-02/model-select/eva02_L_coco_det_sys_o365.pth"
    # Or use your own pretrained model:
    # cfg.MODEL.WEIGHTS = "../tools/output/model_00999.pth"
    #cfg.SOLVER.IMS_PER_BATCH = 2
    # ---- end ----
    # default_setup(cfg, args): initialize detectron2 (GPU setup, distributed training, etc.)
    default_setup(cfg, args)

    if args.eval_only:
        # eval_only means evaluate without training.
        # Instantiate the model from the config.
        model = instantiate(cfg.model)
        # Move the model to the configured device (usually a GPU).
        model.to(cfg.train.device)
        # Wrap the model with DistributedDataParallel (DDP) for distributed evaluation.
        model = create_ddp_model(model)

        # using ema for evaluation
        # Build the exponential moving average (EMA) version of the model for evaluation.
        ema.may_build_model_ema(cfg, model)
        # Load the pretrained checkpoint via the checkpointer.
        DetectionCheckpointer(model, **ema.may_get_ema_checkpointer(cfg, model)).load(cfg.train.init_checkpoint)
        # Apply ema state for evaluation
        # If EMA weights are requested for eval-only, apply them to the model.
        if cfg.train.model_ema.enabled and cfg.train.model_ema.use_ema_weights_for_eval_only:
            ema.apply_model_ema(model)
        # Run evaluation and print the results.
        print(do_test(cfg, model, eval_only=True))
    else:
        # Run training (model training, optimization, etc.) via do_train.
        do_train(args, cfg)
  
  
if __name__ == "__main__":  
    #args = default_argument_parser().parse_args():创建了一个参数解析器,并解析命令行参数。default_argument_parser() 函数通常是用于解析与训练相关的参数,例如 GPU 数量、分布式训练设置等。  
    args = default_argument_parser().parse_args()  
    #launch(...):这是一种方法,通常用于启动多进程分布式训练。它接受以下参数:  
    '''  
                main:要运行的主函数,这里是 main 函数。  
                args.num_gpus:指定要使用的 GPU 数量。  
                num_machines:指定机器的数量,通常用于分布式训练。  
                machine_rank:指定当前机器的排名,也用于分布式训练。  
                dist_url:指定分布式训练的 URL,用于协调不同机器之间的通信。  
                args=(args,):传递 args 对象作为参数传递给 main 函数。'''  
    launch(  
        main,  
        args.num_gpus,  
        num_machines=args.num_machines,  
        machine_rank=args.machine_rank,  
        dist_url=args.dist_url,  
        args=(args,),  
    )
  • Register the custom training dataset "mydata_train" and the custom validation dataset "mydata_val"
#Uncomment these two lines in my_lazyconfig_train_net.py:
register_coco_instances("mydata_train", {}, mydata_train_labels, mydata_train_images) 
register_coco_instances("mydata_val", {}, mydata_val_labels, mydata_val_images)

EVA-master-project/EVA-02/det/detectron2/data/datasets/builtin_meta.py

#Add the new category list
MY_COCO_CATEGORIES =[  
    {"color": [255, 0, 0], "isthing": 1, "id": 0, "name": "Illegal_ad_poster"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 1, "name": "garbage_paper_type"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 2, "name": "garbage_plastic_bag"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 3, "name": "advertisement_hanging"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 4, "name": "Illegal_ad_paper"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 5, "name": "street_merchant_booth"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 6, "name": "foam_box"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 7, "name": "plastic_crate"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 8, "name": "paper_box"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 9, "name": "paper_box_pile"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 10, "name": "packed_goods"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 11, "name": "outdoor_umbrella_open"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 12, "name": "street_merchant_general_goods"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 13, "name": "hang_clothes"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 14, "name": "instant_canopy_open"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 15, "name": "street_merchant_clothes"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 16, "name": "street_merchant_fruit"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 17, "name": "advertisement_light_box"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 18, "name": "street_merchant_basket"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 19, "name": "stool"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 20, "name": "banner_horizontal"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 21, "name": "tarpaulin"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 22, "name": "wood_plank"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 23, "name": "garbage_pop_can"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 24, "name": "other_waste"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 25, "name": "float_garbage"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 26, "name": "bottle"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 27, "name": "street_merchant_meat_stall"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 28, "name": "foam_box_pile"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 29, "name": "chair"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 30, "name": "traffic_cone"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 31, "name": "onetime_dinnerware"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 32, "name": "isolation_guardrail"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 33, "name": "street_merchant_vegetable"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 34, "name": "advertisement_roll"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 35, "name": "float_bucket"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 36, "name": "street_merchant_auto"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 37, "name": "dumps"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 38, "name": "construction_enclosure"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 39, "name": "plastic_road_barrier_down"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 40, "name": "sandpile"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 41, "name": "pvc_stack"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 42, "name": "bucket"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 43, "name": "used_door_window"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 44, "name": "garbage_tree_leaves_pile"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 45, "name": "street_merchant_tricycle"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 46, "name": "commercial_trash_can"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 47, "name": "garbage_can_without_cover"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 48, "name": "warning_column"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 49, "name": "traffic_cone_down"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 50, "name": "table"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 51, "name": "garbage_can_close"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 52, "name": "plastic_crate_pile"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 53, "name": "outdoor_umbrella_close"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 54, "name": "commercial_trash_can_overflow"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 55, "name": "packed_refuse_stack"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 56, "name": "road_cracking"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 57, "name": "freezer"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 58, "name": "packed_refuse"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 59, "name": "tie_paper_box"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 60, "name": "wood_working_ladder"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 61, "name": "instant_canopy_close"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 62, "name": "material_bag"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 63, "name": "doorway_tyre_pile"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 64, "name": "float_watermilfoil"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 65, "name": "wood_crate_pile"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 66, "name": "steel_tube"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 67, "name": "scaffold"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 68, "name": "garbage_construction_waste"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 69, "name": "cigarette_end"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 70, "name": "garbage_can_open"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 71, "name": "plastic_road_barrier"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 72, "name": "garbage_can_overflow"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 73, "name": "pothole"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 74, "name": "sewage"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 75, "name": "plastic_bull_barrel"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 76, "name": "construction_sign"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 77, "name": "garbage_book"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 78, "name": "Illegal_ad_painting"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 79, "name": "bricks_red"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 80, "name": "advertisement_column"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 81, "name": "wood_crate"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 82, "name": "garbage_dirt_mound"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 83, "name": "toilet_paper_pile"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 84, "name": "bricks_grey"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 85, "name": "used_sofa"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 86, "name": "street_merchant_toy"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 87, "name": "used_cabinet"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 88, "name": "doorway_tyre"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 89, "name": "advertisement_flag"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 90, "name": "garbage_dixie_cup"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 91, "name": "plastic_bull_barrel_damage"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 92, "name": "warning_column_damage"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 93, "name": "tire_dirty"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 94, "name": "plastic_lane_divider_down"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 95, "name": "float_duckweed"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 96, "name": "used_bed"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 97, "name": "warning_column_down"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 98, "name": "loess_space"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 99, "name": "float_foam"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 100, "name": "road_waterlogging"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 101, "name": "hang_beddings"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 102, "name": "plastic_lane_divider"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 103, "name": "iron_horse_guardrail"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 104, "name": "garbage_luggage"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 105, "name": "advertisement_truss"},  
     {"color": [255, 0, 0], "isthing": 1, "id": 106, "name": "concrete_mixer"},  
]
#Modify the function _get_coco_instances_meta()
def _get_coco_instances_meta():
    #thing_ids = [k["id"] for k in COCO_CATEGORIES if k["isthing"] == 1]
    #thing_colors = [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 1]
    #assert len(thing_ids) == 80, len(thing_ids)
    thing_ids = [k["id"] for k in MY_COCO_CATEGORIES if k["isthing"] == 1]
    thing_colors = [k["color"] for k in MY_COCO_CATEGORIES if k["isthing"] == 1]
    assert len(thing_ids) == 107, len(thing_ids)  # number of classes
    # Mapping from the (possibly non-contiguous) category ids to contiguous ids in [0, 106]
    thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)}  
    #thing_classes = [k["name"] for k in COCO_CATEGORIES if k["isthing"] == 1]  
    thing_classes = [k["name"] for k in MY_COCO_CATEGORIES if k["isthing"] == 1]  
    ret = {  
        "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id,  
        "thing_classes": thing_classes,  
        "thing_colors": thing_colors,  
    }  
    return ret

  • Modify the function _get_builtin_metadata(dataset_name)
  • Before:
if dataset_name == "coco":
return _get_coco_instances_meta()
  • After:
if dataset_name == "MY_COCO":
    return _get_coco_instances_meta()
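A safer variant (my suggestion, not the change made above) keeps the stock "coco" branch intact and dispatches the 107-class metadata under the new name; _get_my_coco_instances_meta is a hypothetical copy of the modified function shown earlier:

def _get_builtin_metadata(dataset_name):
    if dataset_name == "coco":
        return _get_coco_instances_meta()      # stock 80-class COCO metadata
    if dataset_name == "MY_COCO":
        return _get_my_coco_instances_meta()   # hypothetical 107-class variant
    ...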

EVA-master-project/EVA-02/det/detectron2/data/datasets/builtin.py

#Add the new dataset splits
_PREDEFINED_SPLITS_MY_COCO = {}  
_PREDEFINED_SPLITS_MY_COCO["MY_COCO"] = {  
    "mydata_train": ("/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/train2017", "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/annotations/instances_train2017.json"),  
    "mydata_val": ("/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/val2017", "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/coco/annotations/instances_val2017.json"),  
    # "my_train" : 数据集名称  
    # 'coco/train2017/images' : 图片存放路径  
    # 'coco/annotations/instances_train2017.json' : 标注信息json路径  
  
}
#Modify the function register_all_coco(root)
def register_all_coco(root):
    # Registers the different splits (train, val, etc.) of the COCO-style datasets.
    # It reads the _PREDEFINED_SPLITS_COCO and _PREDEFINED_SPLITS_COCO_PANOPTIC dicts,
    # which hold the details of each dataset, such as image and annotation paths.
    # Iterate over each dataset and its splits:
    #for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_COCO.items():
    # Change _PREDEFINED_SPLITS_COCO.items() to _PREDEFINED_SPLITS_MY_COCO.items()
    for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_MY_COCO.items():

        print("Dataset Name:", dataset_name)
        for key, (image_root, json_file) in splits_per_dataset.items():
            print("Split Key:", key)
            print("Image Root:", image_root)
            print("JSON File:", json_file)
            # Assume pre-defined datasets live in `./datasets`.
            # Register each split as a COCO-instances dataset
            register_coco_instances(
                key,
                _get_builtin_metadata(dataset_name),  # dataset metadata
                os.path.join(root, json_file) if "://" not in json_file else json_file,  # path to the COCO annotation file
                os.path.join(root, image_root),  # folder containing the images
            )
    # Handle the two versions of the COCO Panoptic dataset ("separated" and "standard");
    # commented out here since they are not needed for the custom dataset:
    '''for (
        prefix,
        (panoptic_root, panoptic_json, semantic_root),
    ) in _PREDEFINED_SPLITS_COCO_PANOPTIC.items():
        prefix_instances = prefix[: -len("_panoptic")]
        instances_meta = MetadataCatalog.get(prefix_instances)
        image_root, instances_json = instances_meta.image_root, instances_meta.json_file
        # The "separated" version of COCO panoptic segmentation dataset,
        # e.g. used by Panoptic FPN
        register_coco_panoptic_separated(
            prefix,
            _get_builtin_metadata("coco_panoptic_separated"),  # metadata
            image_root,
            os.path.join(root, panoptic_root),   # panoptic data root
            os.path.join(root, panoptic_json),   # panoptic annotation json
            os.path.join(root, semantic_root),   # semantic segmentation annotations
            instances_json,                      # instance segmentation annotations
        )
        # The "standard" version of COCO panoptic segmentation dataset,
        # e.g. used by Panoptic-DeepLab
        register_coco_panoptic(
            prefix,
            _get_builtin_metadata("coco_panoptic_standard"),  # metadata
            image_root,
            os.path.join(root, panoptic_root),   # panoptic data root
            os.path.join(root, panoptic_json),   # panoptic annotation json
            instances_json,                      # instance segmentation annotations
        )'''

if __name__.endswith(".builtin"):#数据集注册  
    # Assume pre-defined datasets live in `./datasets`.  
    #datasetroot ='/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/data/dataset-city-2023.9.14-puti-coco-json-2023.10.30/'    #os.getenv("DETECTRON2_DATASETS", datasetroot) 尝试从环境变量中获取名为 DETECTRON2_DATASETS 的变量的值  
    #os.path.expanduser() 用于扩展 ~ 符号,将用户的主目录路径添加到给定的路径中,以确保路径是完整的。  
    #_root = os.path.expanduser(os.getenv("DETECTRON2_DATASETS", datasetroot))  
    _root = os.path.expanduser(os.getenv("DETECTRON2_DATASETS", "datasets"))  
    datasets_path = os.getenv("DETECTRON2_DATASETS")  
    #print(f"DETECTRON2_DATASETS: {datasets_path}")  
    register_all_coco(_root)  
    register_all_lvis(_root)  
    register_all_cityscapes(_root)  
    register_all_cityscapes_panoptic(_root)  
    register_all_pascal_voc(_root)  
    register_all_ade20k(_root)
'''Alternatively, you may comment out register_all_coco(_root) in the function above and instead register the custom datasets mydata_train and mydata_val in my_lazyconfig_train_net.py via register_coco_instances("mydata_train", {}, mydata_train_labels, mydata_train_images) and register_coco_instances("mydata_val", {}, mydata_val_labels, mydata_val_images)'''

Training errors

CrossEntropyLoss "device-side assert triggered" error


[11/08 07:46:59 d2.engine.hooks]: Overall training speed: 2 iterations in 0:00:03 (1.7485 s / it)
[11/08 07:46:59 d2.engine.hooks]: Total training time: 0:00:03 (0:00:00 on hooks)
[11/08 07:46:59 d2.utils.events]:  eta: 0:02:25  iter: 4  total_loss: 5.22  loss_cls_stage0: 0.9515  loss_box_reg_stage0: 0.08669  loss_cls_stage1: 1.018  loss_box_reg_stage1: 0.1473  loss_cls_stage2: 0.9457  loss_box_reg_stage2: 0.2063  loss_mask: 0.9384  loss_rpn_cls: 0.1416  loss_rpn_loc: 0.05984  time: 1.5362  data_time: 0.1103  lr: 4e-05  max_mem: 13301M
Traceback (most recent call last):
  File "tools/my_lazyconfig_train_net.py", line 278, in <module>
    launch(
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/engine/launch.py", line 82, in launch
    main_func(*args)
  File "tools/my_lazyconfig_train_net.py", line 264, in main
    do_train(args, cfg)
  File "tools/my_lazyconfig_train_net.py", line 214, in do_train
    trainer.train(start_iter, cfg.train.max_iter)
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/engine/train_loop.py", line 149, in train
    self.run_step()
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/engine/train_loop.py", line 419, in run_step
    loss_dict = self.model(data)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/modeling/meta_arch/rcnn.py", line 187, in forward
    _, detector_losses = self.roi_heads(images, features, proposals, gt_instances)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/modeling/roi_heads/cascade_rcnn.py", line 158, in forward
    losses = self._forward_box(features, proposals, targets)
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/modeling/roi_heads/cascade_rcnn.py", line 222, in _forward_box
    stage_losses = predictor.losses(predictions, proposals)
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/modeling/roi_heads/fast_rcnn.py", line 412, in losses
    loss_cls = cross_entropy(scores, gt_classes, reduction="mean")
  File "/mnt/data1/download_new/EVA/EVA-master-project/EVA-02/det/detectron2/layers/wrappers.py", line 56, in wrapped_loss_func
    return loss_func(input, target, reduction=reduction, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 3029, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Solution

1. Check the num_classes value in config.yaml.
2. Set num_classes in eva2_o365_to_coco_cascade_mask_rcnn_vitdet_l_8attn_1536_lrd0p8.py:
model.roi_heads.num_classes=107
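A quick diagnostic (a sketch; the path is hypothetical): this device-side assert in cross_entropy usually means some ground-truth class index is >= num_classes, so check the category ids in your annotation json:

import json

with open("/path/to/annotations/instances_train2017.json") as f:  # hypothetical path
    data = json.load(f)
cat_ids = {a["category_id"] for a in data["annotations"]}
print("min:", min(cat_ids), "max:", max(cat_ids))  # max must be < num_classes (107)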

Some training parameters

    cfg.INPUT.CROP.ENABLED = True
    cfg.INPUT.MAX_SIZE_TRAIN = 640 # maximum input size of training images
    cfg.INPUT.MAX_SIZE_TEST = 640 # maximum input size of test images
    cfg.INPUT.MIN_SIZE_TRAIN = (512, 768) # minimum input size for training; can be set for multi-scale training
    cfg.INPUT.MIN_SIZE_TEST = 640
    #cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING has two modes, 'choice' and 'range':
    # range: the short side of the image is sampled randomly from 512-768
    #choice: the input image is resized to one of a fixed, finite set of sizes, i.e. the short side can only be 512 or 768
    cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING = 'range'
#  Be sure to read the comment on the next line!
    cfg.MODEL.RETINANET.NUM_CLASSES = 81  # number of classes + 1 (background), i.e. when your category ids start at 1; if your dataset json is 0-indexed, set this to your actual class count, without adding a background class
    #cfg.MODEL.WEIGHTS="/home/yourstorePath/.pth"
    cfg.MODEL.WEIGHTS = "/home/yourstorePath/model_final_5bd44e.pkl"    # pretrained model weights
    cfg.SOLVER.IMS_PER_BATCH = 4  # batch size; iters_in_one_epoch = dataset_imgs/batch_size

    # Compute the iterations per epoch from the total number of training images and the batch size
    # 9000 is the total number of training images; adjust to your dataset
    ITERS_IN_ONE_EPOCH = int(9000 / cfg.SOLVER.IMS_PER_BATCH)

    # Maximum number of iterations
    cfg.SOLVER.MAX_ITER = (ITERS_IN_ONE_EPOCH * 12) - 1 # 12 epochs
    # Initial learning rate
    cfg.SOLVER.BASE_LR = 0.002
    # Optimizer momentum
    cfg.SOLVER.MOMENTUM = 0.9
    # Weight decay
    cfg.SOLVER.WEIGHT_DECAY = 0.0001
    cfg.SOLVER.WEIGHT_DECAY_NORM = 0.0
    # Learning-rate decay factor
    cfg.SOLVER.GAMMA = 0.1
    # Iterations at which the learning rate is decayed
    cfg.SOLVER.STEPS = (7000,)
    # Warmup before training: the learning rate gradually ramps up to the initial learning rate
    cfg.SOLVER.WARMUP_FACTOR = 1.0 / 1000
    # Number of warmup iterations
    cfg.SOLVER.WARMUP_ITERS = 1000

    cfg.SOLVER.WARMUP_METHOD = "linear"
    # Checkpoint saving period (minus 1 so the saved file names line up with epoch boundaries)
    cfg.SOLVER.CHECKPOINT_PERIOD = ITERS_IN_ONE_EPOCH - 1

    # Run an evaluation every this many iterations
    cfg.TEST.EVAL_PERIOD = ITERS_IN_ONE_EPOCH

Notes

mmcv-full installation

pip-installing xformers requires the latest PyTorch, but the default wheel does not come with the matching CUDA build

#First install a pinned PyTorch build (see Previous PyTorch Versions | PyTorch: https://pytorch.org/get-started/previous-versions/)
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118 -i http://pypi.tuna.tsinghua.edu.cn/simple/ --trusted-host pypi.tuna.tsinghua.edu.cn

#Then install xformers
pip install xformers -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com

Some other environment-setup errors were not recorded; search the web if you run into them.

  • Life is short, I use Docker.