Debug result = unpickler.load() ModuleNotFoundError: No module named 'models'

This post describes a problem encountered when converting a YOLOv5 model trained with PyTorch to TensorRT. The cause is that torch.save stored the full model object together with extra training-time information, so torch.load fails with ModuleNotFoundError when the checkpoint is moved to another machine. The fix is to first convert the weights to ONNX with YOLOv5's own export.py and then convert the ONNX model to TensorRT.

1. Converting a PyTorch-trained YOLOv5 model to TensorRT fails as follows:

Using CUDA device0 _CudaDeviceProperties(name='NVIDIA GeForce RTX 3080', total_memory=10017MB)

Find Pytorch weight
Traceback (most recent call last):
  File "export.py", line 243, in <module>
    ckpt = torch.load(opt.weight, map_location=device)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 592, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 851, in _load
    result = unpickler.load()
ModuleNotFoundError: No module named 'models'

2. Solution:

First convert the .pt weights to an .onnx model with YOLOv5's own export.py, then convert the ONNX model to TensorRT; this resolves the problem. The successful export log is shown below, followed by a minimal sketch of the ONNX-to-TensorRT step.

Find ONNX weight

TensorRT: starting export with TensorRT 8.4.0.6...
[08/24/2023-18:57:25] [TRT] [I] [MemUsageChange] Init CUDA: CPU +359, GPU +0, now: CPU 426, GPU 401 (MiB)
[08/24/2023-18:57:26] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 444 MiB, GPU 401 MiB
[08/24/2023-18:57:27] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 819 MiB, GPU 523 MiB
[08/24/2023-18:57:27] [TRT] [I] ----------------------------------------------------------------
[08/24/2023-18:57:27] [TRT] [I] Input filename:   ../best.onnx
[08/24/2023-18:57:27] [TRT] [I] ONNX IR version:  0.0.6
[08/24/2023-18:57:27] [TRT] [I] Opset version:    11
[08/24/2023-18:57:27] [TRT] [I] Producer name:    pytorch
[08/24/2023-18:57:27] [TRT] [I] Producer version: 1.9
[08/24/2023-18:57:27] [TRT] [I] Domain:           
[08/24/2023-18:57:27] [TRT] [I] Model version:    0
[08/24/2023-18:57:27] [TRT] [I] Doc string:       
[08/24/2023-18:57:27] [TRT] [I] ----------------------------------------------------------------
[08/24/2023-18:57:27] [TRT] [W] onnx2trt_utils.cpp:365: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
TensorRT: Network Description:
TensorRT:       input "images" with shape (1, 3, 640, 640) and dtype DataType.FLOAT
TensorRT:       output "output" with shape (1, 25200, 20) and dtype DataType.FLOAT
TensorRT: building FP16 engine in ../best.engine
[08/24/2023-18:57:29] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.3.0
[08/24/2023-18:57:29] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +637, GPU +268, now: CPU 1545, GPU 791 (MiB)
[08/24/2023-18:57:29] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +356, GPU +258, now: CPU 1901, GPU 1049 (MiB)
[08/24/2023-18:57:29] [TRT] [W] TensorRT was linked against cuDNN 8.3.2 but loaded cuDNN 8.0.5
[08/24/2023-18:57:29] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[08/24/2023-18:58:37] [TRT] [I] Some tactics do not have sufficient workspace memory to run. Increasing workspace size will enable more tactics, please check verbose output for requested sizes.
[08/24/2023-19:06:05] [TRT] [I] Detected 1 inputs and 4 output network tensors.
[08/24/2023-19:06:08] [TRT] [I] Total Host Persistent Memory: 218880
[08/24/2023-19:06:08] [TRT] [I] Total Device Persistent Memory: 1197056
[08/24/2023-19:06:08] [TRT] [I] Total Scratch Memory: 0
[08/24/2023-19:06:08] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 48 MiB, GPU 2470 MiB
[08/24/2023-19:06:08] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 29.1457ms to assign 9 blocks to 142 nodes requiring 25804804 bytes.
[08/24/2023-19:06:08] [TRT] [I] Total Activation Memory: 25804804
[08/24/2023-19:06:08] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +40, GPU +42, now: CPU 40, GPU 42 (MiB)
export.py:172: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
  from cryptography.fernet import Fernet
TensorRT: export success, saved as ../best.engine
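
For reference, here is a minimal sketch of the second step (building the engine from the exported ONNX file) using the TensorRT 8.x Python API, roughly matching the FP16 build in the log above. The file names and the 4 GiB workspace are placeholders; the first step would be the stock YOLOv5 export command (something like python export.py --weights best.pt --include onnx, check the flags of the yolov5 version you use).

import tensorrt as trt

def build_engine(onnx_path="best.onnx", engine_path="best.engine", fp16=True):
    """Parse an ONNX file and serialize a TensorRT engine (TensorRT 8.x API)."""
    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    # Parse the ONNX graph into the TensorRT network definition
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError(f"failed to parse {onnx_path}")

    config = builder.create_builder_config()
    config.max_workspace_size = 4 << 30          # 4 GiB builder workspace
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)    # build an FP16 engine, as in the log

    # Build and serialize the engine, then write it to disk
    serialized_engine = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized_engine)

if __name__ == "__main__":
    build_engine()

The same conversion can also be done from the command line with trtexec, e.g. trtexec --onnx=best.onnx --saveEngine=best.engine --fp16.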

3. Cause and other remedies

From searching online, the main cause is that the trained model was saved with torch.save(model, path) and loaded with model = torch.load(path), i.e. the whole model object was pickled. The .pt loading code in export.py is as follows:

    if pt:
        logger.info("Find Pytorch weight")
        ckpt = torch.load(opt.weight, map_location=device)
        if opt.noema:
            model = ckpt['model']
        else:
            model = ckpt['ema'] if ckpt.get('ema') else ckpt['model']
            
        meta = get_meta_data(ckpt, model, meta)

        if opt.int8:
            zero_scale_fix(model, device)
            if model.__name__ != "EfficentYolo":
                for sub_fusion_list in op_concat_fusion_list[model.__name__]:
                    ops = [get_module(model, op_name) for op_name in sub_fusion_list]
                    concat_quant_amax_fuse(ops)
                for sub_fusion_list in op_concat_fusion_list[model.type]:
                    ops = [get_module(model, op_name) for op_name in sub_fusion_list]
                    concat_quant_amax_fuse(ops)
    
        model.float()
        if not opt.int8:
            model.fuse()
        model.to(device)
        model.eval()
        if opt.int8:
            quant_nn.TensorQuantizer.use_fb_fake_quant = True
        im = torch.zeros(1, 3, *imgsz).to(device)

        # changes to the detect layer that are required for ONNX export
        # model.detect.inplace = False
        if not(hasattr(model, 'type') and model.type in ['anchorfree', 'anchorbase']):
            model.type = 'anchorbase'
        model.detect.dynamic = dynamic
        model.detect.export = True  # reduce the number of outputs
        # sanity-check that the torch model runs normally
        for _ in range(2):
            y = model(im)  # dry runs
            
        # read the model's labels and save them to labels.txt
        labels = str({i:l for i,l in enumerate(model.labels)})
        
        with open(file.parents[0]/'labels.txt','w') as f:
            f.write(labels)
        logger.info("the torch model is very successful, it's no possible!")
        
        if 'onnx' in opt.include or 'trt' in opt.include:
            try:
                import tensorrt as trt
                if model.type == 'anchorfree':
                    export_onnx(model, im, file, opt.opset, train=False, dynamic=False, simple=opt.simple)
                elif model.type == 'anchorbase':
                    if int(trt.__version__[0]) == 7:  # TensorRT 7 handling https://github.com/ultralytics/yolov5/issues/6012
                        model.detect.inplace = False
                        grid = model.detect.anchor_grid
                        model.detect.anchor_grid = [a[..., :1, :1, :] for a in grid]
                        export_onnx(model, im, file, opt.opset, train=False, dynamic=False, simple=opt.simple)  # opset 12
                        model.detect.anchor_grid = grid
                    else:  # TensorRT >= 8
                        export_onnx(model, im, file, opt.opset, train=False, dynamic=False, simple=opt.simple)  # opset 13
            except:
                logger.info("TRT ERROR, will custom onnx!")
                export_onnx(model, im, file, opt.opset, train=False, dynamic=False, simple=opt.simple)
                
            onnx_file = file.with_suffix('.onnx')
            add_meta_to_model(onnx_file, meta)
            if opt.int8:
                get_remove_qdq_onnx_and_cache(file.with_suffix('.onnx'))
                add_meta_to_model(str(onnx_file).replace('.onnx', '_wo_qdq.onnx'), meta)
                
        if 'trt' in opt.include:
            if opt.old:
                meta = False
            export_engine(onnx_file, None, meta=meta, half=opt.half, int8=opt.int8, workspace=opt.worker, encode=opt.encode, verbose=opt.verbose)
    else:    
        logger.info("Find ONNX weight")
        if not opt.old:
            meta = get_meta_data(file, None, meta)
            meta['half'] = opt.half
            meta['int8'] = opt.int8
            meta['encode'] = opt.encode
        if opt.old:
            meta = False

Possible explanations:
(1) During training, the checkpoint saves extra information beyond the weights, and some of it references things tied to the training environment (for example, the location of the training code and its modules). When the model is moved to another machine, such as the machine where the TensorRT conversion runs, those references can no longer be resolved. Converting to a portable ONNX model first, and then to TensorRT, avoids this.

(2) When this problem appears, the checkpoint is usually not the final, stripped model. For example, a finished yolov5m model is around 40 MB, while the best.pt and last.pt saved during training are around 160 MB; the extra ~120 MB is training state tied to the current machine's setup, so moving such a checkpoint to another machine for use or conversion can cause problems.
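
Two other remedies are worth noting; the sketch below is only illustrative, with a toy TinyNet module standing in for the real YOLOv5 Model class and placeholder paths and file names. If the original .pt must be loaded on another machine, making the yolov5 repo importable first (so the pickled reference to the 'models' package resolves) is usually enough; going forward, saving only the state_dict produces checkpoints that do not depend on the training repo's module layout.

import sys
import torch
import torch.nn as nn

# Remedy A: make the training repo importable before torch.load, so that the
# pickled reference to the "models" package can be resolved (path is a placeholder):
# sys.path.insert(0, "/path/to/yolov5")
# ckpt = torch.load("best.pt", map_location="cpu")

# Remedy B: save only the weights and rebuild the architecture in code.
class TinyNet(nn.Module):  # toy stand-in for the real Model class
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)

    def forward(self, x):
        return self.conv(x)

model = TinyNet()
torch.save(model.state_dict(), "state_only.pt")  # no class object is pickled

rebuilt = TinyNet()  # rebuild the architecture first, then load the weights
rebuilt.load_state_dict(torch.load("state_only.pt", map_location="cpu"))
rebuilt.eval()

On the checkpoint-size point in (2): the extra ~120 MB in best.pt/last.pt is mostly optimizer/EMA state. The ultralytics/yolov5 repo ships a strip_optimizer utility that removes it and brings the file back to roughly the final model size; its exact location varies between versions, so check the repo you trained with.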

