error C4772: #import referenced a type from a missing type library; '__missing_type__' used as a placeholder

This post documents working through the compile errors that hit a WPS API project under VS2008 after a WPS/Office version upgrade: the __missing_type__ placeholder, illegal-base-class and syntax errors, resolved in the end by adjusting the #import directives and using auto_search.

Environment: 64-bit CPU, Windows 7 64-bit, VS2008, Office 2013, WPS 2013 Personal Edition

1. After switching laptops (XP -> Win7, Office 2010 -> 2013, WPS 2012 -> 2013), the original code no longer compiled.

Original code:

#import "lib/wps/2012.8.1.0.2998/kso10.dll"  rename("RGB", "KsoRGB") rename("DocumentProperties","KsoDocumentProperties")
using namespace KSO;
#import "lib/wps/2012.8.1.0.2998/wpscore.dll" auto_search auto_rename no_auto_exclude
Build output:

1>e:\kqprint_new\kqprint_new\kqprint\wpsoper.h(7) : error C4772: #import referenced a type from a missing type library; '__missing_type__' used as a placeholder
1>e:\kqprint_new\kqprint_new\kqprint\debug\wpscore.tlh(1931) : error C2516: '__missing_type__' : is not a legal base class
1>        e:\program files (x86)\microsoft visual studio 9.0\vc\include\comdef.h(283) : see declaration of '__missing_type__'
1>e:\kqprint_new\kqprint_new\kqprint\debug\wpscore.tlh(1931) : error C2146: syntax error : missing ',' before identifier 'ColorFormat'
1>e:\kqprint_new\kqprint_new\kqprint\debug\wpscore.tlh(1932) : error C2499: 'WPS::ColorFormat' : a class cannot be its own base class
1>e:\kqprint_new\kqprint_new\kqprint\debug\wpscore.tlh(1935) : error C2516: '__missing_type__' : is not a legal base class
1>        e:\program files (x86)\microsoft visual studio 9.0\vc\include\comdef.h(283) : see declaration of '__missing_type__'
1>e:\kqprint_new\kqprint_new\kqprint\debug\wpscore.tlh(1935) : error C2146: syntax error : missing ',' before identifier 'FillFormat'
1>e:\kqprint_new\kqprint_new\kqprint\debug\wpscore.tlh(1936) : error C2499: 'WPS::FillFormat' : a class cannot be its own base class

........

2. Searched Baidu for: error C2516: '__missing_type__' : is not a legal base class

Following http://bbs.wps.cn/forum.php?mod=viewthread&tid=22331586, I changed the imports to:

#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\ksoapi.dll"  
#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\wpsapi.dll" rename_namespace("WPS")
Build output:

1>e:\hbj\test3\test0722\test0722\debug\wpsapi.tlh(22067) : warning C4003: not enough actual parameters for macro 'ExitWindows'
1>e:\hbj\test3\test0722\test0722\debug\wpsapi.tlh(22067) : error C2059: syntax error : 'constant'
1>e:\hbj\test3\test0722\test0722\debug\wpsapi.tlh(29762) : error C2146: syntax error : missing ';' before identifier 'VBE'
1>e:\hbj\test3\test0722\test0722\debug\wpsapi.tlh(29762) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>e:\hbj\test3\test0722\test0722\debug\wpsapi.tlh(29762) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>e:\hbj\test3\test0722\test0722\debug\wpsapi.tlh(29979) : error C2146: syntax error : missing ';' before identifier 'GetVBE'
1>e:\hbj\test3\test0722\test0722\debug\wpsapi.tlh(29979) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>e:\hbj\test3\test0722\test0722\debug\wpsapi.tlh(29979) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>e:\hbj\test3\test0722\test0722\debug\wpsapi.tlh(29979) : warning C4183: 'GetVBE': missing return type; assumed to be a member function returning 'int'
........

3. Searched Baidu and Google for: warning C4003: not enough actual parameters for macro 'ExitWindows'

Reference: http://bbs.youkuaiyun.com/topics/300159601

WPS 2013 does not ship VBE6EXT.OLB, however, so vbeapi.dll was used in its place.

Changed to:

#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\vbeapi.dll"
#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\ksoapi.dll"  
#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\wpsapi.dll" rename_namespace("WPS")
Build output:

1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tlh(632) : error C2146: syntax error : missing ';' before identifier 'CommandBars'
1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tlh(632) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tlh(632) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tlh(653) : error C2146: syntax error : missing ';' before identifier 'GetCommandBars'
1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tlh(653) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tlh(653) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tlh(653) : warning C4183: 'GetCommandBars': missing return type; assumed to be a member function returning 'int'
1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tli(244) : error C2143: syntax error : missing ';' before 'VBE::VBE::GetCommandBars'
1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tli(244) : error C2433: '_CommandBarsPtr' : 'inline' not permitted on data declarations
1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tli(244) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>e:\hbj\test3\test0722\test0722\debug\vbeapi.tli(244) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int

.....
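In hindsight, a likely explanation (not verified at the time) is that vbeapi.dll itself cross-references the kso type library for CommandBars, and since that library had not been imported before it, _CommandBarsPtr was never defined. Reordering the imports so the referenced library comes first might therefore have helped; a sketch, assuming CommandBars really does live in ksoapi.dll:

#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\ksoapi.dll"      // bring in the kso types first
#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\vbeapi.dll"      // its cross-references can now resolve
#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\wpsapi.dll" rename_namespace("WPS")

The auto_search route below sidesteps the whole question of which library defines what.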

4. Kept googling: missing ';' before identifier 'CommandBars'

But nothing useful came up, so, rather dispirited, I simply tried

#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\wpsapi.dll" auto_search auto_rename no_auto_exclude
and it compiled.

Looked up how auto_search works:

http://msdn.microsoft.com/zh-cn/subscriptions/downloads/0346ezww%28v=vs.90%29.aspx

http://technet.microsoft.com/zh-cn/library/298h7faa

auto_search specifies that when a type library referenced with #import itself references another type library, the compiler may perform an implicit #import for that other type library.

So the cause of the errors above is that wpsapi.dll references other type libraries:

// Cross-referenced type libraries:
//
//  #import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\ksoapi.dll"
//  #import "C:\Program Files (x86)\Common Files\Microsoft Shared\VBA\VBA6\VBE6EXT.OLB"

Which means the following variant also compiles:

#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\ksoapi.dll"
#import "C:\Program Files (x86)\Common Files\Microsoft Shared\VBA\VBA6\VBE6EXT.OLB"
#import "D:\Program Files (x86)\Kingsoft\WPS Office\9.1.0.4715\office6\wpsapi.dll"/* auto_search,*/rename("ExitWindows","_ExitWindows")

Summary: the type libraries a DLL cross-references are listed in the generated wpsapi.tlh file, and auto_search can pull them in automatically; otherwise you have to #import each of them yourself.
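For completeness, a minimal smoke test of the imported API once one of the working #import combinations above is in place. This is only a sketch: the namespace, interface and ProgID names below (wps, _ApplicationPtr, "KWPS.Application") are assumptions based on WPS mirroring the Word object model; check the top of the generated wpsapi.tlh for the actual names.

#include <windows.h>
#include <comdef.h>
// ... one of the working #import combinations from above goes here ...

int main()
{
    CoInitialize(NULL);
    {
        wps::_ApplicationPtr app;                                  // smart pointer generated into wpsapi.tlh
        if (SUCCEEDED(app.CreateInstance(L"KWPS.Application")))    // ProgID may also be "wps.Application"
        {
            app->Visible = VARIANT_TRUE;                           // show the WPS Writer window
            app->Quit();                                           // optional parameters are defaulted by the #import wrappers
        }
    }
    CoUninitialize();
    return 0;
}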

