UnknownError (see above for traceback): exceptions.AttributeError: 'module' object has no attribute 'cv'

This post documents running text detection on the ICDAR2015 dataset with the SegLink model. A pretrained checkpoint is loaded and the usual parameters are set (GPU memory fraction, segment and link confidence thresholds, etc.), but the test script aborts with an UnknownError wrapping an AttributeError: 'module' object has no attribute 'cv'.


./scripts/test.sh 0 ./models/seglink-512/model.ckpt-217867 ./datasets/ICDAR2015/Challenge4/ch4_test_images
+ set -e
+ export CUDA_VISIBLE_DEVICES=0
+ CUDA_VISIBLE_DEVICES=0
+ CHECKPOINT_PATH=./models/seglink-512/model.ckpt-217867
+ DATASET_DIR=./datasets/ICDAR2015/Challenge4/ch4_test_images
+ python test_seglink.py --checkpoint_path=./models/seglink-512/model.ckpt-217867 --gpu_memory_fraction=-1 --seg_conf_threshold=0.8 --link_conf_threshold=0.5 --dataset_dir=./datasets/ICDAR2015/Challenge4/ch4_test_images
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/__init__.py", line 861, in emit
    msg = self.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 734, in format
    return fmt.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 465, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 329, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file tf_logging.py, line 116
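
This first traceback is a separate, non-fatal issue: while formatting a log record emitted from tf_logging.py, the standard logging module hit a message whose %-placeholders do not match the number of arguments, printed the TypeError, and carried on. It is unrelated to the failure further below. A minimal reproduction of the same pattern (plain Python, no TensorFlow, names chosen for illustration):

import logging

logging.basicConfig(level=logging.INFO)

# One %s placeholder but two arguments: logging raises the TypeError inside
# emit(), prints "not all arguments converted during string formatting" to
# stderr, and the program keeps running -- the same pattern as the traceback above.
logging.info("Restoring parameters from %s", "model.ckpt-217867", "extra-arg")
print("execution continues after the logging error")
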
INFO:tensorflow:Restoring parameters from ./models/seglink-512/model.ckpt-217867
2018-07-02 11:45:17.332376: W tensorflow/core/framework/op_kernel.cc:1190] Unknown: exceptions.AttributeError: 'module' object has no attribute 'cv'
Traceback (most recent call last):
  File "test_seglink.py", line 149, in <module>
    tf.app.run()
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "test_seglink.py", line 145, in main
    eval()
  File "test_seglink.py", line 132, in eval
    image_bboxes = sess.run([bboxes_pred], feed_dict = {image:image_data, image_shape:image_data.shape})
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 905, in run
    run_metadata_ptr)
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1137, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1355, in _do_run
    options, run_metadata)
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1374, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: exceptions.AttributeError: 'module' object has no attribute 'cv'
     [[Node: test/PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32, DT_FLOAT, DT_FLOAT], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"](test/strided_slice_4, test/strided_slice_5, test/strided_slice_2, test/strided_slice_3, test/PyFunc/input_4, test/PyFunc/input_5)]]

Caused by op u'test/PyFunc', defined at:
  File "test_seglink.py", line 149, in <module>
    tf.app.run()
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "test_seglink.py", line 145, in main
    eval()
  File "test_seglink.py", line 88, in eval
    link_conf_threshold = config.link_conf_threshold)
  File "/home/hy/2018-ocr/seglink/tf_extended/seglink.py", line 693, in tf_seglink_to_bbox
    tf.float32)
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/ops/script_ops.py", line 317, in py_func
    func=func, inp=inp, Tout=Tout, stateful=stateful, eager=False, name=name)
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/ops/script_ops.py", line 225, in _internal_py_func
    input=inp, token=token, Tout=Tout, name=name)
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_script_ops.py", line 93, in _py_func
    "PyFunc", input=input, token=token, Tout=Tout, name=name)
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3271, in create_op
    op_def=op_def)
  File "/home/hy/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1650, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

UnknownError (see above for traceback): exceptions.AttributeError: 'module' object has no attribute 'cv'
     [[Node: test/PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32, DT_FLOAT, DT_FLOAT], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"](test/strided_slice_4, test/strided_slice_5, test/strided_slice_2, test/strided_slice_3, test/PyFunc/input_4, test/PyFunc/input_5)]]
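
The failing op is the PyFunc node created in tf_extended/seglink.py (tf_seglink_to_bbox), so the AttributeError is raised inside that Python callback rather than in TensorFlow itself. An error of the form "'module' object has no attribute 'cv'" typically means the callback calls the legacy cv2.cv API, which existed in OpenCV 2.x but was removed in OpenCV 3.x (where, for example, cv2.cv.BoxPoints became cv2.boxPoints). Below is a version-agnostic sketch of that conversion; the helper name and call site are illustrative and the actual code in seglink.py may differ:

import cv2
import numpy as np

def rect_to_points(rect):
    # rect is a rotated rectangle: ((cx, cy), (w, h), angle)
    if hasattr(cv2, 'cv'):                 # OpenCV 2.x
        points = cv2.cv.BoxPoints(rect)
    else:                                  # OpenCV 3.x and later
        points = cv2.boxPoints(rect)
    return np.asarray(points, dtype=np.int32)

print(rect_to_points(((100.0, 50.0), (80.0, 40.0), 30.0)))

If the environment has OpenCV 3.x installed, patching the cv2.cv.* calls in seglink.py accordingly (or installing an OpenCV 2.4.x build compatible with Python 2.7) should allow the PyFunc node to run.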