Lazy Loading of Google App Engine Text Fields

Google App Engine Text Field Loading Problem
This post describes a problem I hit with Text fields in Google App Engine and how to work around it. A platform update made Text fields lazily loaded, so if the field's content is not read before the JDO connection is closed, it comes back null. A small code change fixes it.

To store long text (more than 500 characters) in Google App Engine's datastore via JDO, a plain String won't do; the field should be declared as com.google.appengine.api.datastore.Text.
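A minimal sketch of such an entity (the class and field names here are illustrative, not from the original project; the annotations and the Text type are the standard GAE JDO API):

```java
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

import com.google.appengine.api.datastore.Text;

// Hypothetical entity with a long "content" field stored as Text.
@PersistenceCapable
public class Article {
    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Long id;

    // A String property is indexed and limited in length;
    // Text holds arbitrarily long content but is never indexed.
    @Persistent
    private Text content;

    public String getContent() {
        return content == null ? null : content.getValue();
    }

    public void setContent(String value) {
        this.content = new Text(value);
    }
}
```

Note the conversion in the accessors: callers work with String, while the datastore layer sees Text.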

 

Previously I could set/get these fields freely without any trouble. Today, just to clear a cache, I re-uploaded the project with no code changes at all, visited the site... and it crashed.

 

Debugging locally, the com.google.appengine.api.datastore.Text field I fetched was null. Off to Google...

 

It seems App Engine was upgraded and Text fields are now loaded lazily. That means if you haven't done a get on the field before closing the JDO connection, you've lost your chance: it will simply be null. Frankly, anyone capable of solving their own performance problems can live without lazy loading, and anyone who can't will find it only an obstacle. So whichever kind I am, I'm entitled to dislike it.

 

The fix: after fetching the object, do one throwaway get on the Text field before closing the connection. Done.
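In code, the workaround might look like this (a sketch: `PMF` is the usual PersistenceManagerFactory singleton from a typical GAE project, and `Article` is the illustrative entity above, neither taken from the original post):

```java
import javax.jdo.PersistenceManager;

public class ArticleDao {
    public Article loadArticle(Long id) {
        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            Article article = pm.getObjectById(Article.class, id);
            // Touch the lazily loaded Text field while the
            // PersistenceManager is still open; otherwise the
            // field remains null after pm.close().
            article.getContent();
            return article;
        } finally {
            pm.close();
        }
    }
}
```

Alternatively, JDO lets you mark the field with `@Persistent(defaultFetchGroup = "true")` so it is loaded eagerly along with the rest of the object, which avoids the throwaway get entirely.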

 

 

10-26
Summarize my code. I'd now like the recording strategy changed to: keep waiting indefinitely while the user has not spoken; once they are speaking, allow a long wait of 10–20 s; on a pause, use a short wait of 5 s.

```python
"""
【语音识别模块】Speech Recognition (Offline)
Real-time speech recognition from the microphone, based on the offline Vosk model.
Supports one-shot recognition & continuous listening mode.
Volume visualization, model-path checks, safe resource release.
"""
import random
import threading
import time
import logging
import json
import os

from vosk import Model, KaldiRecognizer
import pyaudio

from database import config
from Progress.utils.logger_utils import log_time, log_step, log_var, log_call
from Progress.utils.logger_config import setup_logger

# --- Configuration ---
VOICE_TIMEOUT = config.timeout                # max time to wait for speech input (s)
VOICE_PHRASE_TIMEOUT = config.phrase_timeout  # max recording time for one phrase
VOSK_MODEL_PATH = "./vosk-model-small-cn-0.22"

# --- Logger ---
logger = logging.getLogger("ai_assistant")

# Minimum effective volume threshold (tune for your environment)
MIN_VOLUME_THRESHOLD = 600


class SpeechRecognizer:
    def __init__(self):
        self.model = None
        self.recognizer = None
        self.audio = None
        self.is_listening = False
        self.callback = None        # user-registered callback: callback(text)
        self._last_text = ""
        self._listen_thread = None
        self.sample_rate = 16000    # Vosk requires a 16 kHz sample rate
        self.chunk_size = 1600      # recommended frame size (~100 ms)

        # 🔒 TTS playback flag (controlled externally)
        self._is_tts_playing = False
        self._tts_lock = threading.Lock()

        self._load_model()
        self._init_audio_system()

    @property
    def is_tts_playing(self) -> bool:
        with self._tts_lock:
            return self._is_tts_playing

    def set_tts_playing(self, status: bool):
        """Called by the TTS module to report whether playback is active."""
        with self._tts_lock:
            self._is_tts_playing = status
        if not status:
            logger.debug("🟢 TTS 播放结束,语音识别恢复")

    @log_step("加载 Vosk 离线模型")
    @log_time
    def _load_model(self):
        """Load the local Vosk model."""
        if not os.path.exists(VOSK_MODEL_PATH):
            raise FileNotFoundError(
                f"❌ Vosk 模型路径不存在: {VOSK_MODEL_PATH}\n"
                "请从 https://alphacephei.com/vosk/models 下载中文小模型并解压至此路径"
            )
        try:
            logger.info(f"📦 正在加载模型: {VOSK_MODEL_PATH}")
            self.model = Model(VOSK_MODEL_PATH)
            log_call("✅ 模型加载成功")
        except Exception as e:
            logger.critical(f"🔴 加载 Vosk 模型失败: {e}")
            raise RuntimeError("Failed to load Vosk model") from e

    @log_step("初始化音频系统")
    @log_time
    def _init_audio_system(self):
        """Initialize PyAudio and create the shared recognizer."""
        try:
            self.audio = pyaudio.PyAudio()
            # Default recognizer (can be reset before each recognition pass)
            self.recognizer = KaldiRecognizer(self.model, self.sample_rate)
            logger.debug("✅ 音频系统初始化完成")
        except Exception as e:
            logger.exception("❌ 初始化音频系统失败")
            raise

    @property
    def last_text(self) -> str:
        return self._last_text

    def is_available(self) -> bool:
        """Check whether the microphone is usable."""
        if not self.audio:
            return False
        try:
            stream = self.audio.open(
                format=pyaudio.paInt16,
                channels=1,
                rate=self.sample_rate,
                input=True,
                frames_per_buffer=self.chunk_size,
            )
            stream.close()
            return True
        except Exception as e:
            logger.error(f"🔴 麦克风不可用或无权限: {e}")
            return False

    @log_step("执行单次语音识别")
    @log_time
    def listen_and_recognize(self, timeout=None) -> str:
        timeout = timeout or VOICE_TIMEOUT
        start_time = time.time()
        in_speech = False
        result_text = ""
        logger.debug(f"🎙️ 开始单次语音识别 (timeout={timeout:.1f}s)...")

        # 🔴 If TTS is currently playing, return immediately
        if self.is_tts_playing:
            logger.info("🔇 TTS 正在播放,跳过本次识别")
            return ""

        logger.info("🔊 请说话...")
        stream = None
        try:
            recognizer = KaldiRecognizer(self.model, self.sample_rate)
            stream = self.audio.open(
                format=pyaudio.paInt16,
                channels=1,
                rate=self.sample_rate,
                input=True,
                frames_per_buffer=self.chunk_size,
            )
            while (time.time() - start_time) < timeout:
                # Re-check playback state (TTS may start mid-loop)
                if self.is_tts_playing:
                    logger.info("🔇 TTS 开始播放,中断识别")
                    break
                data = stream.read(self.chunk_size, exception_on_overflow=False)
                if recognizer.AcceptWaveform(data):
                    final_result = json.loads(recognizer.Result())
                    text = final_result.get("text", "").strip()
                    if text:
                        result_text = text
                        break
                else:
                    partial = json.loads(recognizer.PartialResult())
                    if partial.get("partial", "").strip():
                        in_speech = True
                if not in_speech and (time.time() - start_time) >= timeout:
                    logger.info("💤 超时未检测到语音输入")
                    break

            if result_text:
                self._last_text = result_text
                logger.info(f"🎯 识别结果: '{result_text}'")
                return result_text
            else:
                logger.info("❓ 未识别到有效内容")
                self._last_text = ""
                return ""
        except Exception as e:
            logger.exception("🔴 执行单次语音识别时发生异常")
            self._last_text = ""
            return ""
        finally:
            if stream:
                try:
                    stream.stop_stream()
                    stream.close()
                except Exception as e:
                    logger.warning(f"⚠️ 关闭音频流失败: {e}")

    @log_step("启动持续语音监听")
    def start_listening(self, callback=None, language=None):
        """
        Start a background thread that listens continuously.
        :param callback: callable taking one string argument, text
        :param language: language code (ignored; the model decides)
        """
        if self.is_listening:
            logger.warning("⚠️ 已在监听中,忽略重复启动")
            return
        if not callable(callback):
            logger.error("🔴 回调函数无效,请传入可调用对象")
            return
        self.callback = callback
        self.is_listening = True
        self._listen_thread = threading.Thread(
            target=self._background_listen, args=(language,), daemon=True
        )
        self._listen_thread.start()
        logger.info("🟢 已启动后台语音监听")

    @log_step("停止语音监听")
    def stop_listening(self):
        """Stop background listening safely."""
        if not self.is_listening:
            return
        self.is_listening = False
        logger.info("🛑 正在停止语音监听...")
        if self._listen_thread and self._listen_thread != threading.current_thread():
            self._listen_thread.join(timeout=3)
            if self._listen_thread.is_alive():
                logger.warning("🟡 监听线程未能及时退出(可能阻塞)")
        elif self._listen_thread == threading.current_thread():
            logger.error("❌ 无法在当前线程中 join 自己!请检查调用栈")
        else:
            logger.debug("No thread to join")
        logger.info("✅ 语音监听已停止")

    def _background_listen(self, language=None):
        """Background listening loop."""
        logger.debug("🎧 后台监听线程已启动")
        stream = None
        try:
            stream = self.audio.open(
                format=pyaudio.paInt16,
                channels=1,
                rate=self.sample_rate,
                input=True,
                frames_per_buffer=self.chunk_size,
            )
        except Exception as e:
            logger.error(f"🔴 无法打开音频流: {e}")
            return

        try:
            while self.is_listening:
                # 🔴 While TTS is playing, skip reads
                if self.is_tts_playing:
                    time.sleep(0.1)  # reduce CPU usage
                    continue
                try:
                    data = stream.read(self.chunk_size, exception_on_overflow=False)
                    if self.recognizer.AcceptWaveform(data):
                        result_json = self.recognizer.Result()
                        result_dict = json.loads(result_json)
                        text = result_dict.get("text", "").strip()
                        if text and self.callback:
                            logger.info(f"🔔 回调触发: '{text}'")
                            self.callback(text)
                        self.recognizer.Reset()
                    else:
                        partial = json.loads(self.recognizer.PartialResult())
                        partial_text = partial.get("partial", "")
                        if partial_text.strip():
                            logger.debug(f"🗣️ 当前语音片段: '{partial_text}'")
                except Exception as e:
                    logger.exception("Background listening error")
                time.sleep(0.05)
        finally:
            if stream:
                stream.stop_stream()
                stream.close()
            logger.debug("🔚 后台监听线程退出")


recognizer = SpeechRecognizer()
```

The module above is accompanied by the main entry point:

```python
"""
【AI语音助手】Main entry point.
Integrates speech recognition, Qwen intent understanding, TTS, and action execution.
✅ Fixed: no longer accesses the private _last_text field.
✅ Hardened: exception guards, type hints, reserved hook for a wake word.
"""
import sys
import time
import logging

# --- Logging utilities ---
from Progress.utils.logger_config import setup_logger
from Progress.utils.logger_utils import log_time, log_step, log_var, log_call

# --- Core instances from each module ---
from Progress.app.voice_recognizer import recognizer
from Progress.app.qwen_assistant import assistant
from Progress.app.text_to_speech import tts_engine
from Progress.app.system_controller import executor
from database import config

# --- Global logger ---
logger = logging.getLogger("ai_assistant")


@log_step("处理一次语音交互")
@log_time
def handle_single_interaction():
    """One full interaction: listen -> recognize -> AI decision -> execute -> reply."""
    # 1. Listen, using the dynamic timeout
    text = recognizer.listen_and_recognize(recognizer.current_timeout)
    if not text:
        logger.info(f"🔇 未检测到有效语音(超时:{recognizer.current_timeout}秒)")
        return
    logger.info(f"🗣️ 用户说: '{text}'")

    # 2. AI decision
    decision = assistant.process_voice_command(text)

    # 3. Build the reply
    result = executor.execute_task_plan(decision)
    if result["success"]:
        ai_reply = str(result["message"])
        logger.info(f"✅ 操作成功: {result['operation']} -> {ai_reply}")
    else:
        error_msg = result["message"]
        ai_reply = f"抱歉,{error_msg if '抱歉' not in error_msg else error_msg[3:]}"
        logger.warning(f"❌ 执行失败: {error_msg}")

    # 4. Speak
    logger.info(f"🤖 回复: {ai_reply}")
    tts_engine.speak(ai_reply)


@log_step("启动 AI 语音助手")
@log_time
def main():
    logger.info("🚀 正在启动 AI 语音助手系统...")
    try:
        tts_engine.start()
        log_call("✅ 所有模块初始化完成,进入监听循环")
        log_call("\n" + "—" * 50)
        log_call("🎙️ 语音助手已就绪")
        log_call("💡 说出你的命令,例如:'打开浏览器'、'写一篇春天的文章'")
        log_call("🛑 说出'退出'、'关闭'、'停止'或'拜拜'来结束程序")
        log_call("—" * 50 + "\n")

        while True:
            try:
                handle_single_interaction()

                # 🚩 Check whether the last execution requested an exit
                last_result = executor.last_result  # assumes TaskOrchestrator records last_result
                if last_result and last_result.get("should_exit"):
                    logger.info("🎯 接收到退出指令,即将终止程序...")
                    break  # leave the loop and run cleanup
            except KeyboardInterrupt:
                logger.info("🛑 用户主动中断 (Ctrl+C),准备退出...")
                raise  # let main catch it and exit
            except Exception as e:
                logger.exception("⚠️ 单次交互过程中发生异常,已降级处理")
                error_msg = "抱歉,我在处理刚才的操作时遇到了一点问题。"
                logger.info(f"🗣️ 回复: {error_msg}")
                tts_engine.speak(error_msg)

            last_text = recognizer.last_text.lower()
            exit_keywords = ['退出', '关闭', '停止', '拜拜', '再见']
            if any(word in last_text for word in exit_keywords):
                logger.info("🎯 用户请求退出,程序即将终止")
                break

            time.sleep(0.5)

        tts_engine.stop()
        logger.info("👋 语音助手已安全退出")
    except KeyboardInterrupt:
        logger.info("🛑 用户通过 Ctrl+C 中断程序")
        print("\n👋 再见!")
    except Exception as e:
        logger.exception("❌ 主程序运行时发生未预期异常")
        print(f"\n🚨 程序异常终止:{e}")
        sys.exit(1)


if __name__ == "__main__":
    if not logging.getLogger().handlers:
        setup_logger(name="ai_assistant", log_dir="logs", level=logging.INFO)
    main()
```
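The recording strategy requested above (wait indefinitely before speech; once speaking, cap the utterance at 10–20 s; end it after a 5 s pause) can be sketched as a small state machine, independent of Vosk and PyAudio. The `read_chunk` and `has_speech` helpers and the injectable `clock` are hypothetical stand-ins for the real audio stream and a VAD check, not part of the project's code.

```python
import time

def adaptive_listen(read_chunk, has_speech, short_wait=5.0, long_cap=20.0,
                    clock=time.monotonic):
    """Collect one utterance of audio frames.

    - Before any speech: loop forever (no timeout while the user is silent).
    - After speech starts: stop once `short_wait` seconds pass without voice
      (a pause), or once the utterance reaches the hard `long_cap` in total.
    """
    speech_started = False
    start = last_voice = 0.0
    frames = []
    while True:
        frame = read_chunk()        # one audio chunk from the stream
        now = clock()
        if has_speech(frame):       # VAD says the user is talking
            if not speech_started:
                speech_started = True
                start = now
            last_voice = now
            frames.append(frame)
        elif speech_started:        # silence, but only after speech began
            frames.append(frame)
            if now - last_voice >= short_wait:   # pause exceeded short wait
                break
        if speech_started and now - start >= long_cap:  # hard cap per utterance
            break
    return frames
```

Wiring this into `listen_and_recognize` would mean replacing the single `use_timeout` check with these two timers, feeding each collected frame to `KaldiRecognizer.AcceptWaveform` as before.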
10-25
Tidal research, a key branch of ocean science, draws on physical oceanography, geographic information systems, and hydraulic engineering. TMD2.05.zip is a MATLAB-based toolbox dedicated to tidal analysis, giving researchers and practitioners systematic support for tidal modelling and computation. Its modular design delivers two core capabilities.

On the interaction side, the toolbox provides a graphical operating environment that lowers the barrier for non-specialist users. Through preset parameter-input modules (covering geographic coordinates, time series, station data, and more), users can configure model runs themselves. The interface integrates standard components for data loading, parameter adjustment, visualization, and flow control, turning complex numerical computation into an interactive workflow.

In the tidal-prediction module, the toolbox combines harmonic analysis with tidal-current constituent methods. These algorithms decompose tidal observations, identify the key constituents (including K1, O1, and M2), and generate tidal forecasts on different time scales. With these models, researchers can accurately estimate the period and amplitude of tidal variation in a given sea area, providing quantitative grounding for marine engineering, harbour planning, and marine ecological research.

Practical applications of the toolbox include:

- **Tidal dynamics analysis**: comparing multi-station observations to reveal the spatiotemporal distribution of a region's dominant tidal constituents
- **Numerical model construction**: building tidal dynamics models from historical observation series for digital reconstruction and prediction of tidal phenomena
- **Engineering impact quantification**: assessing how artificial structures disturb natural tidal rhythms in coastal development projects
- **Extreme event simulation**: coupling storm-surge and astronomical-tide models to improve the spatial and temporal precision of marine hazard warnings

The toolbox ships as a "TMD" package containing a complete function library and example scripts. After deployment, users can call the modules from MATLAB and follow the technical documentation through the full workflow. By combining professional computing power with a user-friendly interface, it forms a complete research chain from data input to results output, markedly improving the engineering applicability and research efficiency of tidal studies.

This resource is shared from the internet for learning and exchange only; do not use it commercially. If it infringes your rights, contact me for removal.
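For readers unfamiliar with the harmonic method mentioned above, its core idea is summing cosine constituents: h(t) = h₀ + Σ Aᵢ cos(ωᵢt − φᵢ). The following sketch is illustrative only and does not use TMD's actual API; the constituent angular speeds are standard values, but any amplitudes and phases supplied to it would be made-up example data, not real station observations.

```python
import math

# Standard angular speeds of common tidal constituents, in degrees per hour.
SPEEDS_DEG_PER_HOUR = {
    "M2": 28.9841042,  # principal lunar semidiurnal
    "S2": 30.0000000,  # principal solar semidiurnal
    "K1": 15.0410686,  # lunisolar diurnal
    "O1": 13.9430356,  # principal lunar diurnal
}

def tide_height(t_hours: float, mean_level: float, constituents: dict) -> float:
    """h(t) = mean_level + sum(A * cos(omega * t - phase)) over constituents.

    `constituents` maps a name from SPEEDS_DEG_PER_HOUR to
    (amplitude, phase_in_degrees).
    """
    h = mean_level
    for name, (amp, phase_deg) in constituents.items():
        omega = math.radians(SPEEDS_DEG_PER_HOUR[name])  # radians per hour
        h += amp * math.cos(omega * t_hours - math.radians(phase_deg))
    return h
```

In practice the amplitudes and phases are not chosen by hand: harmonic analysis fits them to an observed tide-gauge record, which is the step tools like TMD automate.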