Using BuildConfig Fields in Android Studio to Customize Debug Logging

This article shows how to define custom BuildConfig fields in an Android project to control log output across different environments, and how to set this up in the Gradle configuration.

BuildConfig.DEBUG

In the Gradle build, the BuildConfig.DEBUG field is set to true for the default debug build type and false for release, and these values cannot be changed. The field is generated automatically at compile time; in Android Studio the generated file lives under app/build/generated/source/buildConfig/&lt;build variant&gt;/&lt;package name&gt;/BuildConfig.java. Taking 9GAG as an example, here is what the file looks like in a release build:


public final class BuildConfig {
  public static final boolean DEBUG = false;
  //...
  public static final boolean IS_SHOW_LOG = false;
}

Customizing BuildConfig Fields

You may have noticed the unfamiliar IS_SHOW_LOG field above. It is entirely a custom field of my own, and I use it, rather than the default DEBUG field, to control log output. Consider a common scenario: an app talks to different API environments, say a test server and a production server, and sometimes there are more than two environments. You cannot drive all of that off the single DEBUG flag. This is where Gradle's custom BuildConfig fields come in: by defining our own fields, we can configure the app for each environment in code.

Add the following to the app module's build.gradle:

buildTypes {
        release {
            minifyEnabled false
            buildConfigField "boolean", "IS_SHOW_LOG", NOT_SHOW_LOG
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
        debug {
            minifyEnabled false
            buildConfigField "boolean", "IS_SHOW_LOG", SHOW_LOG
        }
        preview {
            minifyEnabled false
            buildConfigField "boolean", "IS_SHOW_LOG", SHOW_LOG
        }
    }
Here SHOW_LOG and NOT_SHOW_LOG are declared in gradle.properties:

org.gradle.jvmargs=-Xmx1024m
SHOW_LOG=true
NOT_SHOW_LOG=false
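
The same mechanism covers the multi-environment API scenario mentioned earlier: buildConfigField takes a type, a field name, and a literal value string, so String fields work as well. A sketch under assumed host names (the URLs below are purely illustrative):

```groovy
buildTypes {
    release {
        // The third argument is copied verbatim into BuildConfig.java,
        // so a String value needs its own escaped quotes.
        buildConfigField "String", "API_BASE_URL", '"https://api.example.com/"'
    }
    debug {
        buildConfigField "String", "API_BASE_URL", '"https://test.example.com/"'
    }
    preview {
        buildConfigField "String", "API_BASE_URL", '"https://preview.example.com/"'
    }
}
```

In code you would then read BuildConfig.API_BASE_URL, and each build variant automatically points at its own environment.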
Rebuild the project, and you can then write whatever log output you want for the app:

public class MyLog {

    public static int i(String tag, String msg) {
        if (BuildConfig.IS_SHOW_LOG) {
            return Log.i(tag, msg);
        } else {
            return -1;
        }
    }

    public static int i(String tag, String msg, Throwable tr) {
        if (BuildConfig.IS_SHOW_LOG) {
            return Log.i(tag, msg, tr);
        } else {
            return -1;
        }
    }
    
    public static int v(String tag, String msg) {
        if (BuildConfig.IS_SHOW_LOG) {
            return Log.v(tag, msg);
        } else {
            return -1;
        }
    }

    public static int v(String tag, String msg, Throwable tr) {
        if (BuildConfig.IS_SHOW_LOG) {
            return Log.v(tag, msg, tr);
        } else {
            return -1;
        }
    }
    // ...plus d(), w(), e(), or any other log methods you need
 }
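
To see the effect of the generated constant, here is a minimal desktop sketch of the same gating pattern. Since android.util.Log is only available on a device, System.out stands in for it here, and the IS_SHOW_LOG constant is a hand-written stand-in for the generated BuildConfig field:

```java
public class MyLogDemo {
    // Stand-in for BuildConfig.IS_SHOW_LOG; in a real app this value
    // comes from the generated BuildConfig class, not a local constant.
    static final boolean IS_SHOW_LOG = false;

    // Mirrors MyLog.i(): forwards when logging is enabled, returns -1 otherwise.
    static int i(String tag, String msg) {
        if (IS_SHOW_LOG) {
            String line = "I/" + tag + ": " + msg;
            System.out.println(line);
            return line.length();
        }
        return -1;
    }

    public static void main(String[] args) {
        // With IS_SHOW_LOG = false (a release-style build), nothing is
        // printed and the call returns -1, matching MyLog's behavior.
        System.out.println(i("Demo", "hello")); // prints -1
    }
}
```

Because IS_SHOW_LOG is a compile-time constant, the compiler can drop the dead branch entirely in release builds, so the suppressed log calls cost essentially nothing at runtime.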


self.logger.warning(f"未找到枚举: {enum_name_key}") continue header_part = match.group(1) body_content = match.group(2) lines = [ln for ln in body_content.split('\n') if ln.strip()] last_line = lines[-1] if lines else "" indent_match = re.match(r'^(\s*)', last_line) line_indent = indent_match.group(1) if indent_match else " " expanded_last = last_line.expandtabs(4) clean_last = remove_comments(last_line) first_macro_match = re.search(r'LOCALE_[A-Z0-9_]+', clean_last) default_indent_len = len(line_indent.replace('\t', ' ')) target_macro_col = default_indent_len if first_macro_match: raw_before = last_line[:first_macro_match.start()] expanded_before = raw_before.expandtabs(4) target_macro_col = len(expanded_before) eq_match = re.search(r'=\s*\d+', clean_last) if eq_match and first_macro_match: eq_abs_start = first_macro_match.start() + eq_match.start() raw_eq_part = last_line[:eq_abs_start] expanded_eq_part = raw_eq_part.expandtabs(4) target_eq_col = len(expanded_eq_part) else: target_eq_col = target_macro_col + 30 new_body = body_content.rstrip() if not new_body.endswith(','): new_body += ',' for macro_name, next_idx in insertions: current_visual_len = len(macro_name.replace('\t', ' ')) padding_to_eq = max(1, target_eq_col - target_macro_col - current_visual_len) formatted_macro = f"{macro_name}{' ' * padding_to_eq}= {next_idx}" visible_macros = len(re.findall(r'LOCALE_[A-Z0-9_]+', clean_last)) MAX_PER_LINE = 4 if visible_macros < MAX_PER_LINE and last_line.strip(): insertion = f" {formatted_macro}," updated_last = last_line.rstrip() + insertion new_body = body_content.rsplit(last_line, 1)[0] + updated_last last_line = updated_last clean_last = remove_comments(last_line) else: raw_indent_len = len(line_indent.replace('\t', ' ')) leading_spaces = max(0, target_macro_col - raw_indent_len) prefix_padding = ' ' * leading_spaces new_line = f"\n{line_indent}{prefix_padding}{formatted_macro}," new_body += new_line last_line = new_line.strip() clean_last = 
remove_comments(last_line) new_enum = f"{header_part}{new_body}\n}};" full_start = enum_start + match.start() full_end = enum_start + match.end() replacements.append((full_start, full_end, new_enum)) self.logger.debug(f"插入 ENUM: {dict(insertions)}") enum_data.pop("pending_updates", None) # === Step 4: 更新 TABLEs &mdash;&mdash; 使用 pending_appends 中的数据 === seen = set() table_names = [] for target in self.locale_targets: name = target["table"] if name not in seen: table_names.append(name) seen.add(name) for table_name in table_names: if table_name not in self.power_tables: self.logger.debug(f"跳过未定义的表: {table_name}") continue if table_name not in self.table_pending_appends: self.logger.debug(f"无待插入数据: {table_name}") continue data_to_insert = self.table_pending_appends[table_name] if not data_to_insert: continue pattern = re.compile( rf'(\b{re.escape(table_name)}\s*\[\s*\]\s*=\s*\{{)(.*?)(\}}\s*;\s*)', re.DOTALL | re.IGNORECASE ) match = pattern.search(table_block) if not match: self.logger.warning(f"未找到数组定义: {table_name}") continue header_part = match.group(1) body_content = match.group(2) footer_part = match.group(3) lines = [ln for ln in body_content.split('\n') if ln.strip()] last_line = lines[-1] if lines else "" indent_match = re.match(r'^(\s*)', last_line) line_indent = indent_match.group(1) if indent_match else " " new_body = body_content.rstrip() # ==== 遍历每个待插入的 locale 数据块 ==== for item in data_to_insert: locale_tag = item['locale_tag'] locale_display = locale_tag.replace('_', '-') macro_suffix = locale_tag # 添加注释标记(与原始风格一致) new_body += f"\n{line_indent}/* Locale {locale_display} ({macro_suffix}) */" # 原始行加空格,不 strip,不加额外 indent for raw_line in item['data_lines']: # 仅排除纯空白行(可选),保留所有格式 if raw_line.strip(): # 排除空行 # 使用原始缩进,不再加 {line_indent} new_body += f"\n{line_indent}{raw_line}" # 构造新 table 内容 full_start = table_start + match.start() full_end = table_start + match.end() new_table = f"{header_part}{new_body}\n{footer_part}" replacements.append((full_start, 
full_end, new_table)) self.logger.debug(f"插入{len(data_to_insert)} 个 Locale 数据块到 {table_name}") # 清除防止重复写入 self.table_pending_appends.pop(table_name, None) # === Step 5: 应用所有替换(倒序避免偏移错乱)=== if not replacements: self.logger.info("无任何修改需要写入") return replacements.sort(key=lambda x: x[0], reverse=True) # 倒序应用 final_content = content for start, end, r in replacements: #self.logger.info(f"增加 [{start}:{end}] → 新内容:\n{r[:150]}...") final_content = final_content[:start] + r + final_content[end:] if content == final_content: self.logger.info("文件内容未发生变化,无需写入") return # 备份原文件 backup_path = self.c_file_path.with_suffix('.c.bak') copy2(self.c_file_path, backup_path) self.logger.info(f"已备份 → {backup_path}") # 写入新内容 self.c_file_path.write_text(final_content, encoding='utf-8') self.logger.info(f"成功写回 C 文件: {self.c_file_path}") self.logger.info(f"共更新 {len(replacements)} 个区块") except Exception as e: self.logger.error(f"写回文件失败: {e}", exc_info=True) raise def run(self): self.logger.info("开始同步 POWER LOCALE 定义...") try: self.parse_c_power_definitions() was_modified = self.validate_and_repair() if was_modified: if self.dry_run: self.logger.info("预览模式:检测到变更,但不会写入文件") else: self._write_back_in_blocks() # 执行写入操作 self.logger.info("同步完成:已成功更新 C 文件") else: self.logger.info("所有 Locale 已存在,无需修改") return was_modified except Exception as e: self.logger.error(f"同步失败: {e}", exc_info=True) raise def main(): logging.basicConfig( level=logging.INFO, format='%(asctime)s [%(levelname)s] %(name)s: %(message)s', handlers=[ logging.FileHandler(LOG_FILE, encoding='utf-8'), logging.StreamHandler(sys.stdout) ], force=True ) logger = logging.getLogger(__name__) # 固定配置 c_file_path = "input/wlc_clm_data_6726b0.c" dry_run = False log_level = "INFO" config_path = "config/config.json" logging.getLogger().setLevel(log_level) print(f"开始同步 POWER LOCALE 定义...") print(f"C 源文件: {c_file_path}") if dry_run: print("启用 dry-run 模式:仅预览变更,不修改文件") try: sync = PowerTableSynchronizer( c_file_path=None, dry_run=dry_run, 
config_path=config_path, ) sync.run() print("同步完成!") print(f"详细日志已保存至: {LOG_FILE}") except FileNotFoundError as e: logger.error(f"文件未找到: {e}") print("请检查文件路径是否正确。") sys.exit(1) except PermissionError as e: logger.error(f"权限错误: {e}") print("无法读取或写入文件,请检查权限。") sys.exit(1) except Exception as e: logger.error(f"程序异常退出: {e}", exc_info=True) sys.exit(1) if __name__ == '__main__': main() 内容变更检测 参考power_sync怎么做的
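power_sync 的做法是在 _write_back_in_blocks 中先在内存里拼出 final_content,再用 content == final_content 判断是否有实际变更,无变更就既不备份也不写回。下面是把这一思路抽出来的最小示意(content_changed 和 sha256_of 是为演示虚构的辅助函数,不在原脚本中;摘要比较适合只保留指纹、不保留全文的场景):

```python
import hashlib


def sha256_of(text: str) -> str:
    """计算文本的 SHA-256 摘要,便于跨次运行留痕比较。"""
    return hashlib.sha256(text.encode('utf-8')).hexdigest()


def content_changed(old_text: str, new_text: str) -> bool:
    """写回前的内容变更检测:与 power_sync 的等值比较语义一致,
    但只比较摘要,调用方可以把摘要存盘,下次运行时对比。"""
    return sha256_of(old_text) != sha256_of(new_text)
```

在 power_sync 里,写回前的 `if content == final_content: return` 等价于 `if not content_changed(content, final_content): return`,只有检测到变更才执行备份和写入。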
```
C:\Users\admin\PyCharmMiscProject\.venv\Scripts\python.exe F:\excle_to_clm\rate_set\rate_sync.py
2025-10-25 17:04:11,054 [INFO] root: 资源路径: F:\excle_to_clm
2025-10-25 17:04:11,055 [INFO] root: 资源路径: F:\excle_to_clm
2025-10-25 17:04:11,055 [INFO] __main__.RateSetSynchronizer: 开始同步 RATE_SET 数据...
2025-10-25 17:04:11,106 [INFO] __main__.RateSetSynchronizer: 正在处理 C 文件: wlc_clm_data_6726b0.c
2025-10-25 17:04:11,106 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_20M_EXT4_rate_set.c
2025-10-25 17:04:11,107 [INFO] __main__.RateSetSynchronizer: 解析出 23 个已有枚举项
2025-10-25 17:04:11,109 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 6 个 rate set 定义块
2025-10-25 17:04:11,109 [INFO] __main__.RateSetSynchronizer: 共成功提取 6 个有效子集
2025-10-25 17:04:11,110 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 23
2025-10-25 17:04:11,111 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 246 个数据项,6 个索引,6 个枚举
2025-10-25 17:04:11,111 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,111 [WARNING] __main__.RateSetSynchronizer: ❌ 处理文件失败 [2G_20M_EXT4_rate_set.c]: 未找到枚举定义: rate_set_2g_20m_ext4
Traceback (most recent call last):
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 353, in inject_new_data
    updated_content = self._write_back_in_blocks(
        full_content, parsed, new_data, new_indices, new_enums
    )
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 408, in _write_back_in_blocks
    raise ValueError(f"未找到枚举定义: {self.enum_name}")
ValueError: 未找到枚举定义: rate_set_2g_20m_ext4
2025-10-25 17:04:11,116 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_20M_EXT_rate_set.c
2025-10-25 17:04:11,116 [INFO] __main__.RateSetSynchronizer: 解析出 19 个已有枚举项
2025-10-25 17:04:11,118 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 6 个 rate set 定义块
2025-10-25 17:04:11,118 [INFO] __main__.RateSetSynchronizer: 共成功提取 6 个有效子集
2025-10-25 17:04:11,118 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 19
2025-10-25 17:04:11,118 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 194 个数据项,6 个索引,6 个枚举
2025-10-25 17:04:11,118 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,118 [WARNING] __main__.RateSetSynchronizer: ❌ 处理文件失败 [2G_20M_EXT_rate_set.c]: 未找到枚举定义: rate_set_2g_20m_ext
Traceback (most recent call last):
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 353, in inject_new_data
    updated_content = self._write_back_in_blocks(
        full_content, parsed, new_data, new_indices, new_enums
    )
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 408, in _write_back_in_blocks
    raise ValueError(f"未找到枚举定义: {self.enum_name}")
ValueError: 未找到枚举定义: rate_set_2g_20m_ext
2025-10-25 17:04:11,119 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_20M_rate_set.c
2025-10-25 17:04:11,119 [INFO] __main__.RateSetSynchronizer: 解析出 32 个已有枚举项
2025-10-25 17:04:11,120 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 8 个 rate set 定义块
2025-10-25 17:04:11,120 [INFO] __main__.RateSetSynchronizer: 共成功提取 8 个有效子集
2025-10-25 17:04:11,120 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 32
2025-10-25 17:04:11,120 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 184 个数据项,8 个索引,8 个枚举
2025-10-25 17:04:11,120 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,122 [INFO] __main__.RateSetSynchronizer: 成功构建新内容,总长度变化: 3905640 → 3910047
2025-10-25 17:04:11,122 [INFO] __main__.RateSetSynchronizer: ✅ 成功注入 8 条目到 rate_set_2g_20m
2025-10-25 17:04:11,122 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_40M_EXT4_rate_set.c
2025-10-25 17:04:11,123 [INFO] __main__.RateSetSynchronizer: 解析出 19 个已有枚举项
2025-10-25 17:04:11,125 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 5 个 rate set 定义块
2025-10-25 17:04:11,125 [INFO] __main__.RateSetSynchronizer: 共成功提取 5 个有效子集
2025-10-25 17:04:11,125 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 19
2025-10-25 17:04:11,125 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 241 个数据项,5 个索引,5 个枚举
2025-10-25 17:04:11,125 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,125 [WARNING] __main__.RateSetSynchronizer: ❌ 处理文件失败 [2G_40M_EXT4_rate_set.c]: 未找到枚举定义: rate_set_2g_40m_ext4
Traceback (most recent call last):
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 353, in inject_new_data
    updated_content = self._write_back_in_blocks(
        full_content, parsed, new_data, new_indices, new_enums
    )
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 408, in _write_back_in_blocks
    raise ValueError(f"未找到枚举定义: {self.enum_name}")
ValueError: 未找到枚举定义: rate_set_2g_40m_ext4
2025-10-25 17:04:11,126 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_40M_EXT_rate_set.c
2025-10-25 17:04:11,126 [INFO] __main__.RateSetSynchronizer: 解析出 15 个已有枚举项
2025-10-25 17:04:11,127 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 5 个 rate set 定义块
2025-10-25 17:04:11,127 [INFO] __main__.RateSetSynchronizer: 共成功提取 5 个有效子集
2025-10-25 17:04:11,127 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 15
2025-10-25 17:04:11,127 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 189 个数据项,5 个索引,5 个枚举
2025-10-25 17:04:11,127 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,128 [WARNING] __main__.RateSetSynchronizer: ❌ 处理文件失败 [2G_40M_EXT_rate_set.c]: 未找到枚举定义: rate_set_2g_40m_ext
Traceback (most recent call last):
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 353, in inject_new_data
    updated_content = self._write_back_in_blocks(
        full_content, parsed, new_data, new_indices, new_enums
    )
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 408, in _write_back_in_blocks
    raise ValueError(f"未找到枚举定义: {self.enum_name}")
ValueError: 未找到枚举定义: rate_set_2g_40m_ext
2025-10-25 17:04:11,128 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_40M_rate_set.c
2025-10-25 17:04:11,128 [INFO] __main__.RateSetSynchronizer: 解析出 28 个已有枚举项
2025-10-25 17:04:11,130 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 3 个 rate set 定义块
2025-10-25 17:04:11,130 [INFO] __main__.RateSetSynchronizer: 共成功提取 3 个有效子集
2025-10-25 17:04:11,130 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 28
2025-10-25 17:04:11,130 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 91 个数据项,3 个索引,3 个枚举
2025-10-25 17:04:11,130 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,130 [WARNING] __main__.RateSetSynchronizer: ❌ 处理文件失败 [2G_40M_rate_set.c]: 未找到枚举定义: rate_set_2g_40m
Traceback (most recent call last):
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 353, in inject_new_data
    updated_content = self._write_back_in_blocks(
        full_content, parsed, new_data, new_indices, new_enums
    )
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 408, in _write_back_in_blocks
    raise ValueError(f"未找到枚举定义: {self.enum_name}")
ValueError: 未找到枚举定义: rate_set_2g_40m
2025-10-25 17:04:11,136 [INFO] __main__.RateSetSynchronizer: 原文件已备份为: wlc_clm_data_6726b0_20251025_170411.c.bak
2025-10-25 17:04:11,140 [INFO] __main__.RateSetSynchronizer: ✅ 成功写入更新后的文件: wlc_clm_data_6726b0.c
✅ 同步完成
同步完成!
```
进程已结束,退出代码为 0

```python
# rate_set/rate_sync.py
import json
import os
import re
import logging
import sys
from pathlib import Path
from utils import resource_path
from datetime import datetime
from typing import Dict, List, Tuple, Any

# -------------------------------
# 日志配置
# -------------------------------
PROJECT_ROOT = Path(__file__).parent.parent.resolve()
LOG_DIR = PROJECT_ROOT / "output" / "log"
LOG_DIR.mkdir(parents=True, exist_ok=True)
LOG_FILE = LOG_DIR / f"rate_sync_{datetime.now().strftime('%Y%m%d_%H%M%S')}.log"


class RateSetSynchronizer:
    MAX_ENUM_PER_LINE = 4           # enum 每行最多几个宏
    MAX_DATA_ITEMS_PER_LINE = 4     # data 数组每行最多几个值
    MAX_INDEX_ITEMS_PER_LINE = 15   # index 数组每行最多几个值

    def __init__(self, c_file_path=None, dry_run=False, config_path="config/config.json"):
        self.logger = logging.getLogger(f"{__name__}.RateSetSynchronizer")
        # 加载配置
        self.config_file_path = resource_path(config_path)
        if not os.path.exists(self.config_file_path):
            raise FileNotFoundError(f"配置文件不存在: {self.config_file_path}")
        with open(self.config_file_path, 'r', encoding='utf-8') as f:
            self.config = json.load(f)
        self.dry_run = dry_run
        # C 文件路径
        if c_file_path is None:
            internal_c_path = self.config["target_c_file"]
            self.c_file_path = resource_path(internal_c_path)
        else:
            self.c_file_path = Path(c_file_path)
        if not self.c_file_path.exists():
            raise FileNotFoundError(f"找不到 C 源文件: {self.c_file_path}")
        # === 单一锚点标记 ===
        self.block_start = self.config["STR_RATE_SET_DATA"]
        self.block_end = self.config["END_RATE_SET_DATA"]
        # 数组与枚举名
        self.data_array_name = "rate_sets_2g_20m"
        self.index_array_name = "rate_sets_index_2g_20m"
        self.enum_name = "rate_set_2g_20m"
        # 扫描所有 .c 文件(排除自身)
        self.rate_set_dir = Path(__file__).parent
        self.rate_files = [
            f for f in self.rate_set_dir.iterdir()
            if f.is_file() and f.suffix == ".c" and f.name != "rate_sync.py"
        ]
        # 加载文件名和结构映射
        self.target_map = self.config.get("target_map")
        if not isinstance(self.target_map, dict):
            raise ValueError("config.json 中缺少 'target_map' 字段或格式错误")
        self._validate_target_map()  # ← 添加一致性校验

    def _validate_target_map(self):
        """验证 target_map 是否一致,防止多个 full_key 映射到同一数组"""
        seen_data = {}
        seen_index = {}
        seen_enum = {}
        for key, cfg in self.target_map.items():
            d = cfg["data"]
            i = cfg["index"]
            e = cfg["enum"]
            if d in seen_data:
                raise ValueError(f"data 数组冲突: '{d}' 被 '{seen_data[d]}' 和 '{key}' 同时使用")
            if i in seen_index:
                raise ValueError(f"index 数组冲突: '{i}' 被 '{seen_index[i]}' 和 '{key}' 同时使用")
            if e in seen_enum:
                raise ValueError(f"enum 名称冲突: '{e}' 被 '{seen_enum[e]}' 和 '{key}' 同时使用")
            seen_data[d] = key
            seen_index[i] = key
            seen_enum[e] = key

    def parse_filename(self, filename: str) -> str:
        """
        从文件名提取 band_bw_ext 类型键,用于查找 target_map
        示例:
            2G_20M_rate_set.c      → 2G_20M_BASE
            2G_20M_EXT_rate_set.c  → 2G_20M_EXT
            5G_80M_EXT4_rate_set.c → 5G_80M_EXT4
        """
        match = re.match(r'^([A-Z0-9]+)_([0-9]+M)(?:_(EXT\d*))?_rate_set\.c$', filename, re.I)
        if not match:
            raise ValueError(f"无法识别的文件名格式: {filename}")
        band, bw, ext = match.groups()
        ext_type = ext.upper() if ext else "BASE"
        return f"{band.upper()}_{bw.upper()}_{ext_type}"

    def extract_sub_rate_sets(self, content: str) -> List[Dict[str, Any]]:
        """
        提取 /*NAME*/ N, WL_RATE_xxx... 子集,支持多行、空格、换行等常见格式
        """
        sub_sets = []
        # 移除所有 ) 与 ; 结尾符号(不影响结构)
        cleaned_content = re.sub(r'[);]', '', content)
        # === 第一阶段:用非贪婪方式找出所有 /*...*/ N, ... 块 ===
        # 匹配:/*NAME*/ 任意空白 数字 , 任意内容(直到下一个 /* 或结尾)
        block_pattern = r'/\*\s*([A-Z0-9_]+)\s*\*/\s*(\d+)\s*,?[\s\n]*((?:(?!\s*/\*\s*[A-Z0-9_]+\s*\*/).)*)'
        matches = re.findall(block_pattern, cleaned_content, re.DOTALL | re.IGNORECASE)
        self.logger.info(f"从文件中初步匹配到 {len(matches)} 个 rate set 定义块")
        for name, count_str, body in matches:
            try:
                count = int(count_str)
            except ValueError:
                self.logger.warning(f"计数无效,跳过: {name} = '{count_str}'")
                continue
            # 从 body 中提取所有 WL_RATE_XXX
            rate_items = re.findall(r'WL_RATE_[A-Za-z0-9_]+', body)
            if len(rate_items) < count:
                self.logger.warning(f"[{name}] 条目不足: 需要 {count}, 实际 {len(rate_items)} → 截断处理")
                rate_items = rate_items[:count]
            else:
                rate_items = rate_items[:count]
            self.logger.debug(f"  提取成功: {name} (count={count}) → {len(rate_items)} 项")
            sub_sets.append({
                "name": name.strip(),
                "count": count,
                "rates": rate_items
            })
        self.logger.info(f"共成功提取 {len(sub_sets)} 个有效子集")
        return sub_sets

    def parse_all_structures(self, full_content: str) -> Dict:
        """直接从完整 C 文件中解析 enum/data/index 结构"""
        result = {
            'existing_enum': {},
            'data_entries': [],
            'index_values': [],
            'data_len': 0
        }
        # === 解析 enum ===
        enum_pattern = rf'enum\s+{re.escape(self.enum_name)}\s*\{{([^}}]+)\}};'
        enum_match = re.search(enum_pattern, full_content, re.DOTALL)
        if enum_match:
            body = enum_match.group(1)
            entries = re.findall(r'(RATE_SET_[^=,\s]+)\s*=\s*(\d+)', body)
            result['existing_enum'] = {k: int(v) for k, v in entries}
            self.logger.info(f"解析出 {len(entries)} 个已有枚举项")
        else:
            self.logger.warning(f"未找到 enum 定义: {self.enum_name}")
        # === 解析 data 数组 ===
        data_pattern = rf'static const unsigned char {re.escape(self.data_array_name)}\[\] = \{{([^}}]+)\}};'
        data_match = re.search(data_pattern, full_content, re.DOTALL)
        if not data_match:
            raise ValueError(f"未找到 data 数组: {self.data_array_name}")
        data_code = data_match.group(1)
        result['data_entries'] = [item.strip() for item in re.split(r'[,\n]+', data_code) if item.strip()]
        result['data_len'] = len(result['data_entries'])
        # === 解析 index 数组 ===
        index_pattern = rf'static const unsigned short {re.escape(self.index_array_name)}\[\] = \{{([^}}]+)\}};'
        index_match = re.search(index_pattern, full_content, re.DOTALL)
        if not index_match:
            raise ValueError(f"未找到 index 数组: {self.index_array_name}")
        index_code = index_match.group(1)
        result['index_values'] = [int(x.strip()) for x in re.split(r'[,\n]+', index_code) if x.strip()]
        return result

    def build_injection(self, new_subsets: List[Dict],
                        existing_enum: Dict[str, int],
                        current_data_len: int) -> Tuple[List[str], List[int], List[str]]:
        """
        构建要注入的新内容
        返回: (new_data, new_indices, new_enums)
        """
        new_data = []
        new_indices = []
        new_enums = []
        current_offset = 0  # 当前相对于新块起始的偏移
        next_enum_value = max(existing_enum.values(), default=-1) + 1
        self.logger.info(f"开始构建注入内容,当前最大枚举值 = {next_enum_value}")
        for subset in new_subsets:
            enum_name = subset["name"]  # ✅ 使用完整名称,避免前缀冲突!
            if enum_name in existing_enum:
                self.logger.info(f"跳过已存在的枚举项: {enum_name} = {existing_enum[enum_name]}")
                current_offset += 1 + subset["count"]
                continue
            # 添加长度 + 所有速率
            new_data.append(str(subset["count"]))
            new_data.extend(subset["rates"])
            # 索引是"从旧 data 尾部开始"的全局偏移
            global_index = current_data_len + current_offset
            new_indices.append(global_index)
            # 枚举定义
            new_enums.append(f"    {enum_name} = {next_enum_value}")
            self.logger.debug(f"新增枚举: {enum_name} → value={next_enum_value}, index={global_index}")
            next_enum_value += 1
            current_offset += 1 + subset["count"]
        self.logger.info(f"构建完成:新增 {len(new_data)} 个数据项,{len(new_indices)} 个索引,{len(new_enums)} 个枚举")
        return new_data, new_indices, new_enums

    def format_list(self, items: List[str], indent: str = "    ", width: int = 8) -> str:
        """格式化数组为多行字符串"""
        lines = []
        for i in range(0, len(items), width):
            chunk = items[i:i + width]
            lines.append(indent + ", ".join(chunk) + ",")
        return "\n".join(lines).rstrip(",")

    def _safe_write_back(self, old_content: str, new_content: str) -> bool:
        """安全写回文件,带备份"""
        if old_content == new_content:
            self.logger.info("主文件内容无变化,无需写入")
            return False
        if self.dry_run:
            self.logger.info("DRY-RUN 模式启用,跳过实际写入")
            print("[DRY RUN] 差异预览(前 20 行):")
            diff = new_content.splitlines()[:20]
            for line in diff:
                print(f"  {line}")
            return True
        # 创建备份
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        backup = self.c_file_path.with_name(f"{self.c_file_path.stem}_{timestamp}.c.bak")
        try:
            self.c_file_path.rename(backup)
            self.logger.info(f"原文件已备份为: {backup.name}")
        except Exception as e:
            self.logger.error(f"备份失败: {e}")
            raise
        # 写入新内容
        try:
            self.c_file_path.write_text(new_content, encoding='utf-8')
            self.logger.info(f"✅ 成功写入更新后的文件: {self.c_file_path.name}")
            return True
        except Exception as e:
            self.logger.error(f"写入失败: {e}", exc_info=True)
            raise

    def inject_new_data(self) -> bool:
        try:
            full_content = self.c_file_path.read_text(encoding='utf-8')
        except Exception as e:
            self.logger.error(f"读取主 C 文件失败: {e}")
            raise
        self.logger.info(f"正在处理 C 文件: {self.c_file_path.name}")
        start_pos = full_content.find(self.block_start)
        end_pos = full_content.find(self.block_end)
        if start_pos == -1:
            raise ValueError(f"未找到起始锚点: {self.block_start}")
        if end_pos == -1:
            raise ValueError(f"未找到结束锚点: {self.block_end}")
        if end_pos <= start_pos:
            raise ValueError("结束锚点位于起始锚点之前")
        inner_start = start_pos + len(self.block_start)
        block_content = full_content[inner_start:end_pos].strip()
        all_changes_made = False
        # === 遍历每一个 rate set 子文件 ===
        for file_path in self.rate_files:
            try:
                self.logger.info(f"→ 处理子文件: {file_path.name}")
                # --- 1. 解析文件名得到 full_key ---
                try:
                    full_key = self.parse_filename(file_path.name)
                    self.logger.debug(f"  ├─ 解析出 key: {full_key}")
                except ValueError as ve:
                    self.logger.warning(f"  └─ 跳过无效文件名: {ve}")
                    continue
                # --- 2. 查找 target_map 映射 ---
                target = self.target_map.get(full_key)
                if not target:
                    self.logger.warning(f"  └─ 未在 config.json 中定义映射关系: {full_key},跳过")
                    continue
                # --- 3. 动态设置当前注入目标 ---
                self.data_array_name = target["data"]
                self.index_array_name = target["index"]
                self.enum_name = target["enum"]
                self.logger.debug("  ├─ 绑定目标:")
                self.logger.debug(f"      data: {self.data_array_name}")
                self.logger.debug(f"      index: {self.index_array_name}")
                self.logger.debug(f"      enum: {self.enum_name}")
                # --- 4. 解析主文件中的当前结构 ---
                try:
                    parsed = self.parse_all_structures(full_content)
                except Exception as e:
                    self.logger.error(f"  └─ 解析主文件结构失败: {e}")
                    continue
                # --- 5. 提取该子文件中的 rate sets ---
                file_content = file_path.read_text(encoding='utf-8')
                subsets = self.extract_sub_rate_sets(file_content)
                if not subsets:
                    self.logger.info("  └─ 无有效子集数据")
                    continue
                # --- 6. 构建要注入的内容 ---
                new_data, new_indices, new_enums = self.build_injection(
                    subsets,
                    existing_enum=parsed['existing_enum'],
                    current_data_len=parsed['data_len']
                )
                if not new_data:
                    self.logger.info("  └─ 无需更新")
                    continue
                # --- 7. 写回新内容(精准插入)---
                updated_content = self._write_back_in_blocks(
                    full_content, parsed, new_data, new_indices, new_enums
                )
                if updated_content != full_content:
                    all_changes_made = True
                    full_content = updated_content  # 更新内存内容供后续文件使用
                self.logger.info(f"✅ 成功注入 {len(subsets)} 条目到 {self.enum_name}")
            except Exception as e:
                self.logger.warning(f"❌ 处理文件失败 [{file_path.name}]: {e}", exc_info=True)
                continue
        # 最终写回磁盘
        if all_changes_made:
            try:
                return self._safe_write_back(self.c_file_path.read_text(encoding='utf-8'), full_content)
            except Exception as e:
                self.logger.error(f"写入最终文件失败: {e}")
                raise
        else:
            self.logger.info("没有需要更新的内容")
            return False

    def _write_back_in_blocks(self, full_content: str, parsed: Dict,
                              new_data: List[str], new_indices: List[int],
                              new_enums: List[str]) -> str:
        """
        使用局部块操作策略:只在 /* START */ ... /* END */ 范围内修改内容
        避免跨区域误改,无需额外边界校验
        """
        self.logger.info("开始执行局部块写入操作...")
        # === Step 1: 查找锚点位置并提取 block ===
        start_pos = full_content.find(self.block_start)
        end_pos = full_content.find(self.block_end)
        if start_pos == -1 or end_pos == -1:
            raise ValueError(f"未找到锚点标记: {self.block_start} 或 {self.block_end}")
        if end_pos <= start_pos:
            raise ValueError("结束锚点位于起始锚点之前")
        inner_start = start_pos + len(self.block_start)
        block_content = full_content[inner_start:end_pos]
        replacements = []  # (start_in_block, end_in_block, replacement)

        def remove_comments(text: str) -> str:
            text = re.sub(r'//.*$', '', text, flags=re.MULTILINE)
            text = re.sub(r'/\*.*?\*/', '', text, flags=re.DOTALL)
            return text.strip()

        # === Step 2: 更新 ENUM ===
        if new_enums:
            enum_pattern = rf'(enum\s+{re.escape(self.enum_name)}\s*\{{)([^}}]*)\}}\s*;'
            match = re.search(enum_pattern, block_content, re.DOTALL | re.IGNORECASE)
            if not match:
                raise ValueError(f"未找到枚举定义: {self.enum_name}")
            header = match.group(1)
            body_content = match.group(2)
            lines = [ln for ln in body_content.split('\n') if ln.strip()]
            last_line = lines[-1] if lines else ""
            indent_match = re.match(r'^(\s*)', last_line)
            line_indent = indent_match.group(1) if indent_match else "    "
            clean_last = remove_comments(last_line)
            first_macro_match = re.search(r'RATE_SET_[A-Z0-9_]+', clean_last)
            eq_match = re.search(r'=\s*\d+', clean_last)
            target_eq_col = 30
            if first_macro_match and eq_match:
                raw_before_eq = last_line[:first_macro_match.start() + eq_match.start()]
                expanded_before_eq = raw_before_eq.expandtabs(4)
                target_eq_col = len(expanded_before_eq)
            new_body = body_content.rstrip()
            if not new_body.endswith(','):
                new_body += ','
            for enum_def in new_enums:
                macro_name = enum_def.split('=')[0].strip().split()[-1]
                value = enum_def.split('=')[1].strip().rstrip(',')
                current_len = len(macro_name.replace('\t', '    '))
                padding = max(1, target_eq_col - current_len)
                formatted = f"{macro_name}{' ' * padding}= {value}"
                visible_macros = len(re.findall(r'RATE_SET_[A-Z0-9_]+', remove_comments(last_line)))
                if visible_macros < self.MAX_ENUM_PER_LINE and last_line.strip():
                    insertion = f" {formatted},"
                    updated_last = last_line.rstrip() + insertion
                    new_body = body_content.rsplit(last_line, 1)[0] + updated_last
                    last_line = updated_last
                else:
                    prefix_padding = ' ' * max(0, len(line_indent.replace('\t', '    ')) - len(line_indent))
                    new_line = f"\n{line_indent}{prefix_padding}{formatted},"
                    new_body += new_line
                    last_line = new_line.strip()
            new_enum_code = f"{header}{new_body}\n}};"
            replacements.append((match.start(), match.end(), new_enum_code))
            self.logger.debug(f"计划更新 enum: 添加 {len(new_enums)} 项")

        # === Step 3: 更新 DATA 数组 ===
        if new_data:
            data_pattern = rf'(static const unsigned char {re.escape(self.data_array_name)}\[\]\s*=\s*\{{)([^}}]*)(\}}\s*;)'
            match = re.search(data_pattern, block_content, re.DOTALL)
            if not match:
                raise ValueError(f"未找到 data 数组: {self.data_array_name}")
            header = match.group(1)
            body_content = match.group(2).strip()
            footer = match.group(3)
            lines = body_content.splitlines()
            last_line = lines[-1] if lines else ""
            indent_match = re.match(r'^(\s*)', last_line)
            line_indent = indent_match.group(1) if indent_match else "    "
            new_body = body_content.rstrip()
            if not new_body.endswith(','):
                new_body += ','
            for i in range(0, len(new_data), self.MAX_DATA_ITEMS_PER_LINE):
                chunk = new_data[i:i + self.MAX_DATA_ITEMS_PER_LINE]
                line = "\n" + line_indent + ", ".join(chunk) + ","
                new_body += line
            new_data_code = f"{header}{new_body}\n{footer}"
            replacements.append((match.start(), match.end(), new_data_code))
            self.logger.debug(f"计划更新 data 数组: 添加 {len(new_data)} 个元素")

        # === Step 4: 更新 INDEX 数组 ===
        if new_indices:
            index_pattern = rf'(static const unsigned short {re.escape(self.index_array_name)}\[\]\s*=\s*\{{)([^}}]*)(\}}\s*;)'
            match = re.search(index_pattern, block_content, re.DOTALL)
            if not match:
                raise ValueError(f"未找到 index 数组: {self.index_array_name}")
            header = match.group(1)
            body_content = match.group(2).strip()
            footer = match.group(3)
            lines = body_content.splitlines()
            last_line = lines[-1] if lines else ""
            indent_match = re.match(r'^(\s*)', last_line)
            line_indent = indent_match.group(1) if indent_match else "    "
            new_body = body_content.rstrip()
            if not new_body.endswith(','):
                new_body += ','
            str_indices = [str(x) for x in new_indices]
            chunk_size = self.MAX_INDEX_ITEMS_PER_LINE
            for i in range(0, len(str_indices), chunk_size):
                chunk = str_indices[i:i + chunk_size]
                line = "\n" + line_indent + ", ".join(chunk) + ","
                new_body += line
            new_index_code = f"{header}{new_body}\n{footer}"
            replacements.append((match.start(), match.end(), new_index_code))
            self.logger.debug(f"计划更新 index 数组: 添加 {len(new_indices)} 个索引")

        # === Step 5: 倒序应用所有替换到 block_content ===
        if not replacements:
            self.logger.info("无任何变更需要写入")
            return full_content
        # 倒序避免偏移错乱
        for start, end, r in sorted(replacements, key=lambda x: x[0], reverse=True):
            block_content = block_content[:start] + r + block_content[end:]

        # === Step 6: 拼接回完整文件 ===
        final_content = (
            full_content[:inner_start] +
            block_content +
            full_content[end_pos:]
        )
        self.logger.info(f"成功构建新内容,总长度变化: {len(full_content)} → {len(final_content)}")
        return final_content

    def run(self):
        self.logger.info("开始同步 RATE_SET 数据...")
        try:
            changed = self.inject_new_data()
            if changed:
                print("✅ 同步完成")
            else:
                print("✅ 无新数据,无需更新")
            return {
                "success": True,
                "changed": changed,
                "file": str(self.c_file_path),
                "backup": f"{self.c_file_path.stem}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.c.bak"
                          if changed and not self.dry_run else None
            }
        except Exception as e:
            self.logger.error(f"同步失败: {e}", exc_info=True)
            print("❌ 同步失败,详见日志。")
            return {"success": False, "error": str(e)}


def main():
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s [%(levelname)s] %(name)s: %(message)s',
        handlers=[
            logging.FileHandler(LOG_FILE, encoding='utf-8'),
            logging.StreamHandler(sys.stdout)
        ],
        force=True
    )
    dry_run = False  # 设置为 True 可进行试运行
    try:
        sync = RateSetSynchronizer(dry_run=dry_run)
        sync.run()
        print("同步完成!")
    except FileNotFoundError as e:
        logging.error(f"文件未找到: {e}")
        print("❌ 文件错误,请检查路径。")
        sys.exit(1)
    except PermissionError as e:
        logging.error(f"权限错误: {e}")
        print("❌ 权限不足,请关闭编辑器或以管理员运行。")
        sys.exit(1)
    except Exception as e:
        logging.error(f"程序异常退出: {e}", exc_info=True)
        print("❌ 同步失败,详见日志。")
        sys.exit(1)


if __name__ == '__main__':
    main()
```

问题:如何让 self.logger.warning(f"❌ 处理文件失败 [{file_path.name}]: {e}", exc_info=True) 之后不会报错?
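日志里 warning 后面的整段 Traceback 来自 exc_info=True:warning 本身并不会中断程序(inject_new_data 的 except 分支已经 continue 了),打印出来的只是被捕获异常的堆栈。想让告警之后既不打印堆栈、也不影响后续文件,可以只捕获预期的 ValueError,并去掉 exc_info=True。下面是一个可独立运行的最小示意(process_one_file 是为演示虚构的函数,模拟"找不到枚举定义"的失败路径,不在原脚本中):

```python
import logging

logging.basicConfig(level=logging.INFO, format='%(levelname)s %(message)s')
logger = logging.getLogger("RateSetSynchronizer")


def process_one_file(name: str, enum_found: bool) -> bool:
    """模拟处理单个子文件:找不到枚举定义时抛 ValueError。"""
    if not enum_found:
        raise ValueError(f"未找到枚举定义: {name}")
    return True


results = []
for name, ok in [("rate_set_2g_20m", True), ("rate_set_2g_20m_ext4", False)]:
    try:
        results.append(process_one_file(name, ok))
    except ValueError as e:
        # 只捕获预期异常;不传 exc_info=True 就不会打印整段 Traceback
        logger.warning("处理文件失败 [%s]: %s", name, e)
        results.append(False)
        continue  # 继续处理下一个文件,而不是让异常中断整个同步

print(results)  # 期望: [True, False]
```

如果确实需要排查问题,可以把堆栈降级到 DEBUG 级别(例如 logger.debug("...", exc_info=True)),平时的 WARNING 输出就只剩一行简短信息。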