C:\Users\admin\PyCharmMiscProject\.venv\Scripts\python.exe F:\excle_to_clm\rate_set\rate_sync.py
2025-10-25 17:04:11,054 [INFO] root: 资源路径: F:\excle_to_clm
2025-10-25 17:04:11,055 [INFO] root: 资源路径: F:\excle_to_clm
2025-10-25 17:04:11,055 [INFO] __main__.RateSetSynchronizer: 开始同步 RATE_SET 数据...
2025-10-25 17:04:11,106 [INFO] __main__.RateSetSynchronizer: 正在处理 C 文件: wlc_clm_data_6726b0.c
2025-10-25 17:04:11,106 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_20M_EXT4_rate_set.c
2025-10-25 17:04:11,107 [INFO] __main__.RateSetSynchronizer: 解析出 23 个已有枚举项
2025-10-25 17:04:11,109 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 6 个 rate set 定义块
2025-10-25 17:04:11,109 [INFO] __main__.RateSetSynchronizer: 共成功提取 6 个有效子集
2025-10-25 17:04:11,110 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 23
2025-10-25 17:04:11,111 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 246 个数据项,6 个索引,6 个枚举
2025-10-25 17:04:11,111 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,111 [WARNING] __main__.RateSetSynchronizer: ❌ 处理文件失败 [2G_20M_EXT4_rate_set.c]: 未找到枚举定义: rate_set_2g_20m_ext4
Traceback (most recent call last):
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 353, in inject_new_data
    updated_content = self._write_back_in_blocks(
        full_content, parsed, new_data, new_indices, new_enums
    )
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 408, in _write_back_in_blocks
    raise ValueError(f"未找到枚举定义: {self.enum_name}")
ValueError: 未找到枚举定义: rate_set_2g_20m_ext4
2025-10-25 17:04:11,116 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_20M_EXT_rate_set.c
2025-10-25 17:04:11,116 [INFO] __main__.RateSetSynchronizer: 解析出 19 个已有枚举项
2025-10-25 17:04:11,118 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 6 个 rate set 定义块
2025-10-25 17:04:11,118 [INFO] __main__.RateSetSynchronizer: 共成功提取 6 个有效子集
2025-10-25 17:04:11,118 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 19
2025-10-25 17:04:11,118 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 194 个数据项,6 个索引,6 个枚举
2025-10-25 17:04:11,118 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,118 [WARNING] __main__.RateSetSynchronizer: ❌ 处理文件失败 [2G_20M_EXT_rate_set.c]: 未找到枚举定义: rate_set_2g_20m_ext
Traceback (most recent call last):
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 353, in inject_new_data
    updated_content = self._write_back_in_blocks(
        full_content, parsed, new_data, new_indices, new_enums
    )
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 408, in _write_back_in_blocks
    raise ValueError(f"未找到枚举定义: {self.enum_name}")
ValueError: 未找到枚举定义: rate_set_2g_20m_ext
2025-10-25 17:04:11,119 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_20M_rate_set.c
2025-10-25 17:04:11,119 [INFO] __main__.RateSetSynchronizer: 解析出 32 个已有枚举项
2025-10-25 17:04:11,120 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 8 个 rate set 定义块
2025-10-25 17:04:11,120 [INFO] __main__.RateSetSynchronizer: 共成功提取 8 个有效子集
2025-10-25 17:04:11,120 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 32
2025-10-25 17:04:11,120 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 184 个数据项,8 个索引,8 个枚举
2025-10-25 17:04:11,120 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,122 [INFO] __main__.RateSetSynchronizer: 成功构建新内容,总长度变化: 3905640 → 3910047
2025-10-25 17:04:11,122 [INFO] __main__.RateSetSynchronizer: ✅ 成功注入 8 条目到 rate_set_2g_20m
2025-10-25 17:04:11,122 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_40M_EXT4_rate_set.c
2025-10-25 17:04:11,123 [INFO] __main__.RateSetSynchronizer: 解析出 19 个已有枚举项
2025-10-25 17:04:11,125 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 5 个 rate set 定义块
2025-10-25 17:04:11,125 [INFO] __main__.RateSetSynchronizer: 共成功提取 5 个有效子集
2025-10-25 17:04:11,125 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 19
2025-10-25 17:04:11,125 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 241 个数据项,5 个索引,5 个枚举
2025-10-25 17:04:11,125 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,125 [WARNING] __main__.RateSetSynchronizer: ❌ 处理文件失败 [2G_40M_EXT4_rate_set.c]: 未找到枚举定义: rate_set_2g_40m_ext4
Traceback (most recent call last):
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 353, in inject_new_data
    updated_content = self._write_back_in_blocks(
        full_content, parsed, new_data, new_indices, new_enums
    )
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 408, in _write_back_in_blocks
    raise ValueError(f"未找到枚举定义: {self.enum_name}")
ValueError: 未找到枚举定义: rate_set_2g_40m_ext4
2025-10-25 17:04:11,126 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_40M_EXT_rate_set.c
2025-10-25 17:04:11,126 [INFO] __main__.RateSetSynchronizer: 解析出 15 个已有枚举项
2025-10-25 17:04:11,127 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 5 个 rate set 定义块
2025-10-25 17:04:11,127 [INFO] __main__.RateSetSynchronizer: 共成功提取 5 个有效子集
2025-10-25 17:04:11,127 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 15
2025-10-25 17:04:11,127 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 189 个数据项,5 个索引,5 个枚举
2025-10-25 17:04:11,127 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,128 [WARNING] __main__.RateSetSynchronizer: ❌ 处理文件失败 [2G_40M_EXT_rate_set.c]: 未找到枚举定义: rate_set_2g_40m_ext
Traceback (most recent call last):
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 353, in inject_new_data
    updated_content = self._write_back_in_blocks(
        full_content, parsed, new_data, new_indices, new_enums
    )
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 408, in _write_back_in_blocks
    raise ValueError(f"未找到枚举定义: {self.enum_name}")
ValueError: 未找到枚举定义: rate_set_2g_40m_ext
2025-10-25 17:04:11,128 [INFO] __main__.RateSetSynchronizer: → 处理子文件: 2G_40M_rate_set.c
2025-10-25 17:04:11,128 [INFO] __main__.RateSetSynchronizer: 解析出 28 个已有枚举项
2025-10-25 17:04:11,130 [INFO] __main__.RateSetSynchronizer: 从文件中初步匹配到 3 个 rate set 定义块
2025-10-25 17:04:11,130 [INFO] __main__.RateSetSynchronizer: 共成功提取 3 个有效子集
2025-10-25 17:04:11,130 [INFO] __main__.RateSetSynchronizer: 开始构建注入内容,当前最大枚举值 = 28
2025-10-25 17:04:11,130 [INFO] __main__.RateSetSynchronizer: 构建完成:新增 91 个数据项,3 个索引,3 个枚举
2025-10-25 17:04:11,130 [INFO] __main__.RateSetSynchronizer: 开始执行局部块写入操作...
2025-10-25 17:04:11,130 [WARNING] __main__.RateSetSynchronizer: ❌ 处理文件失败 [2G_40M_rate_set.c]: 未找到枚举定义: rate_set_2g_40m
Traceback (most recent call last):
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 353, in inject_new_data
    updated_content = self._write_back_in_blocks(
        full_content, parsed, new_data, new_indices, new_enums
    )
  File "F:\excle_to_clm\rate_set\rate_sync.py", line 408, in _write_back_in_blocks
    raise ValueError(f"未找到枚举定义: {self.enum_name}")
ValueError: 未找到枚举定义: rate_set_2g_40m
2025-10-25 17:04:11,136 [INFO] __main__.RateSetSynchronizer: 原文件已备份为: wlc_clm_data_6726b0_20251025_170411.c.bak
2025-10-25 17:04:11,140 [INFO] __main__.RateSetSynchronizer: ✅ 成功写入更新后的文件: wlc_clm_data_6726b0.c
✅ 同步完成
同步完成!
进程已结束,退出代码为 0
# rate_set/rate_sync.py
import json
import os
import re
import logging
import sys
from pathlib import Path
from utils import resource_path
from datetime import datetime
from typing import Dict, List, Tuple, Any

# -------------------------------
# Logging configuration
# -------------------------------
PROJECT_ROOT = Path(__file__).parent.parent.resolve()
LOG_DIR = PROJECT_ROOT / "output" / "log"
LOG_DIR.mkdir(parents=True, exist_ok=True)
LOG_FILE = LOG_DIR / f"rate_sync_{datetime.now().strftime('%Y%m%d_%H%M%S')}.log"


class RateSetSynchronizer:
    MAX_ENUM_PER_LINE = 4          # max number of macros per enum line
    MAX_DATA_ITEMS_PER_LINE = 4    # max number of values per line in the data array
    MAX_INDEX_ITEMS_PER_LINE = 15  # max number of values per line in the index array
    def __init__(self, c_file_path=None, dry_run=False, config_path="config/config.json"):
        self.logger = logging.getLogger(f"{__name__}.RateSetSynchronizer")

        # Load configuration
        self.config_file_path = resource_path(config_path)
        if not os.path.exists(self.config_file_path):
            raise FileNotFoundError(f"配置文件不存在: {self.config_file_path}")
        with open(self.config_file_path, 'r', encoding='utf-8') as f:
            self.config = json.load(f)
        self.dry_run = dry_run

        # Path of the target C file
        if c_file_path is None:
            internal_c_path = self.config["target_c_file"]
            self.c_file_path = resource_path(internal_c_path)
        else:
            self.c_file_path = Path(c_file_path)
        if not self.c_file_path.exists():
            raise FileNotFoundError(f"找不到 C 源文件: {self.c_file_path}")

        # === single pair of anchor markers ===
        self.block_start = self.config["STR_RATE_SET_DATA"]
        self.block_end = self.config["END_RATE_SET_DATA"]

        # Array and enum names
        self.data_array_name = "rate_sets_2g_20m"
        self.index_array_name = "rate_sets_index_2g_20m"
        self.enum_name = "rate_set_2g_20m"

        # Scan all .c files in this directory (excluding this script)
        self.rate_set_dir = Path(__file__).parent
        self.rate_files = [
            f for f in self.rate_set_dir.iterdir()
            if f.is_file() and f.suffix == ".c" and f.name != "rate_sync.py"
        ]

        # Load the filename-to-structure mapping
        self.target_map = self.config.get("target_map")
        if not isinstance(self.target_map, dict):
            raise ValueError("config.json 中缺少 'target_map' 字段或格式错误")
        self._validate_target_map()  # consistency check
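        # NOTE: the data/index/enum names set above are only defaults;
        # inject_new_data() rebinds them per sub-file from target_map before parsing and writing.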
    def _validate_target_map(self):
        """Validate target_map: make sure no two full_keys map to the same data/index/enum."""
        seen_data = {}
        seen_index = {}
        seen_enum = {}
        for key, cfg in self.target_map.items():
            d = cfg["data"]
            i = cfg["index"]
            e = cfg["enum"]
            if d in seen_data:
                raise ValueError(f"data 数组冲突: '{d}' 被 '{seen_data[d]}' 和 '{key}' 同时使用")
            if i in seen_index:
                raise ValueError(f"index 数组冲突: '{i}' 被 '{seen_index[i]}' 和 '{key}' 同时使用")
            if e in seen_enum:
                raise ValueError(f"enum 名称冲突: '{e}' 被 '{seen_enum[e]}' 和 '{key}' 同时使用")
            seen_data[d] = key
            seen_index[i] = key
            seen_enum[e] = key
    def parse_filename(self, filename: str) -> str:
        """
        Extract the band_bw_ext key from a filename; used to look up target_map.
        Examples:
            2G_20M_rate_set.c      → 2G_20M_BASE
            2G_20M_EXT_rate_set.c  → 2G_20M_EXT
            5G_80M_EXT4_rate_set.c → 5G_80M_EXT4
        """
        match = re.match(r'^([A-Z0-9]+)_([0-9]+M)(?:_(EXT\d*))?_rate_set\.c$', filename, re.I)
        if not match:
            raise ValueError(f"无法识别的文件名格式: {filename}")
        band, bw, ext = match.groups()
        ext_type = ext.upper() if ext else "BASE"
        return f"{band.upper()}_{bw.upper()}_{ext_type}"
    def extract_sub_rate_sets(self, content: str) -> List[Dict[str, Any]]:
        """
        Extract the /*NAME*/ N, WL_RATE_xxx... subsets; tolerates multi-line layouts,
        extra whitespace and line breaks.
        """
        sub_sets = []
        # Drop all ')' and ';' terminators (does not affect the structure)
        cleaned_content = re.sub(r'[);]', '', content)

        # === Stage 1: non-greedy match of every /*...*/ N, ... block ===
        # Matches: /*NAME*/ <whitespace> <number> , <anything up to the next /*NAME*/ or EOF>
        block_pattern = r'/\*\s*([A-Z0-9_]+)\s*\*/\s*(\d+)\s*,?[\s\n]*((?:(?!\s*/\*\s*[A-Z0-9_]+\s*\*/).)*)'
        matches = re.findall(block_pattern, cleaned_content, re.DOTALL | re.IGNORECASE)
        self.logger.info(f"从文件中初步匹配到 {len(matches)} 个 rate set 定义块")

        for name, count_str, body in matches:
            try:
                count = int(count_str)
            except ValueError:
                self.logger.warning(f"计数无效,跳过: {name} = '{count_str}'")
                continue
            # Pull every WL_RATE_XXX symbol out of the body
            rate_items = re.findall(r'WL_RATE_[A-Za-z0-9_]+', body)
            if len(rate_items) < count:
                self.logger.warning(f"[{name}] 条目不足: 需要 {count}, 实际 {len(rate_items)} → 截断处理")
            rate_items = rate_items[:count]
            self.logger.debug(f" 提取成功: {name} (count={count}) → {len(rate_items)} 项")
            sub_sets.append({
                "name": name.strip(),
                "count": count,
                "rates": rate_items
            })

        self.logger.info(f"共成功提取 {len(sub_sets)} 个有效子集")
        return sub_sets
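    # parse_all_structures() below looks up the currently bound enum/data/index names in the
    # *entire* main C file: a missing enum is only logged as a warning, while a missing
    # data or index array raises ValueError.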
    def parse_all_structures(self, full_content: str) -> Dict:
        """
        Parse the enum/data/index structures directly from the complete C file.
        """
        result = {
            'existing_enum': {},
            'data_entries': [],
            'index_values': [],
            'data_len': 0
        }

        # === parse the enum ===
        enum_pattern = rf'enum\s+{re.escape(self.enum_name)}\s*\{{([^}}]+)\}};'
        enum_match = re.search(enum_pattern, full_content, re.DOTALL)
        if enum_match:
            body = enum_match.group(1)
            entries = re.findall(r'(RATE_SET_[^=,\s]+)\s*=\s*(\d+)', body)
            result['existing_enum'] = {k: int(v) for k, v in entries}
            self.logger.info(f"解析出 {len(entries)} 个已有枚举项")
        else:
            self.logger.warning(f"未找到 enum 定义: {self.enum_name}")

        # === parse the data array ===
        data_pattern = rf'static const unsigned char {re.escape(self.data_array_name)}\[\] = \{{([^}}]+)\}};'
        data_match = re.search(data_pattern, full_content, re.DOTALL)
        if not data_match:
            raise ValueError(f"未找到 data 数组: {self.data_array_name}")
        data_code = data_match.group(1)
        result['data_entries'] = [item.strip() for item in re.split(r'[,\n]+', data_code) if item.strip()]
        result['data_len'] = len(result['data_entries'])

        # === parse the index array ===
        index_pattern = rf'static const unsigned short {re.escape(self.index_array_name)}\[\] = \{{([^}}]+)\}};'
        index_match = re.search(index_pattern, full_content, re.DOTALL)
        if not index_match:
            raise ValueError(f"未找到 index 数组: {self.index_array_name}")
        index_code = index_match.group(1)
        result['index_values'] = [int(x.strip()) for x in re.split(r'[,\n]+', index_code) if x.strip()]
        return result
    def build_injection(self, new_subsets: List[Dict], existing_enum: Dict[str, int],
                        current_data_len: int) -> Tuple[List[str], List[int], List[str]]:
        """
        Build the new content to inject.
        Returns: (new_data, new_indices, new_enums)
        """
        new_data = []
        new_indices = []
        new_enums = []
        current_offset = 0  # offset relative to the start of the new block
        next_enum_value = max(existing_enum.values(), default=-1) + 1
        self.logger.info(f"开始构建注入内容,当前最大枚举值 = {next_enum_value}")

        for subset in new_subsets:
            enum_name = subset["name"]  # use the full name to avoid prefix collisions
            if enum_name in existing_enum:
                self.logger.info(f"跳过已存在的枚举项: {enum_name} = {existing_enum[enum_name]}")
                current_offset += 1 + subset["count"]
                continue
            # Append the length followed by all rates
            new_data.append(str(subset["count"]))
            new_data.extend(subset["rates"])
            # The index is a global offset counted from the end of the old data array
            global_index = current_data_len + current_offset
            new_indices.append(global_index)
            # Enum definition
            new_enums.append(f" {enum_name} = {next_enum_value}")
            self.logger.debug(f"新增枚举: {enum_name} → value={next_enum_value}, index={global_index}")
            next_enum_value += 1
            current_offset += 1 + subset["count"]

        self.logger.info(f"构建完成:新增 {len(new_data)} 个数据项,{len(new_indices)} 个索引,{len(new_enums)} 个枚举")
        return new_data, new_indices, new_enums
    def format_list(self, items: List[str], indent: str = " ", width: int = 8) -> str:
        """Format a list of items as a multi-line, comma-separated string."""
        lines = []
        for i in range(0, len(items), width):
            chunk = items[i:i + width]
            lines.append(indent + ", ".join(chunk) + ",")
        return "\n".join(lines).rstrip(",")
    def _safe_write_back(self, old_content: str, new_content: str) -> bool:
        """Safely write the file back, creating a backup first."""
        if old_content == new_content:
            self.logger.info("主文件内容无变化,无需写入")
            return False
        if self.dry_run:
            self.logger.info("DRY-RUN 模式启用,跳过实际写入")
            print("[DRY RUN] 差异预览(前 20 行):")
            diff = new_content.splitlines()[:20]
            for line in diff:
                print(f" {line}")
            return True

        # Create a backup
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        backup = self.c_file_path.with_name(f"{self.c_file_path.stem}_{timestamp}.c.bak")
        try:
            self.c_file_path.rename(backup)
            self.logger.info(f"原文件已备份为: {backup.name}")
        except Exception as e:
            self.logger.error(f"备份失败: {e}")
            raise

        # Write the new content
        try:
            self.c_file_path.write_text(new_content, encoding='utf-8')
            self.logger.info(f"✅ 成功写入更新后的文件: {self.c_file_path.name}")
            return True
        except Exception as e:
            self.logger.error(f"写入失败: {e}", exc_info=True)
            raise
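    # inject_new_data() below drives the whole sync: for each *_rate_set.c sub-file it resolves
    # the target_map entry, parses the main file, builds the new entries and splices them in.
    # Any per-file failure is caught, logged via logger.warning(..., exc_info=True), which is
    # what prints the tracebacks seen in the console output, and the loop moves on.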
    def inject_new_data(self) -> bool:
        try:
            full_content = self.c_file_path.read_text(encoding='utf-8')
        except Exception as e:
            self.logger.error(f"读取主 C 文件失败: {e}")
            raise
        self.logger.info(f"正在处理 C 文件: {self.c_file_path.name}")

        start_pos = full_content.find(self.block_start)
        end_pos = full_content.find(self.block_end)
        if start_pos == -1:
            raise ValueError(f"未找到起始锚点: {self.block_start}")
        if end_pos == -1:
            raise ValueError(f"未找到结束锚点: {self.block_end}")
        if end_pos <= start_pos:
            raise ValueError("结束锚点位于起始锚点之前")
        inner_start = start_pos + len(self.block_start)
        block_content = full_content[inner_start:end_pos].strip()

        all_changes_made = False

        # === Iterate over every rate set sub-file ===
        for file_path in self.rate_files:
            try:
                self.logger.info(f"→ 处理子文件: {file_path.name}")

                # --- 1. Parse the filename into a full_key ---
                try:
                    full_key = self.parse_filename(file_path.name)
                    self.logger.debug(f" ├─ 解析出 key: {full_key}")
                except ValueError as ve:
                    self.logger.warning(f" └─ 跳过无效文件名: {ve}")
                    continue

                # --- 2. Look up the target_map entry ---
                target = self.target_map.get(full_key)
                if not target:
                    self.logger.warning(f" └─ 未在 config.json 中定义映射关系: {full_key},跳过")
                    continue

                # --- 3. Dynamically bind the current injection target ---
                self.data_array_name = target["data"]
                self.index_array_name = target["index"]
                self.enum_name = target["enum"]
                self.logger.debug(f" ├─ 绑定目标:")
                self.logger.debug(f" data: {self.data_array_name}")
                self.logger.debug(f" index: {self.index_array_name}")
                self.logger.debug(f" enum: {self.enum_name}")

                # --- 4. Parse the current structures in the main file ---
                try:
                    parsed = self.parse_all_structures(full_content)
                except Exception as e:
                    self.logger.error(f" └─ 解析主文件结构失败: {e}")
                    continue

                # --- 5. Extract the rate sets defined in this sub-file ---
                file_content = file_path.read_text(encoding='utf-8')
                subsets = self.extract_sub_rate_sets(file_content)
                if not subsets:
                    self.logger.info(f" └─ 无有效子集数据")
                    continue

                # --- 6. Build the content to inject ---
                new_data, new_indices, new_enums = self.build_injection(
                    subsets,
                    existing_enum=parsed['existing_enum'],
                    current_data_len=parsed['data_len']
                )
                if not new_data:
                    self.logger.info(f" └─ 无需更新")
                    continue

                # --- 7. Write the new content back (targeted insertion) ---
                updated_content = self._write_back_in_blocks(
                    full_content, parsed, new_data, new_indices, new_enums
                )
                if updated_content != full_content:
                    all_changes_made = True
                    full_content = updated_content  # keep the in-memory content current for later files
                    self.logger.info(f"✅ 成功注入 {len(subsets)} 条目到 {self.enum_name}")

            except Exception as e:
                self.logger.warning(f"❌ 处理文件失败 [{file_path.name}]: {e}", exc_info=True)
                continue

        # Finally write back to disk
        if all_changes_made:
            try:
                return self._safe_write_back(self.c_file_path.read_text(encoding='utf-8'), full_content)
            except Exception as e:
                self.logger.error(f"写入最终文件失败: {e}")
                raise
        else:
            self.logger.info("没有需要更新的内容")
            return False
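    # _write_back_in_blocks() below searches only the region between block_start and block_end
    # for the bound enum/data/index definitions, and raises ValueError
    # (e.g. "未找到枚举定义: ...") when one of them is not inside that anchored block.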
    def _write_back_in_blocks(self, full_content: str, parsed: Dict,
                              new_data: List[str], new_indices: List[int], new_enums: List[str]) -> str:
        """
        Block-local write strategy: only modify content inside the /* START */ ... /* END */
        range, so other regions cannot be touched and no extra boundary checks are needed.
        """
        self.logger.info("开始执行局部块写入操作...")

        # === Step 1: locate the anchors and extract the block ===
        start_pos = full_content.find(self.block_start)
        end_pos = full_content.find(self.block_end)
        if start_pos == -1 or end_pos == -1:
            raise ValueError(f"未找到锚点标记: {self.block_start} 或 {self.block_end}")
        if end_pos <= start_pos:
            raise ValueError("结束锚点位于起始锚点之前")
        inner_start = start_pos + len(self.block_start)
        block_content = full_content[inner_start:end_pos]

        replacements = []  # (start_in_block, end_in_block, replacement)

        def remove_comments(text: str) -> str:
            text = re.sub(r'//.*$', '', text, flags=re.MULTILINE)
            text = re.sub(r'/\*.*?\*/', '', text, flags=re.DOTALL)
            return text.strip()

        # === Step 2: update the ENUM ===
        if new_enums:
            enum_pattern = rf'(enum\s+{re.escape(self.enum_name)}\s*\{{)([^}}]*)\}}\s*;'
            match = re.search(enum_pattern, block_content, re.DOTALL | re.IGNORECASE)
            if not match:
                raise ValueError(f"未找到枚举定义: {self.enum_name}")
            header = match.group(1)
            body_content = match.group(2)
            lines = [ln for ln in body_content.split('\n') if ln.strip()]
            last_line = lines[-1] if lines else ""
            indent_match = re.match(r'^(\s*)', last_line)
            line_indent = indent_match.group(1) if indent_match else " "
            clean_last = remove_comments(last_line)
            first_macro_match = re.search(r'RATE_SET_[A-Z0-9_]+', clean_last)
            eq_match = re.search(r'=\s*\d+', clean_last)
            target_eq_col = 30
            if first_macro_match and eq_match:
                raw_before_eq = last_line[:first_macro_match.start() + eq_match.start()]
                expanded_before_eq = raw_before_eq.expandtabs(4)
                target_eq_col = len(expanded_before_eq)
            new_body = body_content.rstrip()
            if not new_body.endswith(','):
                new_body += ','
            for enum_def in new_enums:
                macro_name = enum_def.split('=')[0].strip().split()[-1]
                value = enum_def.split('=')[1].strip().rstrip(',')
                current_len = len(macro_name.replace('\t', ' '))
                padding = max(1, target_eq_col - current_len)
                formatted = f"{macro_name}{' ' * padding}= {value}"
                visible_macros = len(re.findall(r'RATE_SET_[A-Z0-9_]+', remove_comments(last_line)))
                if visible_macros < self.MAX_ENUM_PER_LINE and last_line.strip():
                    insertion = f" {formatted},"
                    updated_last = last_line.rstrip() + insertion
                    new_body = body_content.rsplit(last_line, 1)[0] + updated_last
                    last_line = updated_last
                else:
                    prefix_padding = ' ' * max(0, len(line_indent.replace('\t', ' ')) - len(line_indent))
                    new_line = f"\n{line_indent}{prefix_padding}{formatted},"
                    new_body += new_line
                    last_line = new_line.strip()
            new_enum_code = f"{header}{new_body}\n}};"
            replacements.append((match.start(), match.end(), new_enum_code))
            self.logger.debug(f"计划更新 enum: 添加 {len(new_enums)} 项")
        # === Step 3: update the DATA array ===
        if new_data:
            data_pattern = rf'(static const unsigned char {re.escape(self.data_array_name)}\[\]\s*=\s*\{{)([^}}]*)(\}}\s*;)'
            match = re.search(data_pattern, block_content, re.DOTALL)
            if not match:
                raise ValueError(f"未找到 data 数组: {self.data_array_name}")
            header = match.group(1)
            body_content = match.group(2).strip()
            footer = match.group(3)
            lines = body_content.splitlines()
            last_line = lines[-1] if lines else ""
            indent_match = re.match(r'^(\s*)', last_line)
            line_indent = indent_match.group(1) if indent_match else " "
            new_body = body_content.rstrip()
            if not new_body.endswith(','):
                new_body += ','
            for i in range(0, len(new_data), self.MAX_DATA_ITEMS_PER_LINE):
                chunk = new_data[i:i + self.MAX_DATA_ITEMS_PER_LINE]
                line = "\n" + line_indent + ", ".join(chunk) + ","
                new_body += line
            new_data_code = f"{header}{new_body}\n{footer}"
            replacements.append((match.start(), match.end(), new_data_code))
            self.logger.debug(f"计划更新 data 数组: 添加 {len(new_data)} 个元素")

        # === Step 4: update the INDEX array ===
        if new_indices:
            index_pattern = rf'(static const unsigned short {re.escape(self.index_array_name)}\[\]\s*=\s*\{{)([^}}]*)(\}}\s*;)'
            match = re.search(index_pattern, block_content, re.DOTALL)
            if not match:
                raise ValueError(f"未找到 index 数组: {self.index_array_name}")
            header = match.group(1)
            body_content = match.group(2).strip()
            footer = match.group(3)
            lines = body_content.splitlines()
            last_line = lines[-1] if lines else ""
            indent_match = re.match(r'^(\s*)', last_line)
            line_indent = indent_match.group(1) if indent_match else " "
            new_body = body_content.rstrip()
            if not new_body.endswith(','):
                new_body += ','
            str_indices = [str(x) for x in new_indices]
            chunk_size = self.MAX_INDEX_ITEMS_PER_LINE
            for i in range(0, len(str_indices), chunk_size):
                chunk = str_indices[i:i + chunk_size]
                line = "\n" + line_indent + ", ".join(chunk) + ","
                new_body += line
            new_index_code = f"{header}{new_body}\n{footer}"
            replacements.append((match.start(), match.end(), new_index_code))
            self.logger.debug(f"计划更新 index 数组: 添加 {len(new_indices)} 个索引")

        # === Step 5: apply all replacements to block_content in reverse order ===
        if not replacements:
            self.logger.info("无任何变更需要写入")
            return full_content
        # Reverse order keeps the earlier offsets valid
        for start, end, r in sorted(replacements, key=lambda x: x[0], reverse=True):
            block_content = block_content[:start] + r + block_content[end:]

        # === Step 6: splice the block back into the full file ===
        final_content = (
            full_content[:inner_start] +
            block_content +
            full_content[end_pos:]
        )
        self.logger.info(f"成功构建新内容,总长度变化: {len(full_content)} → {len(final_content)}")
        return final_content
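    # Note: the "backup" name returned by run() below is recomputed with a fresh timestamp,
    # so it can differ from the actual .bak file name created in _safe_write_back().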
    def run(self):
        self.logger.info("开始同步 RATE_SET 数据...")
        try:
            changed = self.inject_new_data()
            if changed:
                print("✅ 同步完成")
            else:
                print("✅ 无新数据,无需更新")
            return {
                "success": True,
                "changed": changed,
                "file": str(self.c_file_path),
                "backup": f"{self.c_file_path.stem}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.c.bak" if changed and not self.dry_run else None
            }
        except Exception as e:
            self.logger.error(f"同步失败: {e}", exc_info=True)
            print("❌ 同步失败,详见日志。")
            return {"success": False, "error": str(e)}
def main():
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s [%(levelname)s] %(name)s: %(message)s',
        handlers=[
            logging.FileHandler(LOG_FILE, encoding='utf-8'),
            logging.StreamHandler(sys.stdout)
        ],
        force=True
    )
    dry_run = False  # set to True for a dry run
    try:
        sync = RateSetSynchronizer(dry_run=dry_run)
        sync.run()
        print("同步完成!")
    except FileNotFoundError as e:
        logging.error(f"文件未找到: {e}")
        print("❌ 文件错误,请检查路径。")
        sys.exit(1)
    except PermissionError as e:
        logging.error(f"权限错误: {e}")
        print("❌ 权限不足,请关闭编辑器或以管理员运行。")
        sys.exit(1)
    except Exception as e:
        logging.error(f"程序异常退出: {e}", exc_info=True)
        print("❌ 同步失败,详见日志。")
        sys.exit(1)


if __name__ == '__main__':
    main()
How can I make it so that no error is reported after self.logger.warning(f"❌ 处理文件失败 [{file_path.name}]: {e}", exc_info=True)?
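For context: the tracebacks in the console output above are printed by that warning call itself. Passing exc_info=True attaches the caught exception's traceback to the WARNING record even though the exception never propagates, which is why the process still finishes with exit code 0. A minimal, standalone sketch of that behavior (the names here are purely illustrative, not taken from the project):

import logging

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] %(name)s: %(message)s')
logger = logging.getLogger("demo")  # hypothetical logger name, for illustration only

try:
    # stand-in for the _write_back_in_blocks() call that fails in the real script
    raise ValueError("未找到枚举定义: rate_set_2g_20m_ext4")
except Exception as e:
    # exc_info=True tells logging to append the full traceback to this WARNING,
    # even though the exception is caught right here and never re-raised
    logger.warning(f"处理文件失败: {e}", exc_info=True)

print("execution continues normally; the script exits with code 0")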