Why Canonical Import Path Comments Are No Longer Necessary in Go

Since its release, Go has been known for its simple and efficient approach to package management. Before Go 1.11[1], the canonical import path comment was an important tool for preventing conflicting import paths for the same package. With the introduction of Go Modules[2], however, its role has steadily faded. So is the canonical import path comment still necessary at all? In this article I walk through the history and purpose of canonical import paths, run a backward-compatibility test under Go Modules, and discuss whether there is still any reason to keep using this comment.

1. What Is a Canonical Import Path Comment?

Go 1.4 added canonical import paths[3] to solve the problem of the same package being importable through multiple import paths. When code is hosted on a service such as github.com, the import path contains the hosting service's domain, for example "github.com/rsc/pdf". But a Go developer can also provide a "custom" or "vanity" import path[4] for the same package, such as rsc.io/pdf. That leaves two valid import paths, which causes the following problems:

  • The same program may import the same package through different paths, causing unnecessary duplication.

  • Users of a non-official path may miss package updates because the path is not recognized correctly.

  • Moving the package to another hosting service may break clients that still use the old path.

To address this, Go 1.4 introduced the canonical import path comment. Once the comment is added to the package declaration, the go command refuses to compile any program that imports the package through a non-canonical import path.

The syntax is simple: add the marker as a comment on the package declaration. For the rsc.io/pdf package, for example, the declaration becomes:

package pdf // import "rsc.io/pdf"

With this in place, the go command refuses to compile anything that imports the package as github.com/rsc/pdf, so the code can be moved freely without breaking its users.

2. Go Modules and Their Effect on Import Paths

Since Go 1.11 introduced Go Modules[5], Go manages a package's dependencies and versions through the go.mod file, which greatly simplifies package management. With the module's root path defined in go.mod, Go Modules automatically determines the import path of every package in the project, and each path is unique. This makes the canonical import path comment largely unnecessary in a Go Modules environment.

For example, suppose go.mod defines the following module path:

// go.mod
module rsc.io/pdf

Then the package located in the project root automatically resolves to the import path rsc.io/pdf, avoiding import path conflicts. With Go Modules doing this work, manually adding a canonical import path comment is no longer necessary.
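As a minimal sketch (the subdirectory and file names below are made up for illustration), every package's import path is simply the module path from go.mod plus the package's directory relative to the module root:

// hypothetical layout of the rsc.io/pdf module
//
//   go.mod            module rsc.io/pdf
//   read.go           package pdf   → imported as "rsc.io/pdf"
//   internal/x/x.go   package x     → imported as "rsc.io/pdf/internal/x"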

Go honors the Go 1 backward-compatibility promise, so what happens when a canonical import path comment is used under go modules? Let's take a look.

3. Using Canonical Import Path Comments Under Go Modules

Although Go Modules has simplified package management, many older projects still keep their canonical import path comments. To verify that keeping these comments remains compatible in a Go Modules environment, I ran the following tests (using several Go versions, including Go 1.23.0[6]).

In this test we keep the canonical import path comment in the project unchanged and see whether it affects compiling and running in a Go Modules environment.

Here we use the pdf package hosted at github.com/rsc/pdf directly; that package carries a canonical import path comment in its read.go file:

// https://github.com/rsc/pdf/blob/master/read.go
package pdf // import "rsc.io/pdf"

First, let's test importing the rsc.io/pdf package with a Go version older than 1.11. Those versions still use the GOPATH build mode, so we need to download github.com/rsc/pdf under $GOPATH/src first, because in GOPATH mode the compiler searches for dependency packages under GOPATH.
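One way to do that (an assumption about the setup; the article does not show this step) is to clone the repository straight into the GOPATH source tree:

$git clone https://github.com/rsc/pdf $GOPATH/src/github.com/rsc/pdf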

Next we create a demo1 directory and copy github.com/rsc/pdf/pdfpasswd/main.go into it. That main.go imports "rsc.io/pdf"; we change it to import "github.com/rsc/pdf" instead:

// demo1/main.go

package main

import (
 "flag"
 "fmt"
 "log"
 "os"

 "github.com/rsc/pdf"
)

var (
 alphabet  = flag.String("a", "0123456789", "alphabet")
 maxLength = flag.Int("m", 4, "max length")
)

func usage() {
 fmt.Fprintf(os.Stderr, "usage: pdfpasswd [-a alphabet] [-m maxlength] file\n")
 os.Exit(2)
}

func main() {
 log.SetFlags(0)
 log.SetPrefix("pdfpasswd: ")

 flag.Usage = usage
 flag.Parse()
 if flag.NArg() != 1 {
  usage()
 }

 f, err := os.Open(flag.Arg(0))
 if err != nil {
  log.Fatal(err)
 }

 last := ""
 alpha := *alphabet
 ctr := make([]int, *maxLength)
 pw := func() string {
  inc(ctr, len(alpha)+1)
  for !valid(ctr) {
   inc(ctr, len(alpha)+1)
  }
  if done(ctr) {
   return ""
  }
  buf := make([]byte, len(ctr))
  var i int
  for i = 0; i < len(buf); i++ {
   if ctr[i] == 0 {
    break
   }
   buf[i] = alpha[ctr[i]-1]
  }
  last = string(buf[:i])
  println(last)
  return last
 }
 st, err := f.Stat()
 if err != nil {
  log.Fatal(err)
 }
 _, err = pdf.NewReaderEncrypted(f, st.Size(), pw)
 if err != nil {
  if err == pdf.ErrInvalidPassword {
   log.Fatal("password not found")
  }
  log.Fatal("reading pdf: %v", err)
 }
 fmt.Printf("password: %q\n", last)
}

func inc(ctr []int, n int) {
 for i := 0; i < len(ctr); i++ {
  ctr[i]++
  if ctr[i] < n {
   break
  }
  ctr[i] = 0
 }
}

func done(ctr []int) bool {
 for _, x := range ctr {
  if x != 0 {
   return false
  }
 }
 return true
}

func valid(ctr []int) bool {
 i := len(ctr)
 for i > 0 && ctr[i-1] == 0 {
  i--
 }
 for i--; i >= 0; i-- {
  if ctr[i] == 0 {
   return false
  }
 }
 return true
}

Then we compile this main.go with Go 1.10.8 and get the following result:

$go run main.go
main.go:9:2: code in directory /Users/tonybai/Go/src/github.com/rsc/pdf expects import "rsc.io/pdf"

We can see that Go versions before 1.11 check the canonical import path declared by the pdf package: if the actual import path (github.com/rsc/pdf) does not match it, the Go compiler reports an error!
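For comparison (an assumed fix, not something run in the article), the GOPATH-mode build is accepted once the package lives at, and is imported through, its canonical path, e.g. by fetching it via the vanity path and keeping the original import "rsc.io/pdf" in main.go:

$go get rsc.io/pdf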

Next, let's look at the build result after switching to go module mode, this time using Go 1.12.7. We create a go.mod file:

// demo1/go.mod
module demo1

go 1.12
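As a side note (an assumption about the setup; the article simply shows the finished file), the same go.mod can be produced with go mod init, and since Go 1.12 defaults GO111MODULE to auto, module mode may need to be forced when demo1 sits inside GOPATH/src:

$export GO111MODULE=on
$go mod init demo1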

Build and run main.go:

$go run main.go
go: finding github.com/rsc/pdf v0.1.1
go: downloading github.com/rsc/pdf v0.1.1
go: extracting github.com/rsc/pdf v0.1.1
usage: pdfpasswd [-a alphabet] [-m maxlength] file
exit status 2

We can see that go 1.12.7 compiles and runs main.go successfully, even though it does not use the canonical import path to import the pdf package.

Compiling and running with the latest Go 1.23.0 works fine as well:

$go run main.go
usage: pdfpasswd [-a alphabet] [-m maxlength] file
exit status 2

From this we can conclude that in go module mode, the Go compiler no longer checks the canonical import path of imported packages.

Moreover, main.go can even import rsc.io/pdf and github.com/rsc/pdf at the same time without any problem:

import (
    "flag"
    "fmt"
    "log"
    "os"

    "github.com/rsc/pdf"
    _ "rsc.io/pdf"
)
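Under that setup, go mod tidy simply records the two paths as two separate module requirements; the resulting go.mod would look roughly like this (a sketch, with the rsc.io/pdf version assumed):

// demo1/go.mod after adding both imports (illustrative)
module demo1

go 1.12

require (
 github.com/rsc/pdf v0.1.1
 rsc.io/pdf v0.1.1
)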

This works because github.com/rsc/pdf has no go.mod, so the go toolchain cannot tell that github.com/rsc/pdf and rsc.io/pdf are the same package. Now let's look at a uber-go/zap example:

package main

import (
 "fmt"

 _ "github.com/uber-go/zap"
 _ "go.uber.org/zap"
)

func main() {
 fmt.Println("hello, zap!")
}

Running go mod tidy in the go module that contains this main.go gives the following error:

$go mod tidy
go: finding module for package go.uber.org/zap
go: finding module for package github.com/uber-go/zap
go: downloading go.uber.org/zap v1.27.0
go: downloading github.com/uber-go/zap v1.27.0
go: found github.com/uber-go/zap in github.com/uber-go/zap v1.27.0
go: found go.uber.org/zap in go.uber.org/zap v1.27.0
go: demo imports
 github.com/uber-go/zap: github.com/uber-go/zap@v1.27.0: parsing go.mod:
 module declares its path as: go.uber.org/zap
         but was required as: github.com/uber-go/zap

We can see that the go command detected that the go module under the github.com/uber-go/zap repository is go.uber.org/zap, so go.uber.org/zap is the only import path we can use for the zap package.
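That check comes from the module directive in zap's own go.mod, which now plays the role the canonical import path comment used to play (excerpt, only the relevant line shown):

// go.mod in the github.com/uber-go/zap repository (excerpt)
module go.uber.org/zap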

4. Should Canonical Import Path Comments Be Removed?

With Go Modules now the default way to manage packages in Go projects, canonical import path comments have become redundant. Keeping them does not cause compatibility problems, but removing them makes the code a bit cleaner and sheds unnecessary historical baggage.

For older projects that have already migrated to Go Modules, developers can consider gradually removing the canonical import path comments. For new projects there is no need to add them at all: Go Modules is powerful enough to manage package paths and dependencies. If a project's users still rely on older Go toolchains in GOPATH mode, keeping the comment can serve as a safety net.
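Concretely, dropping the comment is a one-line change per package, because the module path in go.mod already does the enforcement (shown here on the rsc.io/pdf example from above):

// before: canonical path enforced by the comment
package pdf // import "rsc.io/pdf"

// after: canonical path enforced by "module rsc.io/pdf" in go.mod
package pdf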

5. Summary

The canonical import path comment was introduced in Go 1.4 to solve package path conflicts and package relocation. With the arrival of Go Modules, however, package management and path control have been largely automated, and the comment no longer serves a real purpose. For modern Go projects, developers should consider removing this redundant comment: it is a small simplification that also reflects how package management in the Go ecosystem has evolved, keeping projects aligned with today's Go development environment.



References

[1] Go 1.11: https://tonybai.com/2018/11/19/some-changes-in-go-1-11/
[2] Go Modules: https://tonybai.com/tag/gomodule
[3] Canonical import paths added in Go 1.4: https://go.dev/doc/go1.4#canonicalimports
[4] "Custom" or "vanity" import paths: https://tonybai.com/2020/11/15/another-approach-to-customize-package-import-path/
[5] Go 1.11 introduced Go Modules: https://tonybai.com/2018/07/15/hello-go-module/
[6] Go 1.23.0: https://tonybai.com/2024/08/19/some-changes-in-go-1-23/


还有 {len(other_adds) + len(other_dels) - 20} 处未显示\n") f.write(f"\n") f.write(f"📌 输出目录: {output_dir}\n") f.write(f"备份文件: {Path(self.target_c_file).with_suffix('.c.bak')}\n") f.write(f"========================================\n") print(f"📄 已保存变更日志 → {log_path}") def save_channel_set_map_to_config(self): """将当前 channel_set_map 写回 config.json 的 channel_set_map 字段""" try: # 🔽 清理:只保留 fallback 类型的 RANGE(可正则匹配) valid_keys = [ k for k in self.channel_set_map.keys() if re.match(r'RANGE_[\dA-Z]+_\d+M_\d+_\d+', k) # 如 RANGE_2G_20M_1_11 ] filtered_map = {k: v for k, v in self.channel_set_map.items() if k in valid_keys} # 🔼 更新主配置中的字段 self.config["channel_set_map"] = filtered_map # 使用过滤后的版本 with open(self.config_file_path, 'w', encoding='utf-8') as f: json.dump(self.config, f, indent=4, ensure_ascii=False) print(f"💾 已成功将精简后的 channel_set_map 写回配置文件: {filtered_map}") except Exception as e: print(f"❌ 写入配置文件失败: {e}") raise def convert(self, file_path): # =============== 🔐 每次都更新备份 C 文件 =============== c_source = Path(self.target_c_file) c_backup = c_source.with_suffix(c_source.suffix + ".bak") if not c_source.exists(): raise FileNotFoundError(f"目标 C 文件不存在: {c_source}") ext = os.path.splitext(file_path)[-1].lower() if ext == ".xlsx": wb = load_workbook(file_path, data_only=True) sheets = [{"sheet": ws, "format": "xlsx"} for ws in wb.worksheets] elif ext == ".xls": wb = xlrd.open_workbook(file_path) sheets = [{"sheet": ws, "format": "xls"} for ws in wb.sheets()] else: raise ValueError("仅支持 .xls 或 .xlsx 文件") for i, ws_obj in enumerate(sheets): sheet_name = wb.sheet_names()[i] if ext == ".xls" else ws_obj["sheet"].title config = self.match_sheet_to_config(sheet_name) if config: self.convert_sheet_with_config(ws_obj, sheet_name, config) self.generate_outputs() def parse_excel(self): """ 【UI 兼容】供 PyQt UI 调用的入口方法 将当前 self.input_file 中的数据解析并填充到 tx_limit_entries """ if not hasattr(self, 'input_file') or not self.input_file: raise ValueError("未设置 input_file 属性!") if not os.path.exists(self.input_file): raise FileNotFoundError(f"文件不存在: {self.input_file}") print(f"📊 开始解析 Excel 文件: {self.input_file}") try: self.convert(self.input_file) # 调用已有逻辑 print(f"✅ Excel 解析完成,共生成 {len(self.tx_limit_entries)} 条 TX 限幅记录") except Exception as e: print(f"❌ 解析失败: {e}") raise if __name__ == "__main__": import argparse import os # 切换到脚本所在目录 script_dir = os.path.dirname(__file__) os.chdir(script_dir) # 定义命令行参数解析器 parser = argparse.ArgumentParser(description="Convert Excel to CLM C code.") parser.add_argument( "input", nargs="?", default="input/Archer BE900US 2.xlsx", help="Input Excel file (default: input/Archer BE900US 2.xlsx)" ) parser.add_argument( "--config", default="config/config.json", help="Path to config.json (default: config/config.json)" ) parser.add_argument( "--output-dir", default="output", help="Output directory (default: output)" ) parser.add_argument( "--locale-id", default=None, help='Locale ID, e.g., "US", "CN-2G" (default: from config or "DEFAULT")' ) parser.add_argument( "--display-name", default=None, help='Display name in generated code, e.g., "FCC_Core" (default: derived from locale_id)' ) args = parser.parse_args() # 创建转换器实例,并传入所有参数 converter = ExcelToCLMConverter( config_path=args.config, output_dir=args.output_dir, locale_id=args.locale_id, locale_display_name=args.display_name ) # 执行转换 converter.convert(args.input) 找你你说的需要修改的地方
10-15
# clm_generator/excel_to_clm.py import os from datetime import datetime import re import json from pathlib import Path from openpyxl import load_workbook import xlrd from jinja2 import Template from collections import defaultdict from utils import resource_path, get_output_dir from pathlib import Path class ExcelToCLMConverter: def __init__(self, config_path="config/config.json", output_dir="output", locale_id=None, locale_display_name=None): self.last_generated_manifest = None self.used_ranges = [] # 改成 list 更合理(保持顺序) # === Step 1: 使用 resource_path 解析所有路径 === self.config_file_path = resource_path(config_path) if not os.path.exists(self.config_file_path): raise FileNotFoundError(f"配置文件不存在: {self.config_file_path}") with open(self.config_file_path, 'r', encoding='utf-8') as f: self.config = json.load(f) print(f" 配置文件已加载: {self.config_file_path}") # === 计算项目根目录:config 文件所在目录的父级 === project_root = Path(self.config_file_path).parent.parent # 即 F:\issue # === Step 2: 处理 target_c_file === rel_c_path = self.config.get("target_c_file", "input/wlc_clm_data_6726b0.c") self.target_c_file = resource_path(rel_c_path) if not os.path.exists(self.target_c_file): raise FileNotFoundError(f"配置中指定的 C 源文件不存在: {self.target_c_file}") print(f" 已定位目标 C 文件: {self.target_c_file}") # === Step 3: 初始化输出目录 === if os.path.isabs(output_dir): self.output_dir = output_dir else: self.output_dir = str(get_output_dir()) Path(self.output_dir).mkdir(parents=True, exist_ok=True) print(f" 输出目录: {self.output_dir}") # === Step 4: locale 设置 === self.locale_id = locale_id or self.config.get("DEFAULT_LOCALE_ID", "DEFAULT") self.locale_display_name = ( locale_display_name or self.config.get("DEFAULT_DISPLAY_NAME") or self.locale_id.replace('-', '_').upper() ) # === Step 5: channel_set_map 加载 === persisted_map = self.config.get("channel_set_map") if persisted_map is None: raise KeyError("配置文件缺少必需字段 'channel_set_map'") if not isinstance(persisted_map, dict): raise TypeError(f"channel_set_map 必须是字典类型,当前类型: {type(persisted_map)}") self.channel_set_map = {str(k): int(v) for k, v in persisted_map.items()} print(f" 成功加载 channel_set_map (共 {len(self.channel_set_map)} 项): {dict(self.channel_set_map)}") # === 初始化数据容器 === self.tx_power_data = [] self.tx_limit_entries = [] self.eirp_entries = [] self.global_ch_min = None self.global_ch_max = None self.generated_ranges = [] def reset(self): """重置所有运行时数据,便于多次生成""" self.tx_limit_entries.clear() self.eirp_entries.clear() self.used_ranges.clear() self.tx_power_data.clear() # 同步清空 self.generated_ranges.clear() self.last_generated_manifest = None print(" 所有生成数据已重置") # ==================== 新增工具方法:大小写安全查询 ==================== def _ci_get(self, data_dict, key): """ Case-insensitive 字典查找 """ for k, v in data_dict.items(): if k.lower() == key.lower(): return v return None def _ci_contains(self, data_list, item): """ Case-insensitive 判断元素是否在列表中 """ return any(x.lower() == item.lower() for x in data_list) def parse_mode_cell(self, cell_value): if not cell_value: return None val = str(cell_value).strip() val = re.sub(r'\s+', ' ', val.replace('\n', ' ').replace('\r', ' ')) val_upper = val.upper() found_modes = [] # 改进:使用 match + 允许后续内容(比如 20M),不要求全匹配 if re.match(r'^11AC\s*/\s*AX', val_upper) or re.match(r'^11AX\s*/\s*AC', val_upper): found_modes = ['11AC', '11AX'] print(f" 解析复合模式 '{val}' → {found_modes}") # ======== 一般情况:正则匹配标准模式 ======== else: mode_patterns = [ (r'\b11BE\b|\bEHT\b', '11BE'), (r'\b11AX\b|\bHE\b', '11AX'), (r'\b11AC\b|\bVHT\b', '11AC'), (r'\b11N\b|\bHT\b', '11N'), (r'\b11G\b|\bERP\b', '11G'), 
(r'\b11B\b|\bDSSS\b|\bCCK\b', '11B') ] for pattern, canonical in mode_patterns: if re.search(pattern, val_upper) and canonical not in found_modes: found_modes.append(canonical) # ======== 提取带宽 ======== bw_match = re.search(r'(20|40|80|160)\s*(?:MHZ|M)?\b', val_upper) bw = bw_match.group(1) if bw_match else None # fallback 带宽 if not bw: if all(m in ['11B', '11G'] for m in found_modes): bw = '20' else: bw = '20' if not found_modes: print(f" 无法识别物理模式: '{cell_value}'") return None return { "phy_mode_list": found_modes, "bw": bw } def format_phy_mode(self, mode: str) -> str: """ 自定义物理层模式输出格式: - 11B/G/N 输出为小写:11b / 11g / 11n - 其他保持原样(如 11AC, 11BE) """ return { '11B': '11b', '11G': '11g', '11N': '11n' }.get(mode, mode) def col_to_letter(self, col): col += 1 result = "" while col > 0: col -= 1 result = chr(col % 26 + ord('A')) + result col //= 26 return result def is_valid_power(self, value): try: float(value) return True except (ValueError, TypeError): return False def get_cell_value(self, ws_obj, row_idx, col_idx): fmt = ws_obj["format"] if fmt == "xls": return str(ws_obj["sheet"].cell_value(row_idx, col_idx)).strip() else: cell = ws_obj["sheet"].cell(row=row_idx + 1, column=col_idx + 1) val = cell.value return str(val).strip() if val is not None else "" def find_table_header_row(self, ws_obj): """查找包含 'Mode' 和 'Rate' 的表头行""" fmt = ws_obj["format"] ws = ws_obj["sheet"] for r in range(15): mode_col = rate_col = None if fmt == "xlsx": if r + 1 > ws.max_row: continue for c in range(1, ws.max_column + 1): cell = ws.cell(row=r + 1, column=c) if not cell.value: continue val = str(cell.value).strip() if val == "Mode": mode_col = c elif val == "Rate": rate_col = c if mode_col and rate_col and abs(mode_col - rate_col) == 1: print(f" 找到表头行: 第 {r+1} 行") return r, mode_col - 1, rate_col - 1 # 转为 0-based else: if r >= ws.nrows: continue for c in range(ws.ncols): val = ws.cell_value(r, c) if not val: continue val = str(val).strip() if val == "Mode": mode_col = c elif val == "Rate": rate_col = c if mode_col and rate_col and abs(mode_col - rate_col) == 1: print(f" 找到表头行: 第 {r+1} 行") return r, mode_col, rate_col return None, None, None def find_auth_power_above_row(self, ws_obj, start_row): """查找 '认证功率' 所在的合并单元格及其列范围""" fmt = ws_obj["format"] ws = ws_obj["sheet"] print(f" 开始向上查找 '认证功率',扫描第 0 ~ {start_row} 行...") if fmt == "xlsx": for mr in ws.merged_cells.ranges: top_left = ws.cell(row=mr.min_row, column=mr.min_col) val = str(top_left.value) if top_left.value else "" if "证功率" in val or "Cert" in val: r_idx = mr.min_row - 1 if r_idx <= start_row: start_col = mr.min_col - 1 end_col = mr.max_col - 1 print(f" 发现合并单元格含 '证功率': '{val}' → {self.col_to_letter(start_col)}{mr.min_row}") return start_col, end_col, r_idx # fallback:普通单元格 for r in range(start_row + 1): for c in range(1, ws.max_column + 1): cell = ws.cell(row=r + 1, column=c) if cell.value and ("证功率" in str(cell.value)): print(f" 普通单元格发现 '证功率': '{cell.value}' @ R{r+1}C{c}") return c - 1, c - 1, r else: for r in range(min(ws.nrows, start_row + 1)): for c in range(ws.ncols): val = ws.cell_value(r, c) if val and ("证功率" in str(val)): print(f" 发现 '证功率': '{val}' @ R{r+1}C{c+1}") return c, c, r return None, None, None def parse_ch_columns_under_auth(self, ws_obj, ch_row_idx, auth_start_col, auth_end_col): """ 只解析位于 [auth_start_col, auth_end_col] 区间内的 CHx 列 """ fmt = ws_obj["format"] ws = ws_obj["sheet"] ch_map = {} print(f" 解析 CH 行(第 {ch_row_idx + 1} 行),限定列范围: Col {auth_start_col} ~ {auth_end_col}") if fmt == "xlsx": for c in range(auth_start_col, auth_end_col + 1): 
cell = ws.cell(row=ch_row_idx + 1, column=c + 1) val = self.get_cell_value(ws_obj, ch_row_idx, c) match = re.search(r"CH(\d+)", val, re.I) if match: ch_num = int(match.group(1)) ch_map[ch_num] = c print(f" 发现 CH{ch_num} @ Col{c}") else: for c in range(auth_start_col, auth_end_col + 1): val = self.get_cell_value(ws_obj, ch_row_idx, c) match = re.search(r"CH(\d+)", val, re.I) if match: ch_num = int(match.group(1)) ch_map[ch_num] = c print(f" 发现 CH{ch_num} @ Col{c}") if not ch_map: print(" 在指定区域内未找到任何 CHx 列") else: chs = sorted(ch_map.keys()) print(f" 成功提取 CH{min(chs)}-{max(chs)} 共 {len(chs)} 个信道") return ch_map def encode_power(self, dbm): return int(round((float(dbm) + 1.5) * 4)) def merge_consecutive_channels(self, ch_list): if not ch_list: return [] sorted_ch = sorted(ch_list) ranges = [] start = end = sorted_ch[0] for ch in sorted_ch[1:]: if ch == end + 1: end = ch else: ranges.append((start, end)) start = end = ch ranges.append((start, end)) return ranges def collect_tx_limit_data(self, ws_obj, sheet_config, header_row_idx, auth_row, auth_start, auth_end, mode_col, rate_col): ch_row_idx = auth_row + 2 nrows = ws_obj["sheet"].nrows if ws_obj["format"] == "xls" else ws_obj["sheet"].max_row if ch_row_idx >= nrows: print(f" CH 行 ({ch_row_idx + 1}) 超出范围") return [] # 提取认证功率下方的 CH 列映射 ch_map = self.parse_ch_columns_under_auth(ws_obj, ch_row_idx, auth_start, auth_end) if not ch_map: return [] entries = [] row_mode_info = {} # {row_index: parsed_mode_info} fmt = ws_obj["format"] ws = ws_obj["sheet"] # ======== 第一步:构建 row_mode_info —— 使用新解析器 ======== if fmt == "xlsx": merged_cells_map = {} for mr in ws.merged_cells.ranges: for r in range(mr.min_row - 1, mr.max_row): for c in range(mr.min_col - 1, mr.max_col): merged_cells_map[(r, c)] = mr for row_idx in range(header_row_idx + 1, nrows): cell_value = None is_merged = (row_idx, mode_col) in merged_cells_map if is_merged: mr = merged_cells_map[(row_idx, mode_col)] top_cell = ws.cell(row=mr.min_row, column=mr.min_col) cell_value = top_cell.value else: raw_cell = ws.cell(row=row_idx + 1, column=mode_col + 1) cell_value = raw_cell.value mode_info = self.parse_mode_cell(cell_value) if mode_info: if is_merged: mr = merged_cells_map[(row_idx, mode_col)] for r in range(mr.min_row - 1, mr.max_row): if header_row_idx < r < nrows: row_mode_info[r] = mode_info.copy() else: row_mode_info[row_idx] = mode_info.copy() else: for row_idx in range(header_row_idx + 1, ws.nrows): cell_value = self.get_cell_value(ws_obj, row_idx, mode_col) mode_info = self.parse_mode_cell(cell_value) if mode_info: row_mode_info[row_idx] = mode_info.copy() # ======== 第二步:生成条目======== for row_idx in range(header_row_idx + 1, nrows): mode_info = row_mode_info.get(row_idx) if not mode_info: continue bw_clean = mode_info["bw"] has_valid_power = False for ch, col_idx in ch_map.items(): power_val = self.get_cell_value(ws_obj, row_idx, col_idx) if self.is_valid_power(power_val): has_valid_power = True break if not has_valid_power: print(f" 跳过空行: 第 {row_idx + 1} 行(无任何有效功率值)") continue # ---- 遍历每个 phy_mode ---- for phy_mode in mode_info["phy_mode_list"]: formatted_mode = self.format_phy_mode(phy_mode) mode_key = f"{formatted_mode}_{bw_clean}M" # 改为大小写不敏感判断 if not self._ci_contains(sheet_config.get("modes", []), mode_key): print(f" 忽略不支持的模式: {mode_key}") continue # === 获取 rate_set 定义(可能是 str 或 list)=== raw_rate_set = self._ci_get(sheet_config["rate_set_map"], mode_key) if not raw_rate_set: print(f" 找不到 rate_set 映射: {mode_key}") continue # 统一转为 list 处理 if isinstance(raw_rate_set, str): 
rate_set_list = [raw_rate_set] elif isinstance(raw_rate_set, list): rate_set_list = raw_rate_set else: continue # 非法类型跳过 for rate_set_macro in rate_set_list: ch_count = 0 for ch, col_idx in ch_map.items(): power_val = self.get_cell_value(ws_obj, row_idx, col_idx) if not self.is_valid_power(power_val): continue try: power_dbm = float(power_val) except: continue encoded_power = self.encode_power(power_dbm) entries.append({ "ch": ch, "power_dbm": round(power_dbm, 2), "encoded_power": encoded_power, "rate_set_macro": rate_set_macro, # <<< 每个 macro 单独一条记录 "mode": phy_mode, "bw": bw_clean, "src_row": row_idx + 1, "band": sheet_config["band"] }) ch_count += 1 print( f"📊 已采集第 {row_idx + 1} 行 → {formatted_mode} {bw_clean}M, {ch_count} 个信道, 使用宏: {rate_set_macro}" ) return entries def compress_tx_limit_entries(self, raw_entries, sheet_config): """ 压缩TX限制条目。 Args: raw_entries (list): 原始条目列表。 sheet_config (dict): Excel表格配置字典。 Returns: list: 压缩后的条目列表。 """ from collections import defaultdict modes_order = sheet_config["modes"] # 构建小写映射用于排序(key: "11n_20M") mode_lower_to_index = {mode.lower(): idx for idx, mode in enumerate(modes_order)} range_template = sheet_config["range_macro_template"] group_key = lambda e: (e["encoded_power"], e["rate_set_macro"]) groups = defaultdict(list) for e in raw_entries: groups[group_key(e)].append(e) compressed = [] for (encoded_power, rate_set_macro), entries_in_group in groups.items(): first = entries_in_group[0] power_dbm = first["power_dbm"] mode = first["mode"] # 如 '11N' bw = first["bw"] # 如 '20' 或 '40' ch_list = sorted(e["ch"] for e in entries_in_group) for start, end in self.merge_consecutive_channels(ch_list): range_macro = range_template.format( band=sheet_config["band"], bw=bw, start=start, end=end ) # === 新增:查找或分配 CHANNEL_SET_ID === assigned_id = -1 # 表示:这不是 regulatory 范围,无需映射 # === 新增:记录到 generated_ranges === segment_ch_list = list(range(start, end + 1)) self._record_generated_range( range_macro=range_macro, band=sheet_config["band"], bw=bw, ch_start=start, ch_end=end, channels=segment_ch_list ) # 格式化物理层模式(如 '11N' -> '11n') formatted_mode = self.format_phy_mode(mode) # 构造 mode_key 用于查找排序优先级 mode_key = f"{formatted_mode}_{bw}M" mode_order_idx = mode_lower_to_index.get(mode_key.lower(), 999) # 生成注释 comment = f"/* {power_dbm:5.2f}dBm, CH{start}-{end}, {formatted_mode} @ {bw}MHz */" # 新增:生成该段落的实际信道列表 segment_ch_list = list(range(start, end + 1)) compressed.append({ "encoded_power": encoded_power, "range_macro": range_macro, "rate_set_macro": rate_set_macro, "comment": comment, "_mode_order": mode_order_idx, "bw": bw, # 带宽数字(字符串) "mode": formatted_mode, # 统一格式化的模式名 "ch_start": start, "ch_end": end, "power_dbm": round(power_dbm, 2), "ch_list": segment_ch_list, # 关键!用于 global_ch_min/max 统计 }) # 排序后删除临时字段 compressed.sort(key=lambda x: x["_mode_order"]) for item in compressed: del item["_mode_order"] return compressed def _record_generated_range(self, range_macro, band, bw, ch_start, ch_end, channels): """ 记录生成的 RANGE 宏信息,供后续输出 manifest 使用 """ self.generated_ranges.append({ "range_macro": range_macro, "band": band, "bandwidth": int(bw), "channels": sorted(channels), "start_channel": ch_start, "end_channel": int(ch_end), "source_sheet": getattr(self, 'current_sheet_name', 'unknown') }) def clean_sheet_name(self, name): cleaned = re.sub(r'[^\w\.\=\u4e00-\u9fa5]', '', str(name)) return cleaned def match_sheet_to_config(self, sheet_name): cleaned = self.clean_sheet_name(sheet_name) for cfg in self.config["sheets"]: for pat in cfg["pattern"]: if re.search(pat, cleaned, re.I): 
print(f" '{sheet_name}' → 清洗后: '{cleaned}'") print(f" 匹配成功!'{sheet_name}' → [{cfg['band']}] 配置") return cfg print(f" '{sheet_name}' → 清洗后: '{cleaned}'") print(f"未匹配到 '{sheet_name}' 的模式,跳过...") return None def convert_sheet_with_config(self, ws_obj, sheet_name, sheet_config): self.current_sheet_name = sheet_name # 设置当前 sheet 名,供 _record_generated_range 使用 header_row_idx, mode_col, rate_col = self.find_table_header_row(ws_obj) if header_row_idx is None: print(f" 跳过 '{sheet_name}':未找到 'Mode' 和 'Rate'") return auth_start, auth_end, auth_row = self.find_auth_power_above_row(ws_obj, header_row_idx) if auth_start is None: print(f" 跳过 '{sheet_name}':未找到 '认证功率'") return raw_entries = self.collect_tx_limit_data( ws_obj, sheet_config, header_row_idx, auth_row, auth_start, auth_end, mode_col, rate_col ) if not raw_entries: print(f" 从 '{sheet_name}' 未收集到有效数据") return compressed = self.compress_tx_limit_entries(raw_entries, sheet_config) # 仅对 2.4G 频段进行信道边界统计 band = str(sheet_config.get("band", "")).strip().upper() if band in ["2G", "2.4G", "2.4GHZ", "BGN"]: # 执行信道统计 for entry in compressed: ch_range = entry.get("ch_list") or [] if not ch_range: continue ch_start = min(ch_range) ch_end = max(ch_range) # 更新全局最小最大值 if self.global_ch_min is None or ch_start < self.global_ch_min: self.global_ch_min = ch_start if self.global_ch_max is None or ch_end > self.global_ch_max: self.global_ch_max = ch_end # 强制打印当前状态 print(f" [Band={band}] 累计 2.4G 信道范围: CH{self.global_ch_min} – CH{self.global_ch_max}") self.tx_limit_entries.extend(compressed) print(f" 成功从 '{sheet_name}' 添加 {len(compressed)} 条压缩后 TX 限幅条目") # 可选调试输出 if band == "2G" and self.global_ch_min is not None: print(f" 当前累计 2.4G 信道范围: CH{self.global_ch_min} – CH{self.global_ch_max}") def render_from_template(self, template_path, context, output_path): """ 根据模板生成文件。 Args: template_path (str): 模板文件路径。 context (dict): 渲染模板所需的上下文数据。 output_path (str): 输出文件的路径。 Returns: None Raises: FileNotFoundError: 如果指定的模板文件不存在。 IOError: 如果在读取或写入文件时发生错误。 """ with open(template_path, 'r', encoding='utf-8') as f: template = Template(f.read()) content = template.render(**context) os.makedirs(os.path.dirname(output_path), exist_ok=True) with open(output_path, 'w', encoding='utf-8') as f: f.write(content) print(f" 已生成: {output_path}") def generate_outputs(self, finalize_manifest=True): print(" 正在执行 generate_outputs()...") if not self.tx_limit_entries: print(" 无 TX 限幅数据可输出") return # === Step 1: 使用 "HT" 分类 entries === normal_entries = [] ht_entries = [] for e in self.tx_limit_entries: macro = e.get("rate_set_macro", "") if "HT" in macro: ht_entries.append(e) else: normal_entries.append(e) print(f" 自动分类结果:") print(f" ├─ Normal 模式(不含 HT): {len(normal_entries)} 条") print(f" └─ HT 模式(含 HT): {len(ht_entries)} 条") # === Step 2: 构建 g_tx_limit_normal 结构(按 bw 排序)=== def build_normal_structure(entries): grouped = defaultdict(list) for e in entries: bw = str(e["bw"]) grouped[bw].append(e) result = [] for bw in ["20", "40", "80", "160"]: if bw in grouped: sorted_entries = sorted(grouped[bw], key=lambda x: (x["ch_start"], x["encoded_power"])) result.append((bw, sorted_entries)) return result normal_struct = build_normal_structure(normal_entries) # === Step 3: 构建 g_tx_limit_ht 结构(严格顺序)=== def build_ht_structure(entries): groups = defaultdict(list) for e in entries: bw = str(e["bw"]) if "EXT4" in e["rate_set_macro"]: level = "ext4" elif "EXT" in e["rate_set_macro"]: level = "ext" else: level = "base" groups[(level, bw)].append(e) order = [ ("base", "20"), ("base", "40"), ("ext", "20"), ("ext", "40"), 
("ext4", "20"), ("ext4", "40") ] segments = [] active_segment_count = sum(1 for key in order if key in groups) for idx, (level, bw) in enumerate(order): key = (level, bw) if key not in groups: continue seg_entries = sorted(groups[key], key=lambda x: (x["ch_start"], x["encoded_power"])) count = len(seg_entries) header_flags = f"CLM_DATA_FLAG_WIDTH_{bw} | CLM_DATA_FLAG_MEAS_COND" if idx < active_segment_count - 1: header_flags += " | CLM_DATA_FLAG_MORE" if level != "base": header_flags += " | CLM_DATA_FLAG_FLAG2" segment = { "header_flags": header_flags, "count": count, "entries": seg_entries } if level == "ext": segment["flag2"] = "CLM_DATA_FLAG2_RATE_TYPE_EXT" elif level == "ext4": segment["flag2"] = "CLM_DATA_FLAG2_RATE_TYPE_EXT4" segments.append(segment) return segments ht_segments = build_ht_structure(ht_entries) # === Step 4: fallback range 和 CHANNEL_SET 自动创建逻辑 === channel_set_comment = "Fallback 2.4GHz channel set (default)" if self.global_ch_min is not None and self.global_ch_max is not None: fallback_range_macro = f"RANGE_2G_20M_{self.global_ch_min}_{self.global_ch_max}" fallback_ch_start = self.global_ch_min fallback_ch_end = self.global_ch_max # 待修改 print(f" 正在设置监管 fallback 范围: {fallback_range_macro}") fallback_channel_set_id = 1 self.channel_set_map[fallback_range_macro] = fallback_channel_set_id print(f" 已绑定监管 fallback: {fallback_range_macro} → CHANNEL_SET_{fallback_channel_set_id}") else: fallback_range_macro = "RANGE_2G_20M_1_11" fallback_ch_start = 1 fallback_ch_end = 11 fallback_channel_set_id = 1 self.channel_set_map[fallback_range_macro] = fallback_channel_set_id print(" 未检测到有效的 2.4G 信道范围,使用默认 fallback: RANGE_2G_20M_1_11 → CHANNEL_SET_1") # 待修改 # === Step 5: 渲染上下文集合 === timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S") locale_id_safe = self.locale_id.replace('-', '_') context_clm = { "locale_id": locale_id_safe, "eirp_entries": self.eirp_entries or [], "fallback_encoded_eirp": 30, "fallback_range_macro": fallback_range_macro, "fallback_ch_start": fallback_ch_start, "fallback_ch_end": fallback_ch_end, "entries_grouped_by_bw": normal_struct, } context_tables = { "timestamp": timestamp, "locale_id": locale_id_safe, "locale_display_name": self.locale_display_name, "normal_table": normal_struct, "ht_segments": ht_segments, "fallback_encoded_eirp": 30, "fallback_range_macro": fallback_range_macro, "fallback_ch_start": fallback_ch_start, "fallback_ch_end": fallback_ch_end, "fallback_channel_set_id": fallback_channel_set_id, "channel_set_comment": channel_set_comment, } # 确保输出目录存在 output_dir = Path(self.output_dir) output_dir.mkdir(parents=True, exist_ok=True) #待修改 # 分析 tx_limit_table.c 的变更 output_path = output_dir / "tx_limit_table.c" template_path = "templates/tx_limit_table.c.j2" # 待修改 # 读取原始文件内容(如果存在) original_lines = [] file_existed = output_path.exists() if file_existed: try: original_lines = output_path.read_text(encoding='utf-8').splitlines() except Exception as e: print(f" 无法读取旧文件 {output_path}: {e}") # 生成新内容 try: new_content = self.render_from_template_string( template_path=template_path, context=context_tables ) new_lines = new_content.splitlines() except Exception as e: print(f" 模板渲染失败 ({template_path}): {e}") raise # 比较差异并决定是否写入 if not file_existed: print(f" 将创建新文件: {output_path}") elif original_lines != new_lines: print(f" 检测到变更,将更新文件: {output_path}") else: print(f" 文件内容未变,跳过写入: {output_path}") # 即使不写也要继续后续流程(如 manifest) # 写入新内容(除非完全一致且已存在) if not file_existed or original_lines != new_lines: try: output_path.write_text(new_content, encoding='utf-8') print(f" 
已写入 → {output_path}") except Exception as e: print(f" 写入文件失败 {output_path}: {e}") raise #待修改 # === Step 6: 实际渲染其他模板文件 === try: self.render_from_template( "templates/clm_locale.c.j2", context_clm, str(output_dir / f"locale_{self.locale_id.lower()}.c") ) print(f" 生成 locale 文件 → locale_{self.locale_id.lower()}.c") self.render_from_template( "templates/clm_macros.h.j2", context_tables, str(output_dir / "clm_macros.h") ) print(f" 生成宏定义头文件 → clm_macros.h") except Exception as e: print(f" 模板渲染失败: {e}") raise # 待修改 # === Step 7: 生成 manifest.json(仅当 finalize_manifest=True)=== if finalize_manifest: used_range_macros = sorted(set(entry["range_macro"] for entry in self.tx_limit_entries)) self.used_ranges = used_range_macros manifest_data = { "timestamp": datetime.now().isoformat(), "locale_id": self.locale_id, "locale_display_name": self.locale_display_name, "used_ranges_count": len(used_range_macros), "used_ranges": used_range_macros } manifest_path = output_dir / "generated_ranges_manifest.json" try: with open(manifest_path, 'w', encoding='utf-8') as f: json.dump(manifest_data, f, indent=4, ensure_ascii=False) print(f" 已生成精简 manifest 文件: {manifest_path}") print(f" 共 {len(used_range_macros)} 个唯一 RANGE 宏被使用:") for macro in used_range_macros: print(f" - {macro}") self.last_generated_manifest = str(manifest_path) except Exception as e: print(f" 写入 manifest 失败: {e}") else: print(" 跳过 manifest 文件生成 (finalize_manifest=False)") # === Final Step: 保存 channel_set 映射配置 === self.save_channel_set_map_to_config() #待修改 # 最终总结 print(f" 所有输出文件生成完成。") print(f" 输出路径: {self.output_dir}") print(f" 功率表名称: {self.locale_display_name} ({self.locale_id})") # 待修改 def render_from_template_string(self, template_path, context): """仅渲染模板为字符串,不写入文件""" import jinja2 env = jinja2.Environment(loader=jinja2.FileSystemLoader(searchpath=".")) template = env.get_template(template_path) return template.render(**context) def log_changes_to_file(self, changes, output_dir, locale_id, total_entries): """将变更摘要写入日志文件""" log_dir = Path(output_dir) / "change_logs" log_dir.mkdir(exist_ok=True) # 使用时间戳生成唯一文件名 timestamp_str = datetime.now().strftime("%Y%m%d_%H%M%S") log_path = log_dir / f"change_{locale_id}_{timestamp_str}.log" timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S") with open(log_path, 'w', encoding='utf-8') as f: # 覆盖写入最新变更 f.write(f"========================================\n") f.write(f"CLM 变更日志\n") f.write(f"========================================\n") f.write(f"时间: {timestamp}\n") f.write(f"地区码: {locale_id}\n") f.write(f"总 TX 条目数: {total_entries}\n") f.write(f"\n") if not any(changes.values()): f.write(" 本次运行无任何变更,所有文件已是最新状态。\n") else: if changes['added_ranges']: f.write(f" 新增 RANGE ({len(changes['added_ranges'])}):\n") for r in sorted(changes['added_ranges']): f.write(f" → {r}\n") f.write(f"\n") if changes['removed_ranges']: f.write(f" 删除 RANGE ({len(changes['removed_ranges'])}):\n") for r in sorted(changes['removed_ranges']): f.write(f" → {r}\n") f.write(f"\n") if changes['modified_ranges']: f.write(f" 修改 RANGE ({len(changes['modified_ranges'])}):\n") for r in sorted(changes['modified_ranges']): f.write(f" → {r}\n") f.write(f"\n") other_adds = changes['other_additions'] other_dels = changes['other_deletions'] if other_adds or other_dels: f.write(f" 其他变更:\n") for line in other_adds[:10]: f.write(f" ➕ {line}\n") for line in other_dels[:10]: f.write(f" ➖ {line}\n") if len(other_adds) > 10 or len(other_dels) > 10: f.write(f" ... 
还有 {len(other_adds) + len(other_dels) - 20} 处未显示\n") f.write(f"\n") f.write(f" 输出目录: {output_dir}\n") f.write(f"备份文件: {Path(self.target_c_file).with_suffix('.c.bak')}\n") f.write(f"========================================\n") print(f" 已保存变更日志 → {log_path}") def save_channel_set_map_to_config(self): """将当前 channel_set_map 写回 config.json 的 channel_set_map 字段""" try: # 清理:只保留 fallback 类型的 RANGE(可正则匹配) valid_keys = [ k for k in self.channel_set_map.keys() if re.match(r'RANGE_[\dA-Z]+_\d+M_\d+_\d+', k) # 如 RANGE_2G_20M_1_11 ] filtered_map = {k: v for k, v in self.channel_set_map.items() if k in valid_keys} # 更新主配置中的字段 self.config["channel_set_map"] = filtered_map # 使用过滤后的版本 with open(self.config_file_path, 'w', encoding='utf-8') as f: json.dump(self.config, f, indent=4, ensure_ascii=False) print(f" 已成功将精简后的 channel_set_map 写回配置文件: {filtered_map}") except Exception as e: print(f" 写入配置文件失败: {e}") raise def convert(self, file_path): # =============== 每次都更新备份 C 文件 =============== c_source = Path(self.target_c_file) c_backup = c_source.with_suffix(c_source.suffix + ".bak") if not c_source.exists(): raise FileNotFoundError(f"目标 C 文件不存在: {c_source}") ext = os.path.splitext(file_path)[-1].lower() if ext == ".xlsx": wb = load_workbook(file_path, data_only=True) sheets = [{"sheet": ws, "format": "xlsx"} for ws in wb.worksheets] elif ext == ".xls": wb = xlrd.open_workbook(file_path) sheets = [{"sheet": ws, "format": "xls"} for ws in wb.sheets()] else: raise ValueError("仅支持 .xls 或 .xlsx 文件") for i, ws_obj in enumerate(sheets): sheet_name = wb.sheet_names()[i] if ext == ".xls" else ws_obj["sheet"].title config = self.match_sheet_to_config(sheet_name) if config: self.convert_sheet_with_config(ws_obj, sheet_name, config) self.generate_outputs() def parse_excel(self): """ 【UI 兼容】供 PyQt UI 调用的入口方法 将当前 self.input_file 中的数据解析并填充到 tx_limit_entries """ if not hasattr(self, 'input_file') or not self.input_file: raise ValueError("未设置 input_file 属性!") if not os.path.exists(self.input_file): raise FileNotFoundError(f"文件不存在: {self.input_file}") print(f" 开始解析 Excel 文件: {self.input_file}") try: self.convert(self.input_file) # 调用已有逻辑 print(f" Excel 解析完成,共生成 {len(self.tx_limit_entries)} 条 TX 限幅记录") except Exception as e: print(f" 解析失败: {e}") raise if __name__ == "__main__": import os # 切换到脚本所在目录(可选,根据实际需求) script_dir = os.path.dirname(__file__) os.chdir(script_dir) # 直接使用默认参数(或从其他地方获取) config_path = "config/config.json" output_dir = "output" locale_id = None # 或指定默认值,如 "DEFAULT" display_name = None # 或指定默认值 input_file = "input/Archer BE900US 2.xlsx" # 创建转换器实例并执行 converter = ExcelToCLMConverter( config_path=config_path, output_dir=output_dir, locale_id=locale_id, locale_display_name=display_name ) converter.convert(input_file) 找到打包后可能路径出错的地方
10-21
为什么在这个打包文件无法识别,但是在文件excel_to_clm就能执行# clm_generator/excel_to_clm.py import os from datetime import datetime import re import json from pathlib import Path from openpyxl import load_workbook import xlrd from jinja2 import Template from collections import defaultdict from utils import resource_path, get_output_dir from pathlib import Path class ExcelToCLMConverter: def __init__(self, config_path="config/config.json", output_dir="output", locale_id=None, locale_display_name=None): self.last_generated_manifest = None self.used_ranges = [] # 改成 list 更合理(保持顺序) # === Step 1: 使用 resource_path 解析所有路径 === self.config_file_path = resource_path(config_path) if not os.path.exists(self.config_file_path): raise FileNotFoundError(f"配置文件不存在: {self.config_file_path}") with open(self.config_file_path, 'r', encoding='utf-8') as f: self.config = json.load(f) print(f" 配置文件已加载: {self.config_file_path}") # === 计算项目根目录:config 文件所在目录的父级 === project_root = Path(self.config_file_path).parent.parent # 即 F:\issue # === Step 2: 处理 target_c_file === rel_c_path = self.config.get("target_c_file", "input/wlc_clm_data_6726b0.c") self.target_c_file = resource_path(rel_c_path) if not os.path.exists(self.target_c_file): raise FileNotFoundError(f"配置中指定的 C 源文件不存在: {self.target_c_file}") print(f" 已定位目标 C 文件: {self.target_c_file}") # === Step 3: 初始化输出目录 === if os.path.isabs(output_dir): self.output_dir = output_dir else: self.output_dir = str(get_output_dir()) Path(self.output_dir).mkdir(parents=True, exist_ok=True) print(f" 输出目录: {self.output_dir}") # === Step 4: locale 设置 === self.locale_id = locale_id or self.config.get("DEFAULT_LOCALE_ID", "DEFAULT") self.locale_display_name = ( locale_display_name or self.config.get("DEFAULT_DISPLAY_NAME") or self.locale_id.replace('-', '_').upper() ) # === Step 5: channel_set_map 加载 === persisted_map = self.config.get("channel_set_map") if persisted_map is None: raise KeyError("配置文件缺少必需字段 'channel_set_map'") if not isinstance(persisted_map, dict): raise TypeError(f"channel_set_map 必须是字典类型,当前类型: {type(persisted_map)}") self.channel_set_map = {str(k): int(v) for k, v in persisted_map.items()} print(f" 成功加载 channel_set_map (共 {len(self.channel_set_map)} 项): {dict(self.channel_set_map)}") # === 初始化数据容器 === self.tx_power_data = [] self.tx_limit_entries = [] self.eirp_entries = [] self.global_ch_min = None self.global_ch_max = None self.generated_ranges = [] def reset(self): """重置所有运行时数据,便于多次生成""" self.tx_limit_entries.clear() self.eirp_entries.clear() self.used_ranges.clear() self.tx_power_data.clear() # 同步清空 self.generated_ranges.clear() self.last_generated_manifest = None print(" 所有生成数据已重置") print(f"🔧 初始化完成。目标C文件: {self.target_c_file}") print(f"📁 输出目录: {self.output_dir}") print(f"🌐 Locale ID: {self.locale_id}") # ==================== 新增工具方法:大小写安全查询 ==================== def _ci_get(self, data_dict, key): """ Case-insensitive 字典查找 """ for k, v in data_dict.items(): if k.lower() == key.lower(): return v return None def _ci_contains(self, data_list, item): """ Case-insensitive 判断元素是否在列表中 """ return any(x.lower() == item.lower() for x in data_list) def parse_mode_cell(self, cell_value): if not cell_value: return None val = str(cell_value).strip() val = re.sub(r'\s+', ' ', val.replace('\n', ' ').replace('\r', ' ')) val_upper = val.upper() found_modes = [] # 改进:使用 match + 允许后续内容(比如 20M),不要求全匹配 if re.match(r'^11AC\s*/\s*AX', val_upper) or re.match(r'^11AX\s*/\s*AC', val_upper): found_modes = ['11AC', '11AX'] print(f" 解析复合模式 '{val}' → {found_modes}") # ======== 一般情况:正则匹配标准模式 ======== else: mode_patterns 
= [ (r'\b11BE\b|\bEHT\b', '11BE'), (r'\b11AX\b|\bHE\b', '11AX'), (r'\b11AC\b|\bVHT\b', '11AC'), (r'\b11N\b|\bHT\b', '11N'), (r'\b11G\b|\bERP\b', '11G'), (r'\b11B\b|\bDSSS\b|\bCCK\b', '11B') ] for pattern, canonical in mode_patterns: if re.search(pattern, val_upper) and canonical not in found_modes: found_modes.append(canonical) # ======== 提取带宽 ======== bw_match = re.search(r'(20|40|80|160)\s*(?:MHZ|M)?\b', val_upper) bw = bw_match.group(1) if bw_match else None # fallback 带宽 if not bw: if all(m in ['11B', '11G'] for m in found_modes): bw = '20' else: bw = '20' if not found_modes: print(f" 无法识别物理模式: '{cell_value}'") return None return { "phy_mode_list": found_modes, "bw": bw } def format_phy_mode(self, mode: str) -> str: """ 自定义物理层模式输出格式: - 11B/G/N 输出为小写:11b / 11g / 11n - 其他保持原样(如 11AC, 11BE) """ return { '11B': '11b', '11G': '11g', '11N': '11n' }.get(mode, mode) def col_to_letter(self, col): col += 1 result = "" while col > 0: col -= 1 result = chr(col % 26 + ord('A')) + result col //= 26 return result def is_valid_power(self, value): try: float(value) return True except (ValueError, TypeError): return False def get_cell_value(self, ws_obj, row_idx, col_idx): fmt = ws_obj["format"] if fmt == "xls": return str(ws_obj["sheet"].cell_value(row_idx, col_idx)).strip() else: cell = ws_obj["sheet"].cell(row=row_idx + 1, column=col_idx + 1) val = cell.value return str(val).strip() if val is not None else "" def find_table_header_row(self, ws_obj): """查找包含 'Mode' 和 'Rate' 的表头行""" fmt = ws_obj["format"] ws = ws_obj["sheet"] for r in range(15): mode_col = rate_col = None if fmt == "xlsx": if r + 1 > ws.max_row: continue for c in range(1, ws.max_column + 1): cell = ws.cell(row=r + 1, column=c) if not cell.value: continue val = str(cell.value).strip() if val == "Mode": mode_col = c elif val == "Rate": rate_col = c if mode_col and rate_col and abs(mode_col - rate_col) == 1: print(f" 找到表头行: 第 {r+1} 行") return r, mode_col - 1, rate_col - 1 # 转为 0-based else: if r >= ws.nrows: continue for c in range(ws.ncols): val = ws.cell_value(r, c) if not val: continue val = str(val).strip() if val == "Mode": mode_col = c elif val == "Rate": rate_col = c if mode_col and rate_col and abs(mode_col - rate_col) == 1: print(f" 找到表头行: 第 {r+1} 行") return r, mode_col, rate_col return None, None, None def find_auth_power_above_row(self, ws_obj, start_row): """查找 '认证功率' 所在的合并单元格及其列范围""" fmt = ws_obj["format"] ws = ws_obj["sheet"] print(f" 开始向上查找 '认证功率',扫描第 0 ~ {start_row} 行...") if fmt == "xlsx": for mr in ws.merged_cells.ranges: top_left = ws.cell(row=mr.min_row, column=mr.min_col) val = str(top_left.value) if top_left.value else "" if "证功率" in val or "Cert" in val: r_idx = mr.min_row - 1 if r_idx <= start_row: start_col = mr.min_col - 1 end_col = mr.max_col - 1 print(f" 发现合并单元格含 '证功率': '{val}' → {self.col_to_letter(start_col)}{mr.min_row}") return start_col, end_col, r_idx # fallback:普通单元格 for r in range(start_row + 1): for c in range(1, ws.max_column + 1): cell = ws.cell(row=r + 1, column=c) if cell.value and ("证功率" in str(cell.value)): print(f" 普通单元格发现 '证功率': '{cell.value}' @ R{r+1}C{c}") return c - 1, c - 1, r else: for r in range(min(ws.nrows, start_row + 1)): for c in range(ws.ncols): val = ws.cell_value(r, c) if val and ("证功率" in str(val)): print(f" 发现 '证功率': '{val}' @ R{r+1}C{c+1}") return c, c, r return None, None, None def parse_ch_columns_under_auth(self, ws_obj, ch_row_idx, auth_start_col, auth_end_col): """ 只解析位于 [auth_start_col, auth_end_col] 区间内的 CHx 列 """ fmt = ws_obj["format"] ws = ws_obj["sheet"] ch_map = {} 
print(f" 解析 CH 行(第 {ch_row_idx + 1} 行),限定列范围: Col {auth_start_col} ~ {auth_end_col}") if fmt == "xlsx": for c in range(auth_start_col, auth_end_col + 1): cell = ws.cell(row=ch_row_idx + 1, column=c + 1) val = self.get_cell_value(ws_obj, ch_row_idx, c) match = re.search(r"CH(\d+)", val, re.I) if match: ch_num = int(match.group(1)) ch_map[ch_num] = c print(f" 发现 CH{ch_num} @ Col{c}") else: for c in range(auth_start_col, auth_end_col + 1): val = self.get_cell_value(ws_obj, ch_row_idx, c) match = re.search(r"CH(\d+)", val, re.I) if match: ch_num = int(match.group(1)) ch_map[ch_num] = c print(f" 发现 CH{ch_num} @ Col{c}") if not ch_map: print(" 在指定区域内未找到任何 CHx 列") else: chs = sorted(ch_map.keys()) print(f" 成功提取 CH{min(chs)}-{max(chs)} 共 {len(chs)} 个信道") return ch_map def encode_power(self, dbm): return int(round((float(dbm) + 1.5) * 4)) def merge_consecutive_channels(self, ch_list): if not ch_list: return [] sorted_ch = sorted(ch_list) ranges = [] start = end = sorted_ch[0] for ch in sorted_ch[1:]: if ch == end + 1: end = ch else: ranges.append((start, end)) start = end = ch ranges.append((start, end)) return ranges def collect_tx_limit_data(self, ws_obj, sheet_config, header_row_idx, auth_row, auth_start, auth_end, mode_col, rate_col): ch_row_idx = auth_row + 2 nrows = ws_obj["sheet"].nrows if ws_obj["format"] == "xls" else ws_obj["sheet"].max_row if ch_row_idx >= nrows: print(f" CH 行 ({ch_row_idx + 1}) 超出范围") return [] # 提取认证功率下方的 CH 列映射 ch_map = self.parse_ch_columns_under_auth(ws_obj, ch_row_idx, auth_start, auth_end) if not ch_map: return [] entries = [] row_mode_info = {} # {row_index: parsed_mode_info} fmt = ws_obj["format"] ws = ws_obj["sheet"] # ======== 第一步:构建 row_mode_info —— 使用新解析器 ======== if fmt == "xlsx": merged_cells_map = {} for mr in ws.merged_cells.ranges: for r in range(mr.min_row - 1, mr.max_row): for c in range(mr.min_col - 1, mr.max_col): merged_cells_map[(r, c)] = mr for row_idx in range(header_row_idx + 1, nrows): cell_value = None is_merged = (row_idx, mode_col) in merged_cells_map if is_merged: mr = merged_cells_map[(row_idx, mode_col)] top_cell = ws.cell(row=mr.min_row, column=mr.min_col) cell_value = top_cell.value else: raw_cell = ws.cell(row=row_idx + 1, column=mode_col + 1) cell_value = raw_cell.value mode_info = self.parse_mode_cell(cell_value) if mode_info: if is_merged: mr = merged_cells_map[(row_idx, mode_col)] for r in range(mr.min_row - 1, mr.max_row): if header_row_idx < r < nrows: row_mode_info[r] = mode_info.copy() else: row_mode_info[row_idx] = mode_info.copy() else: for row_idx in range(header_row_idx + 1, ws.nrows): cell_value = self.get_cell_value(ws_obj, row_idx, mode_col) mode_info = self.parse_mode_cell(cell_value) if mode_info: row_mode_info[row_idx] = mode_info.copy() # ======== 第二步:生成条目======== for row_idx in range(header_row_idx + 1, nrows): mode_info = row_mode_info.get(row_idx) if not mode_info: continue bw_clean = mode_info["bw"] has_valid_power = False for ch, col_idx in ch_map.items(): power_val = self.get_cell_value(ws_obj, row_idx, col_idx) if self.is_valid_power(power_val): has_valid_power = True break if not has_valid_power: print(f" 跳过空行: 第 {row_idx + 1} 行(无任何有效功率值)") continue # ---- 遍历每个 phy_mode ---- for phy_mode in mode_info["phy_mode_list"]: formatted_mode = self.format_phy_mode(phy_mode) mode_key = f"{formatted_mode}_{bw_clean}M" # 改为大小写不敏感判断 if not self._ci_contains(sheet_config.get("modes", []), mode_key): print(f" 忽略不支持的模式: {mode_key}") continue # === 获取 rate_set 定义(可能是 str 或 list)=== raw_rate_set = 
self._ci_get(sheet_config["rate_set_map"], mode_key) if not raw_rate_set: print(f" 找不到 rate_set 映射: {mode_key}") continue # 统一转为 list 处理 if isinstance(raw_rate_set, str): rate_set_list = [raw_rate_set] elif isinstance(raw_rate_set, list): rate_set_list = raw_rate_set else: continue # 非法类型跳过 for rate_set_macro in rate_set_list: ch_count = 0 for ch, col_idx in ch_map.items(): power_val = self.get_cell_value(ws_obj, row_idx, col_idx) if not self.is_valid_power(power_val): continue try: power_dbm = float(power_val) except: continue encoded_power = self.encode_power(power_dbm) entries.append({ "ch": ch, "power_dbm": round(power_dbm, 2), "encoded_power": encoded_power, "rate_set_macro": rate_set_macro, # <<< 每个 macro 单独一条记录 "mode": phy_mode, "bw": bw_clean, "src_row": row_idx + 1, "band": sheet_config["band"] }) ch_count += 1 print( f"📊 已采集第 {row_idx + 1} 行 → {formatted_mode} {bw_clean}M, {ch_count} 个信道, 使用宏: {rate_set_macro}" ) return entries def compress_tx_limit_entries(self, raw_entries, sheet_config): """ 压缩TX限制条目。 Args: raw_entries (list): 原始条目列表。 sheet_config (dict): Excel表格配置字典。 Returns: list: 压缩后的条目列表。 """ from collections import defaultdict modes_order = sheet_config["modes"] # 构建小写映射用于排序(key: "11n_20M") mode_lower_to_index = {mode.lower(): idx for idx, mode in enumerate(modes_order)} range_template = sheet_config["range_macro_template"] group_key = lambda e: (e["encoded_power"], e["rate_set_macro"]) groups = defaultdict(list) for e in raw_entries: groups[group_key(e)].append(e) compressed = [] for (encoded_power, rate_set_macro), entries_in_group in groups.items(): first = entries_in_group[0] power_dbm = first["power_dbm"] mode = first["mode"] # 如 '11N' bw = first["bw"] # 如 '20' 或 '40' ch_list = sorted(e["ch"] for e in entries_in_group) for start, end in self.merge_consecutive_channels(ch_list): range_macro = range_template.format( band=sheet_config["band"], bw=bw, start=start, end=end ) # === 新增:查找或分配 CHANNEL_SET_ID === assigned_id = -1 # 表示:这不是 regulatory 范围,无需映射 # === 新增:记录到 generated_ranges === segment_ch_list = list(range(start, end + 1)) self._record_generated_range( range_macro=range_macro, band=sheet_config["band"], bw=bw, ch_start=start, ch_end=end, channels=segment_ch_list ) # 格式化物理层模式(如 '11N' -> '11n') formatted_mode = self.format_phy_mode(mode) # 构造 mode_key 用于查找排序优先级 mode_key = f"{formatted_mode}_{bw}M" mode_order_idx = mode_lower_to_index.get(mode_key.lower(), 999) # 生成注释 comment = f"/* {power_dbm:5.2f}dBm, CH{start}-{end}, {formatted_mode} @ {bw}MHz */" # 新增:生成该段落的实际信道列表 segment_ch_list = list(range(start, end + 1)) compressed.append({ "encoded_power": encoded_power, "range_macro": range_macro, "rate_set_macro": rate_set_macro, "comment": comment, "_mode_order": mode_order_idx, "bw": bw, # 带宽数字(字符串) "mode": formatted_mode, # 统一格式化的模式名 "ch_start": start, "ch_end": end, "power_dbm": round(power_dbm, 2), "ch_list": segment_ch_list, # 关键!用于 global_ch_min/max 统计 }) # 排序后删除临时字段 compressed.sort(key=lambda x: x["_mode_order"]) for item in compressed: del item["_mode_order"] return compressed def _record_generated_range(self, range_macro, band, bw, ch_start, ch_end, channels): """ 记录生成的 RANGE 宏信息,供后续输出 manifest 使用 """ self.generated_ranges.append({ "range_macro": range_macro, "band": band, "bandwidth": int(bw), "channels": sorted(channels), "start_channel": ch_start, "end_channel": int(ch_end), "source_sheet": getattr(self, 'current_sheet_name', 'unknown') }) def clean_sheet_name(self, name): cleaned = re.sub(r'[^\w\.\=\u4e00-\u9fa5]', '', str(name)) return cleaned def 
    def match_sheet_to_config(self, sheet_name):
        cleaned = self.clean_sheet_name(sheet_name)
        for cfg in self.config["sheets"]:
            for pat in cfg["pattern"]:
                if re.search(pat, cleaned, re.I):
                    print(f" '{sheet_name}' → 清洗后: '{cleaned}'")
                    print(f" 匹配成功!'{sheet_name}' → [{cfg['band']}] 配置")
                    return cfg
        print(f" '{sheet_name}' → 清洗后: '{cleaned}'")
        print(f"未匹配到 '{sheet_name}' 的模式,跳过...")
        return None

    def convert_sheet_with_config(self, ws_obj, sheet_name, sheet_config):
        self.current_sheet_name = sheet_name  # 设置当前 sheet 名,供 _record_generated_range 使用
        header_row_idx, mode_col, rate_col = self.find_table_header_row(ws_obj)
        if header_row_idx is None:
            print(f" 跳过 '{sheet_name}':未找到 'Mode' 和 'Rate'")
            return

        auth_start, auth_end, auth_row = self.find_auth_power_above_row(ws_obj, header_row_idx)
        if auth_start is None:
            print(f" 跳过 '{sheet_name}':未找到 '认证功率'")
            return

        raw_entries = self.collect_tx_limit_data(
            ws_obj, sheet_config, header_row_idx, auth_row,
            auth_start, auth_end, mode_col, rate_col
        )
        if not raw_entries:
            print(f" 从 '{sheet_name}' 未收集到有效数据")
            return

        compressed = self.compress_tx_limit_entries(raw_entries, sheet_config)

        # 仅对 2.4G 频段进行信道边界统计
        band = str(sheet_config.get("band", "")).strip().upper()
        if band in ["2G", "2.4G", "2.4GHZ", "BGN"]:
            # 执行信道统计
            for entry in compressed:
                ch_range = entry.get("ch_list") or []
                if not ch_range:
                    continue
                ch_start = min(ch_range)
                ch_end = max(ch_range)
                # 更新全局最小最大值
                if self.global_ch_min is None or ch_start < self.global_ch_min:
                    self.global_ch_min = ch_start
                if self.global_ch_max is None or ch_end > self.global_ch_max:
                    self.global_ch_max = ch_end
                # 强制打印当前状态
                print(f" [Band={band}] 累计 2.4G 信道范围: CH{self.global_ch_min} – CH{self.global_ch_max}")

        self.tx_limit_entries.extend(compressed)
        print(f" 成功从 '{sheet_name}' 添加 {len(compressed)} 条压缩后 TX 限幅条目")

        # 可选调试输出
        if band == "2G" and self.global_ch_min is not None:
            print(f" 当前累计 2.4G 信道范围: CH{self.global_ch_min} – CH{self.global_ch_max}")

    def render_from_template(self, template_path, context, output_path):
        """
        根据模板生成文件。

        Args:
            template_path (str): 模板文件路径。
            context (dict): 渲染模板所需的上下文数据。
            output_path (str): 输出文件的路径。

        Returns:
            None

        Raises:
            FileNotFoundError: 如果指定的模板文件不存在。
            IOError: 如果在读取或写入文件时发生错误。
        """
        template_path = resource_path(template_path)
        with open(template_path, 'r', encoding='utf-8') as f:
            template = Template(f.read())
        content = template.render(**context)
        os.makedirs(os.path.dirname(output_path), exist_ok=True)
        with open(output_path, 'w', encoding='utf-8') as f:
            f.write(content)
        print(f" 已生成: {output_path}")

    def generate_outputs(self, finalize_manifest=True):
        print(" 正在执行 generate_outputs()...")
        if not self.tx_limit_entries:
            print(" 无 TX 限幅数据可输出")
            return

        # === Step 1: 使用 "HT" 分类 entries ===
        normal_entries = []
        ht_entries = []
        for e in self.tx_limit_entries:
            macro = e.get("rate_set_macro", "")
            if "HT" in macro:
                ht_entries.append(e)
            else:
                normal_entries.append(e)

        print(f" 自动分类结果:")
        print(f" ├─ Normal 模式(不含 HT): {len(normal_entries)} 条")
        print(f" └─ HT 模式(含 HT): {len(ht_entries)} 条")

        # === Step 2: 构建 g_tx_limit_normal 结构(按 bw 排序)===
        def build_normal_structure(entries):
            grouped = defaultdict(list)
            for e in entries:
                bw = str(e["bw"])
                grouped[bw].append(e)
            result = []
            for bw in ["20", "40", "80", "160"]:
                if bw in grouped:
                    sorted_entries = sorted(grouped[bw], key=lambda x: (x["ch_start"], x["encoded_power"]))
                    result.append((bw, sorted_entries))
            return result

        normal_struct = build_normal_structure(normal_entries)

        # === Step 3: 构建 g_tx_limit_ht 结构(严格顺序)===
        def build_ht_structure(entries):
            groups = defaultdict(list)
            for e in entries:
                bw = str(e["bw"])
                if "EXT4" in e["rate_set_macro"]:
                    level = "ext4"
                elif "EXT" in e["rate_set_macro"]:
                    level = "ext"
                else:
                    level = "base"
                groups[(level, bw)].append(e)

            order = [
                ("base", "20"), ("base", "40"),
                ("ext", "20"), ("ext", "40"),
                ("ext4", "20"), ("ext4", "40")
            ]
            segments = []
            active_segment_count = sum(1 for key in order if key in groups)
            for idx, (level, bw) in enumerate(order):
                key = (level, bw)
                if key not in groups:
                    continue
                seg_entries = sorted(groups[key], key=lambda x: (x["ch_start"], x["encoded_power"]))
                count = len(seg_entries)
                header_flags = f"CLM_DATA_FLAG_WIDTH_{bw} | CLM_DATA_FLAG_MEAS_COND"
                if idx < active_segment_count - 1:
                    header_flags += " | CLM_DATA_FLAG_MORE"
                if level != "base":
                    header_flags += " | CLM_DATA_FLAG_FLAG2"
                segment = {
                    "header_flags": header_flags,
                    "count": count,
                    "entries": seg_entries
                }
                if level == "ext":
                    segment["flag2"] = "CLM_DATA_FLAG2_RATE_TYPE_EXT"
                elif level == "ext4":
                    segment["flag2"] = "CLM_DATA_FLAG2_RATE_TYPE_EXT4"
                segments.append(segment)
            return segments

        ht_segments = build_ht_structure(ht_entries)

        # === Step 4: fallback range 和 CHANNEL_SET 自动创建逻辑 ===
        channel_set_comment = "Fallback 2.4GHz channel set (default)"
        if self.global_ch_min is not None and self.global_ch_max is not None:
            fallback_range_macro = f"RANGE_2G_20M_{self.global_ch_min}_{self.global_ch_max}"
            fallback_ch_start = self.global_ch_min
            fallback_ch_end = self.global_ch_max  # 待修改
            print(f" 正在设置监管 fallback 范围: {fallback_range_macro}")
            fallback_channel_set_id = 1
            self.channel_set_map[fallback_range_macro] = fallback_channel_set_id
            print(f" 已绑定监管 fallback: {fallback_range_macro} → CHANNEL_SET_{fallback_channel_set_id}")
        else:
            fallback_range_macro = "RANGE_2G_20M_1_11"
            fallback_ch_start = 1
            fallback_ch_end = 11
            fallback_channel_set_id = 1
            self.channel_set_map[fallback_range_macro] = fallback_channel_set_id
            print(" 未检测到有效的 2.4G 信道范围,使用默认 fallback: RANGE_2G_20M_1_11 → CHANNEL_SET_1")  # 待修改

        # === Step 5: 渲染上下文集合 ===
        timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        locale_id_safe = self.locale_id.replace('-', '_')

        context_clm = {
            "locale_id": locale_id_safe,
            "eirp_entries": self.eirp_entries or [],
            "fallback_encoded_eirp": 30,
            "fallback_range_macro": fallback_range_macro,
            "fallback_ch_start": fallback_ch_start,
            "fallback_ch_end": fallback_ch_end,
            "entries_grouped_by_bw": normal_struct,
        }

        context_tables = {
            "timestamp": timestamp,
            "locale_id": locale_id_safe,
            "locale_display_name": self.locale_display_name,
            "normal_table": normal_struct,
            "ht_segments": ht_segments,
            "fallback_encoded_eirp": 30,
            "fallback_range_macro": fallback_range_macro,
            "fallback_ch_start": fallback_ch_start,
            "fallback_ch_end": fallback_ch_end,
            "fallback_channel_set_id": fallback_channel_set_id,
            "channel_set_comment": channel_set_comment,
        }

        # 确保输出目录存在
        output_dir = Path(self.output_dir)
        output_dir.mkdir(parents=True, exist_ok=True)  # 待修改

        # 分析 tx_limit_table.c 的变更
        output_path = output_dir / "tx_limit_table.c"
        template_path = "templates/tx_limit_table.c.j2"  # 待修改

        # 读取原始文件内容(如果存在)
        original_lines = []
        file_existed = output_path.exists()
        if file_existed:
            try:
                original_lines = output_path.read_text(encoding='utf-8').splitlines()
            except Exception as e:
                print(f" 无法读取旧文件 {output_path}: {e}")

        # 生成新内容
        try:
            new_content = self.render_from_template_string(
                template_path=template_path,
                context=context_tables
            )
            new_lines = new_content.splitlines()
        except Exception as e:
            print(f" 模板渲染失败 ({template_path}): {e}")
            raise

        # 比较差异并决定是否写入
        if not file_existed:
            print(f" 将创建新文件: {output_path}")
        elif original_lines != new_lines:
            print(f" 检测到变更,将更新文件: {output_path}")
        else:
            print(f" 文件内容未变,跳过写入: {output_path}")
            # 即使不写也要继续后续流程(如 manifest)

        # 写入新内容(除非完全一致且已存在)
        if not file_existed or original_lines != new_lines:
            try:
                output_path.write_text(new_content, encoding='utf-8')
                print(f" 已写入 → {output_path}")
            except Exception as e:
                print(f" 写入文件失败 {output_path}: {e}")
                raise  # 待修改

        # === Step 6: 实际渲染其他模板文件 ===
        try:
            self.render_from_template(
                "templates/clm_locale.c.j2",
                context_clm,
                str(output_dir / f"locale_{self.locale_id.lower()}.c")
            )
            print(f" 生成 locale 文件 → locale_{self.locale_id.lower()}.c")

            self.render_from_template(
                "templates/clm_macros.h.j2",
                context_tables,
                str(output_dir / "clm_macros.h")
            )
            print(f" 生成宏定义头文件 → clm_macros.h")
        except Exception as e:
            print(f" 模板渲染失败: {e}")
            raise  # 待修改

        # === Step 7: 生成 manifest.json(仅当 finalize_manifest=True)===
        if finalize_manifest:
            used_range_macros = sorted(set(entry["range_macro"] for entry in self.tx_limit_entries))
            self.used_ranges = used_range_macros
            manifest_data = {
                "timestamp": datetime.now().isoformat(),
                "locale_id": self.locale_id,
                "locale_display_name": self.locale_display_name,
                "used_ranges_count": len(used_range_macros),
                "used_ranges": used_range_macros
            }
            manifest_path = output_dir / "generated_ranges_manifest.json"
            try:
                with open(manifest_path, 'w', encoding='utf-8') as f:
                    json.dump(manifest_data, f, indent=4, ensure_ascii=False)
                print(f" 已生成精简 manifest 文件: {manifest_path}")
                print(f" 共 {len(used_range_macros)} 个唯一 RANGE 宏被使用:")
                for macro in used_range_macros:
                    print(f" - {macro}")
                self.last_generated_manifest = str(manifest_path)
            except Exception as e:
                print(f" 写入 manifest 失败: {e}")
        else:
            print(" 跳过 manifest 文件生成 (finalize_manifest=False)")

        # === Final Step: 保存 channel_set 映射配置 ===
        self.save_channel_set_map_to_config()  # 待修改

        # 最终总结
        print(f" 所有输出文件生成完成。")
        print(f" 输出路径: {self.output_dir}")
        print(f" 功率表名称: {self.locale_display_name} ({self.locale_id})")  # 待修改

    def render_from_template_string(self, template_path, context):
        from jinja2 import Environment, FileSystemLoader
        import os

        # 解析模板目录
        template_dir = os.path.dirname(resource_path(template_path))
        loader = FileSystemLoader(template_dir)
        env = Environment(loader=loader)
        filename = os.path.basename(template_path)
        template = env.get_template(filename)
        return template.render(**context)

    def log_changes_to_file(self, changes, output_dir, locale_id, total_entries):
        """将变更摘要写入日志文件"""
        log_dir = Path(output_dir) / "change_logs"
        log_dir.mkdir(exist_ok=True)

        # 使用时间戳生成唯一文件名
        timestamp_str = datetime.now().strftime("%Y%m%d_%H%M%S")
        log_path = log_dir / f"change_{locale_id}_{timestamp_str}.log"
        timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

        with open(log_path, 'w', encoding='utf-8') as f:  # 覆盖写入最新变更
            f.write(f"========================================\n")
            f.write(f"CLM 变更日志\n")
            f.write(f"========================================\n")
            f.write(f"时间: {timestamp}\n")
            f.write(f"地区码: {locale_id}\n")
            f.write(f"总 TX 条目数: {total_entries}\n")
            f.write(f"\n")

            if not any(changes.values()):
                f.write(" 本次运行无任何变更,所有文件已是最新状态。\n")
            else:
                if changes['added_ranges']:
                    f.write(f" 新增 RANGE ({len(changes['added_ranges'])}):\n")
                    for r in sorted(changes['added_ranges']):
                        f.write(f" → {r}\n")
                    f.write(f"\n")
                if changes['removed_ranges']:
                    f.write(f" 删除 RANGE ({len(changes['removed_ranges'])}):\n")
                    for r in sorted(changes['removed_ranges']):
                        f.write(f" → {r}\n")
                    f.write(f"\n")
                if changes['modified_ranges']:
                    f.write(f" 修改 RANGE ({len(changes['modified_ranges'])}):\n")
                    for r in sorted(changes['modified_ranges']):
                        f.write(f" → {r}\n")
                    f.write(f"\n")

                other_adds = changes['other_additions']
                other_dels = changes['other_deletions']
                if other_adds or other_dels:
                    f.write(f" 其他变更:\n")
                    for line in other_adds[:10]:
                        f.write(f" ➕ {line}\n")
                    for line in other_dels[:10]:
                        f.write(f" ➖ {line}\n")
                    if len(other_adds) > 10 or len(other_dels) > 10:
                        f.write(f" ... 还有 {len(other_adds) + len(other_dels) - 20} 处未显示\n")
                    f.write(f"\n")

            f.write(f" 输出目录: {output_dir}\n")
            f.write(f"备份文件: {Path(self.target_c_file).with_suffix('.c.bak')}\n")
            f.write(f"========================================\n")

        print(f" 已保存变更日志 → {log_path}")

    def save_channel_set_map_to_config(self):
        """将当前 channel_set_map 写回 config.json 的 channel_set_map 字段"""
        try:
            # 清理:只保留 fallback 类型的 RANGE(可正则匹配)
            valid_keys = [
                k for k in self.channel_set_map.keys()
                if re.match(r'RANGE_[\dA-Z]+_\d+M_\d+_\d+', k)  # 如 RANGE_2G_20M_1_11
            ]
            filtered_map = {k: v for k, v in self.channel_set_map.items() if k in valid_keys}

            # 更新主配置中的字段
            self.config["channel_set_map"] = filtered_map  # 使用过滤后的版本
            with open(self.config_file_path, 'w', encoding='utf-8') as f:
                json.dump(self.config, f, indent=4, ensure_ascii=False)
            print(f" 已成功将精简后的 channel_set_map 写回配置文件: {filtered_map}")
        except Exception as e:
            print(f" 写入配置文件失败: {e}")
            raise

    def convert(self, file_path):
        # =============== 每次都更新备份 C 文件 ===============
        c_source = Path(self.target_c_file)
        c_backup = c_source.with_suffix(c_source.suffix + ".bak")
        if not c_source.exists():
            raise FileNotFoundError(f"目标 C 文件不存在: {c_source}")

        ext = os.path.splitext(file_path)[-1].lower()
        if ext == ".xlsx":
            wb = load_workbook(file_path, data_only=True)
            sheets = [{"sheet": ws, "format": "xlsx"} for ws in wb.worksheets]
        elif ext == ".xls":
            wb = xlrd.open_workbook(file_path)
            sheets = [{"sheet": ws, "format": "xls"} for ws in wb.sheets()]
        else:
            raise ValueError("仅支持 .xls 或 .xlsx 文件")

        for i, ws_obj in enumerate(sheets):
            sheet_name = wb.sheet_names()[i] if ext == ".xls" else ws_obj["sheet"].title
            config = self.match_sheet_to_config(sheet_name)
            if config:
                self.convert_sheet_with_config(ws_obj, sheet_name, config)

        self.generate_outputs()

    def parse_excel(self):
        """
        【UI 兼容】供 PyQt UI 调用的入口方法
        将当前 self.input_file 中的数据解析并填充到 tx_limit_entries
        """
        print(f"📂 开始解析: {self.input_file}")
        if not os.path.exists(self.input_file):
            print(f"❌ 文件不存在: {self.input_file}")
            raise FileNotFoundError(...)
        else:
            print(f"✅ 文件已找到,大小: {os.path.getsize(self.input_file)} 字节")

        if not hasattr(self, 'input_file') or not self.input_file:
            raise ValueError("未设置 input_file 属性!")
        if not os.path.exists(self.input_file):
            raise FileNotFoundError(f"文件不存在: {self.input_file}")

        print(f" 开始解析 Excel 文件: {self.input_file}")
        try:
            self.convert(self.input_file)  # 调用已有逻辑
            print(f" Excel 解析完成,共生成 {len(self.tx_limit_entries)} 条 TX 限幅记录")
        except Exception as e:
            print(f" 解析失败: {e}")
            raise


if __name__ == "__main__":
    import os

    # 切换到脚本所在目录(可选,根据实际需求)
    script_dir = os.path.dirname(__file__)
    os.chdir(script_dir)

    # 直接使用默认参数(或从其他地方获取)
    config_path = "config/config.json"
    output_dir = "output"
    locale_id = None      # 或指定默认值,如 "DEFAULT"
    display_name = None   # 或指定默认值
    input_file = "input/Archer BE900US 2.xlsx"

    # 创建转换器实例并执行
    converter = ExcelToCLMConverter(
        config_path=config_path,
        output_dir=output_dir,
        locale_id=locale_id,
        locale_display_name=display_name
    )
    converter.convert(input_file)
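补充一点:上面的代码从 self.config 中读取了 "sheets"(每项包含 "pattern" 正则列表和 "band" 字段)以及 "channel_set_map" 两个键。下面给出一个按这些键名推测的 config/config.json 结构示意,仅为假设示例:其中 pattern 与 band 的具体取值需按实际项目填写,__init__ 中可能用到的其他配置字段此处未列出。

import json

# 仅为结构示意(假设示例):字段名取自上面代码实际读取的键
# ("sheets" 列表中每项的 "pattern"、"band",以及顶层的 "channel_set_map"),
# pattern 与 band 的取值为示例,需按实际项目填写
example_config = {
    "sheets": [
        {"pattern": ["2\\.4G", "BGN"], "band": "2G"},
        {"pattern": ["5G"], "band": "5G"},
    ],
    "channel_set_map": {
        # save_channel_set_map_to_config() 会按 "RANGE_xxx" → 编号 的形式回写 fallback 映射
        "RANGE_2G_20M_1_11": 1,
    },
}

print(json.dumps(example_config, indent=4, ensure_ascii=False))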