Source code analysis of pd_process.c

pd_process.c implements the Picture Decision process of the SVT-AV1 encoder. Its main functions are summarized below.

Core functions

1 Picture decision context management: create and manage the data structures needed by the PD process

--Create and destroy the Picture Decision context

--Manage the data structures and buffers used during picture decision

2 Scene change detection: use histogram differences to detect scene cuts and abrupt changes

--Detect scene cuts in the video sequence

--Detect sudden changes (e.g. camera-flash effects)

--Detection is based on the Average Histogram Difference (AHD)

3 Reference picture set management: generation of reference picture sets, reference list setup, POC handling (see the sketch below)

--Generate and manage the reference picture set information

--Set up the reference picture lists List0 and List1

--Handle the reference pictures' POC (Picture Order Count) and display order
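To make item 3 concrete, here is a minimal standalone sketch of picking past references (List0) and future references (List1) by POC distance. It is not the SVT-AV1 API; build_ref_lists, MAX_REF and the simple DPB array are illustrative names only.

/* Hypothetical sketch: nearest-POC selection into List0 (past) and List1 (future). */
#include <stdio.h>
#include <stdint.h>

#define MAX_REF 4

static void build_ref_lists(const int32_t *dpb_poc, int dpb_size, int32_t cur_poc,
                            int32_t list0[MAX_REF], int *n0,
                            int32_t list1[MAX_REF], int *n1) {
    *n0 = *n1 = 0;
    for (int pass = 0; pass < MAX_REF; pass++) {
        int best_past = -1, best_future = -1;
        for (int i = 0; i < dpb_size; i++) {
            int32_t poc = dpb_poc[i];
            int used = 0;
            for (int k = 0; k < *n0; k++) used |= (list0[k] == poc);
            for (int k = 0; k < *n1; k++) used |= (list1[k] == poc);
            if (used) continue;
            if (poc < cur_poc && (best_past < 0 || poc > dpb_poc[best_past])) best_past = i;
            if (poc > cur_poc && (best_future < 0 || poc < dpb_poc[best_future])) best_future = i;
        }
        if (best_past >= 0 && *n0 < MAX_REF) list0[(*n0)++] = dpb_poc[best_past];
        if (best_future >= 0 && *n1 < MAX_REF) list1[(*n1)++] = dpb_poc[best_future];
    }
}

int main(void) {
    int32_t dpb[] = {0, 2, 4, 8, 16};               /* decoded pictures, by POC */
    int32_t l0[MAX_REF], l1[MAX_REF]; int n0, n1;
    build_ref_lists(dpb, 5, 6, l0, &n0, l1, &n1);   /* current picture POC = 6 */
    printf("List0:"); for (int i = 0; i < n0; i++) printf(" %d", (int)l0[i]);
    printf("\nList1:"); for (int i = 0; i < n1; i++) printf(" %d", (int)l1[i]);
    printf("\n");
    return 0;
}

For POC 6 this prints List0 = 4 2 0 and List1 = 8 16, i.e. the closest pictures on each side come first.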

4 Picture preprocessing: padding, statistics collection, etc. (see the sketch below)

--Pad the picture to a multiple of the SuperBlock size

--Pad the picture to a multiple of the minimum block size

--Collect picture statistics
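A minimal sketch of the padding computation in item 4, assuming 64x64 superblocks (the function name align_up and the example dimensions are illustrative, not taken from pd_process.c):

#include <stdio.h>
#include <stdint.h>

/* Round x up to the next multiple of a. */
static uint32_t align_up(uint32_t x, uint32_t a) { return (x + a - 1) / a * a; }

int main(void) {
    const uint32_t sb = 64;                 /* assumed superblock size */
    uint32_t w = 1920, h = 817;             /* example source dimensions */
    uint32_t padded_w = align_up(w, sb);
    uint32_t padded_h = align_up(h, sb);
    printf("pad right: %u, pad bottom: %u -> %ux%u\n",
           padded_w - w, padded_h - h, padded_w, padded_h);
    return 0;
}

The right/bottom padding produced this way is what makes the picture an integer number of superblocks before ME and TF.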

5 Mini-GOP management: manage the structure and configuration of groups of pictures (see the sketch below)

--Manage the mini-GOP structure and configuration

--Handle picture order and dependencies inside a mini-GOP

--Evaluate the activity of a mini-GOP
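The sketch below shows one property of a dyadic hierarchical mini-GOP that item 5 relies on: in a mini-GOP with L temporal layers (2^(L-1) pictures), the picture at offset k belongs to layer L - 1 - ctz(k), where ctz counts trailing zero bits. This is an illustration of the dyadic hierarchy, not code lifted from pd_process.c.

#include <stdio.h>
#include <stdint.h>

/* Temporal layer of the picture at 1-based offset k inside the mini-GOP. */
static int temporal_layer(uint32_t offset_in_minigop, int hierarchy_levels) {
    int tz = 0;                                          /* offset must be >= 1 */
    while (((offset_in_minigop >> tz) & 1) == 0) tz++;   /* count trailing zeros */
    return hierarchy_levels - 1 - tz;
}

int main(void) {
    const int levels = 4;                  /* mini-GOP of 8 pictures, layers 0..3 */
    for (uint32_t k = 1; k <= 8; k++)
        printf("offset %u -> layer %d\n", k, temporal_layer(k, levels));
    return 0;
}

Offset 8 lands on layer 0 (the base picture), offset 4 on layer 1, offsets 2 and 6 on layer 2, and the odd offsets on layer 3.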

6 Temporal filtering: manage the picture buffers used by temporal filtering (see the sketch below)

--Manage the picture buffers related to temporal filtering

--Handle temporal filtering in low-delay mode

--Multi-level temporal filtering (MCTF)
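As a rough illustration of item 6: the real MCTF in SVT-AV1 is block-based and motion-compensated, but its core idea is a weighted average of matching pixels across a window of frames. The toy sketch below blends co-located pixels with fixed integer weights; the function tf_blend and the weights are made up for illustration.

#include <stdio.h>
#include <stdint.h>

/* Weighted, rounded average of one pixel position across n frames. */
static uint8_t tf_blend(const uint8_t *pix, const uint16_t *weight, int n) {
    uint32_t acc = 0, wsum = 0;
    for (int i = 0; i < n; i++) { acc += (uint32_t)pix[i] * weight[i]; wsum += weight[i]; }
    return (uint8_t)((acc + wsum / 2) / wsum);
}

int main(void) {
    uint8_t samples[5]  = {100, 104, 102, 98, 101};  /* co-located pixels from 5 frames */
    uint16_t weights[5] = {1, 2, 4, 2, 1};           /* the center (current) frame weighs most */
    printf("filtered pixel = %u\n", tf_blend(samples, weights, 5));
    return 0;
}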

7 Prediction structure setup: configure the prediction structure according to the encoder settings (see the sketch below)

--Set the prediction structure based on the encoding configuration

--Support multiple prediction structures: flat, hierarchical, etc.

--Set key frames (Key Frame) and S-frames (S-Frame)
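A hedged sketch of the decision in item 7: pick a flat (low-delay) or hierarchical (random-access) structure from the configuration and force key frames on the intra period. The enum PredStructure, the EncCfg fields and the key-frame rule are simplified stand-ins, not the real SVT-AV1 types.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

typedef enum { PRED_FLAT, PRED_RANDOM_ACCESS } PredStructure;

typedef struct {
    bool    low_delay;        /* low-delay mode requested */
    int32_t intra_period;     /* key-frame interval; <= 0 means only the first frame */
} EncCfg;

static PredStructure choose_pred_structure(const EncCfg *cfg) {
    return cfg->low_delay ? PRED_FLAT : PRED_RANDOM_ACCESS;
}

static bool is_key_frame(const EncCfg *cfg, uint64_t poc) {
    if (poc == 0) return true;
    if (cfg->intra_period <= 0) return false;
    return (poc % (uint64_t)(cfg->intra_period + 1)) == 0;
}

int main(void) {
    EncCfg cfg = { .low_delay = false, .intra_period = 63 };
    printf("structure: %s\n",
           choose_pred_structure(&cfg) == PRED_FLAT ? "flat" : "hierarchical");
    for (uint64_t poc = 0; poc < 130; poc += 64)
        printf("poc %llu key frame: %d\n", (unsigned long long)poc, is_key_frame(&cfg, poc));
    return 0;
}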

8 Picture decision main loop: process pictures and send them to the downstream modules (see the sketch below)

--Process the results coming from Picture Analysis

--Set the encoding parameters for each picture

--Send the processed pictures to the following stages (ME, TF, etc.)
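The loop in item 8 follows the usual SVT-AV1 kernel pattern: block on the input FIFO, decide, post to the output FIFO. The sketch below only mimics that shape; get_full_object, post_full_object and the Picture struct are stand-ins, not the real EbSystemResource API.

#include <stdio.h>
#include <stdbool.h>

typedef struct { unsigned poc; bool scene_change; } Picture;

/* Stand-in input queue of analysed pictures (a real kernel blocks on a semaphore). */
static Picture input_fifo[3] = {{0, true}, {1, false}, {2, false}};
static int in_head = 0;

static bool get_full_object(Picture **pic) {
    if (in_head >= 3) return false;          /* demo input exhausted */
    *pic = &input_fifo[in_head++];
    return true;
}

static void post_full_object(const Picture *pic) {
    printf("send poc %u to ME/TF (scene_change=%d)\n", pic->poc, pic->scene_change);
}

int main(void) {
    Picture *pic;
    while (get_full_object(&pic)) {          /* 1. receive from Picture Analysis */
        if (pic->scene_change)               /* 2. per-picture decisions (key frame, refs, ...) */
            printf("poc %u marked as key frame\n", pic->poc);
        post_full_object(pic);               /* 3. forward to the next stage */
    }
    return 0;
}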

9 Screen content detection (see the sketch below)

--Detect whether the input is screen content (Screen Content)

--Screen content requires dedicated encoding strategies
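A toy illustration of the idea behind item 9: screen captures tend to contain blocks with very few distinct luma values, so counting such "palette-like" blocks is one way to flag screen content. The 8x8 block size and the color threshold below are assumptions for illustration, not the detector actually used in pd_process.c.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* True if the 8x8 block uses at most max_colors distinct luma values. */
static bool block_is_flat_palette(const uint8_t *blk, int stride, int max_colors) {
    bool seen[256] = {false};
    int colors = 0;
    for (int y = 0; y < 8; y++)
        for (int x = 0; x < 8; x++) {
            uint8_t v = blk[y * stride + x];
            if (!seen[v]) { seen[v] = true; colors++; }
        }
    return colors <= max_colors;
}

int main(void) {
    uint8_t text_like[8 * 8];
    for (int i = 0; i < 64; i++) text_like[i] = (i % 7 == 0) ? 255 : 16;  /* two-tone block */
    printf("screen-content-like block: %d\n", block_is_flat_palette(text_like, 8, 4));
    return 0;
}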

10 Adaptive Quantization (AQ) (see the sketch below)

--Initialize cyclic refresh (Cyclic Refresh)

--Manage AQ-related parameters
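The cyclic-refresh idea mentioned in item 10 can be pictured as a rotating stripe of superblocks that receives a negative QP offset, so the whole picture is refreshed once per period. The numbers below (16 superblocks, period 4, delta -6) are examples, not SVT-AV1 defaults.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    const int num_sb = 16;          /* superblocks in the picture */
    const int period = 4;           /* refresh the whole frame every 4 frames */
    const int sb_per_frame = num_sb / period;
    const int8_t refresh_qp_delta = -6;

    for (int frame = 0; frame < period; frame++) {
        int start = frame * sb_per_frame;
        printf("frame %d:", frame);
        for (int sb = 0; sb < num_sb; sb++) {
            int8_t dqp = (sb >= start && sb < start + sb_per_frame) ? refresh_qp_delta : 0;
            printf(" %3d", dqp);
        }
        printf("\n");
    }
    return 0;
}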

Position in the encoding pipeline

Input: results from the Picture Analysis process

Output: sent to the Motion Estimation and Temporal Filtering modules

关键数据结构

-PictureDecisionContext: 图片决策上下文,包含所有决策相关的状态信息

PictureParentControlSet: 图片父控制集,包含图片的所有编码参数

SequenceControlSet 序列控制集,包含序列级别的配置。
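For orientation only, here are heavily simplified sketches of what these three structures hold. The real definitions live in the SVT-AV1 headers and contain far more fields; the field names below are illustrative.

#include <stdint.h>
#include <stdbool.h>

typedef struct {                    /* sequence-level configuration (cf. SequenceControlSet) */
    uint32_t max_input_luma_width;
    uint32_t max_input_luma_height;
    uint32_t intra_period_length;
    uint8_t  hierarchical_levels;
} SeqCtrlSetSketch;

typedef struct {                    /* per-picture parameters (cf. PictureParentControlSet) */
    uint64_t picture_number;        /* POC */
    uint8_t  temporal_layer_index;
    bool     scene_change_flag;
    bool     is_ref;
} PicParentCtrlSetSketch;

typedef struct {                    /* state carried across pictures (cf. PictureDecisionContext) */
    uint64_t last_key_frame_poc;
    uint32_t mini_gop_count;
    PicParentCtrlSetSketch *reorder_queue[64];  /* pictures waiting for decisions */
} PicDecisionCtxSketch;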


Processing flow:

1 Receive the results from Picture Analysis

2 Perform scene change detection

3 Set the prediction structure and reference picture sets

4 Perform picture preprocessing and collect statistics

5 Manage the mini-GOP structure

6 Set the picture encoding parameters

7 Send the picture to the downstream stages

Performance optimizations:

Multi-threaded processing

Low-delay mode support

Multi-level temporal filtering

Adaptive quantization

/*

Defines: constant definitions

Hierarchy offsets used as layer indices in the prediction structure.

These offsets locate pictures of different layers within a hierarchical prediction structure.

Function: calc_ahd

Purpose: compute the Average Histogram Difference (AHD).

The function is used for scene change detection: it compares the histograms of the current picture and a reference picture, region by region, to decide whether the scene has changed. The larger the AHD, the larger the difference between the two frames.

Parameters:

scs: sequence control set, holding sequence-level configuration

input_pcs: control set of the input picture, i.e. the picture being analysed

ref_pcs: control set of the reference picture used for comparison

active_region_cnt: output parameter counting the number of active regions

Return value: the computed average histogram difference (AHD)

Algorithm:

1 Divide the picture into multiple regions

2 For each region, compute the difference between its histogram and the histogram of the corresponding region of the reference picture

3 Accumulate the per-region differences into the total AHD

4 If a region's difference exceeds a threshold, mark it as an active region

*/

static uint32_t calc_ahd(
    SequenceControlSet      *scs,               // sequence control set
    PictureParentControlSet *input_pcs,         // input (current) picture control set
    PictureParentControlSet *ref_pcs,           // reference picture control set
    uint8_t                 *active_region_cnt) // output: number of active regions
{
    uint32_t ahd = 0; // accumulated histogram difference over all regions

    // Region dimensions: picture width/height divided by the number of
    // analysis regions in each direction
    uint32_t region_width  = ref_pcs->enhance_pic->width  / scs->picture_analysis_number_of_regions_per_width;
    uint32_t region_height = ref_pcs->enhance_pic->height / scs->picture_analysis_number_of_regions_per_height;

    // Iterate over every analysis region of the picture
    // (outer loop: regions along the width, inner loop: regions along the height)
    for (uint32_t region_in_picture_width_index = 0;
         region_in_picture_width_index < scs->picture_analysis_number_of_regions_per_width;
         region_in_picture_width_index++) {
        for (uint32_t region_in_picture_height_index = 0;
             region_in_picture_height_index < scs->picture_analysis_number_of_regions_per_height;
             region_in_picture_height_index++) {
            uint32_t ahd_per_region = 0;

            // Per-bin absolute histogram difference between the two pictures,
            // accumulated into this region's AHD (256 luma histogram bins)
            for (int bin = 0; bin < 256; bin++) {
                ahd_per_region += ABS(
                    (int32_t)input_pcs->picture_histogram[region_in_picture_width_index][region_in_picture_height_index][bin] -
                    (int32_t)ref_pcs->picture_histogram[region_in_picture_width_index][region_in_picture_height_index][bin]);
            }

            ahd += ahd_per_region; // add this region's difference to the total

            // A region whose difference exceeds its own area (on average more
            // than one count per pixel) is considered active, i.e. it changed
            // significantly
            if (ahd_per_region > (region_width * region_height))
                (*active_region_cnt)++;
        }
    }

    return ahd; // total average-histogram-difference value
}
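To show how the two outputs of calc_ahd can be used, the hedged sketch below makes a scene-change decision from the total AHD and the active-region count. The thresholds (half the picture area, eight active regions) and the function is_scene_change are illustrative assumptions, not the values used in pd_process.c.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

static bool is_scene_change(uint32_t ahd, uint8_t active_region_cnt,
                            uint32_t width, uint32_t height) {
    uint64_t area = (uint64_t)width * height;
    bool big_difference = (uint64_t)ahd > area / 2;   /* > 0.5 per pixel on average */
    bool widespread     = active_region_cnt >= 8;     /* change affects many regions */
    return big_difference && widespread;
}

int main(void) {
    /* values as they might come back from calc_ahd() for a 1920x1080 picture */
    printf("scene change: %d\n", is_scene_change(1200000, 12, 1920, 1080));
    printf("scene change: %d\n", is_scene_change(300000, 3, 1920, 1080));
    return 0;
}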

"C:\Program Files\Python39\python.exe" D:/李晓永/个人/仿真设计平台/结题/3_31版程序_发高德/jt_cryocooler.py concurrent.futures.process._RemoteTraceback: """ Traceback (most recent call last): File "C:\Program Files\Python39\lib\concurrent\futures\process.py", line 243, in _process_worker r = call_item.fn(*call_item.args, **call_item.kwargs) File "D:\李晓永\个人\仿真设计平台\结题\3_31版程序_发高德\jt_cryocooler.py", line 461, in run_simulation_with_excel df = pd.read_excel(config_file, sheet_name=0) File "C:\Program Files\Python39\lib\site-packages\pandas\io\excel\_base.py", line 495, in read_excel io = ExcelFile( File "C:\Program Files\Python39\lib\site-packages\pandas\io\excel\_base.py", line 1567, in __init__ self._reader = self._engines[engine]( File "C:\Program Files\Python39\lib\site-packages\pandas\io\excel\_openpyxl.py", line 552, in __init__ import_optional_dependency("openpyxl") File "C:\Program Files\Python39\lib\site-packages\pandas\compat\_optional.py", line 164, in import_optional_dependency raise ImportError(msg) ImportError: Pandas requires version '3.1.0' or newer of 'openpyxl' (version '3.0.10' currently installed). """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "D:\李晓永\个人\仿真设计平台\结题\3_31版程序_发高德\jt_cryocooler.py", line 588, in <module> run_simulations2(files4) File "D:\李晓永\个人\仿真设计平台\结题\3_31版程序_发高德\jt_cryocooler.py", line 520, in run_simulations2 future.result() File "C:\Program Files\Python39\lib\concurrent\futures\_base.py", line 433, in result return self.__get_result() File "C:\Program Files\Python39\lib\concurrent\futures\_base.py", line 389, in __get_result raise self._exception ImportError: Pandas requires version '3.1.0' or newer of 'openpyxl' (version '3.0.10' currently installed). Process finished with exit code 1该错误的原因是什么
09-27
[ 87.455067][ T2222] PD_ERR: qsh_process : EF:qsh_process:0x4:cbworker0:0x10000034:sdc_err_handler.c:109:Crash on SDC [ 87.458285][ T643] qcom_smp2p soc:qcom,smp2p-adsp: 2: sleepstate_see: status:0 val:be0101 [ 87.463939][ T643] qcom_smp2p soc:qcom,smp2p-adsp: 2: sleepstate_see: status:0 val:be0101 [ 87.464053][ T643] qcom_q6v5_pas 3000000.remoteproc-adsp: rproc crash at cycle:2021392846, recovery state: disabled and lead to device crash [ 87.464153][ T643] qcom_q6v5_pas 3000000.remoteproc-adsp: fatal error received: err_qdi.c:1215:EF:qsh_process:0x4:cbworker0:0x10000034:sdc_err_handler.c:109:Crash on SDC [ 87.464275][ T643] Kernel panic - not syncing: Panicking, remoteproc 3000000.remoteproc-adsp crashed reason: err_qdi.c:1215:EF:qsh_process:0x4:cbworker0:0x10000034:sdc_err_handler.c:109:Crash on SDC [ 87.464322][ T643] CPU: 0 PID: 643 Comm: irq/200-smp2p_2 Tainted: G S B W OE 6.6.57-android15-8-o-g173e77461dbc-4k #1 1400000003000000474e55006f8a2ddfecd3b352 [ 87.464368][ T643] Hardware name: Qualcomm Technologies, Inc. Kera MTP,neutron (DT) [ 87.464390][ T643] Call trace: [ 87.464403][ T643] dump_backtrace+0x178/0x1a8 [ 87.464448][ T643] show_stack+0x2c/0x40 [ 87.464482][ T643] dump_stack_lvl+0x7c/0x9c [ 87.464528][ T643] dump_stack+0x1c/0x28 [ 87.464570][ T643] panic+0x2c8/0x600 [ 87.464606][ T643] q6v5_fatal_interrupt+0xac4/0xbdc [qcom_q6v5 1400000003000000474e5500b09a35819f0c8dfd] [ 87.464773][ T643] handle_nested_irq+0x16c/0x238 [ 87.464821][ T643] qcom_smp2p_intr+0x67c/0x948 [oplus_smp2p 1400000003000000474e550010ac4277d076d1d3] [ 87.464973][ T643] irq_thread_fn+0x64/0xe8 [ 87.465015][ T643] irq_thread+0x35c/0x550 [ 87.465056][ T643] kthread+0x210/0x2d0 [ 87.465091][ T643] ret_from_fork+0x10/0x20 [ 87.465131][ T643] SMP: stopping secondary CPUs
12-13
D:\pythonProject1\PaddleDetection-release-2.8.1>python tools/export_model.py -c configs/ppyolo/ppyolo_r18vd_coco.yml --output_dir ./inference_model -o weights=tools/output/249.pdparams 信息: 用提供的模式无法找到文件。 Warning: Unable to use JDE/FairMOT/ByteTrack, please install lap, for example: `pip install lap`, see https://github.com/gatagat/lap Warning: Unable to use numba in PP-Tracking, please install numba, for example(python3.7): `pip install numba==0.56.4` Warning: Unable to use numba in PP-Tracking, please install numba, for example(python3.7): `pip install numba==0.56.4` [06/05 14:50:46] ppdet.utils.checkpoint INFO: Skipping import of the encryption module. Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics [06/05 14:50:47] ppdet.utils.checkpoint INFO: Finish loading model weights: tools/output/249.pdparams Traceback (most recent call last): File "D:\pythonProject1\PaddleDetection-release-2.8.1\tools\export_model.py", line 148, in <module> main() File "D:\pythonProject1\PaddleDetection-release-2.8.1\tools\export_model.py", line 144, in main run(FLAGS, cfg) File "D:\pythonProject1\PaddleDetection-release-2.8.1\tools\export_model.py", line 105, in run trainer.export(FLAGS.output_dir, for_fd=FLAGS.for_fd) File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\engine\trainer.py", line 1294, in export static_model, pruned_input_spec, input_spec = self._get_infer_cfg_and_input_spec( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\engine\trainer.py", line 1240, in _get_infer_cfg_and_input_spec static_model, pruned_input_spec = self._model_to_static(model, input_spec, prune_input) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\engine\trainer.py", line 1155, in _model_to_static input_spec, static_model.forward.main_program, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\jit\dy2static\program_translator.py", line 1118, in main_program concrete_program = self.concrete_program ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\jit\dy2static\program_translator.py", line 1002, in concrete_program return self.concrete_program_specify_input_spec(input_spec=None) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\jit\dy2static\program_translator.py", line 1046, in concrete_program_specify_input_spec concrete_program, _ = self.get_concrete_program( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\jit\dy2static\program_translator.py", line 935, in get_concrete_program concrete_program, partial_program_layer = self._program_cache[ ^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\jit\dy2static\program_translator.py", line 1694, in __getitem__ self._caches[item_id] = self._build_once(item) ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\jit\dy2static\program_translator.py", line 1631, in _build_once concrete_program = ConcreteProgram.pir_from_func_spec( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\decorator.py", line 235, in fun return caller(func, *(extras + args), **kw) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\base\wrapped_decorator.py", line 40, in __impl__ return wrapped_func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\base\dygraph\base.py", line 101, in __impl__ return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\jit\dy2static\program_translator.py", line 1302, in pir_from_func_spec error_data.raise_new_exception() File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\jit\dy2static\error.py", line 454, in raise_new_exception raise new_exception from None TypeError: In transformed code: File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\architectures\meta_arch.py", line 59, in forward if self.training: File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\architectures\meta_arch.py", line 69, in forward for inp in inputs_list: File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\architectures\meta_arch.py", line 76, in forward outs.append(self.get_pred()) File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\architectures\yolo.py", line 150, in get_pred return self._forward() File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\architectures\yolo.py", line 92, in _forward if self.training: File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\architectures\yolo.py", line 103, in _forward if self.for_mot: File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\architectures\yolo.py", line 115, in _forward if self.return_idx: File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\architectures\yolo.py", line 119, in _forward elif self.post_process is not None: File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\architectures\yolo.py", line 121, in _forward bbox, bbox_num, nms_keep_idx = self.post_process( File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\post_process.py", line 69, in __call__ if self.nms is not None: File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\post_process.py", line 71, in __call__ bbox_pred, bbox_num, before_nms_indexes = self.nms(bboxes, score, File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\layers.py", line 605, in __call__ def __call__(self, bbox, score, *args): return ops.matrix_nms( ~~~~~~~~~~~~~~~~~~~~~~ <--- HERE bboxes=bbox, scores=score, File "D:\pythonProject1\PaddleDetection-release-2.8.1\ppdet\modeling\ops.py", line 714, in matrix_nms helper.append_op( File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\base\layer_helper.py", line 57, in append_op return self.main_program.current_block().append_op(*args, **kwargs) File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\base\framework.py", line 4701, in append_op op = Operator( File "C:\Users\Admin\AppData\Roaming\Python\Python311\site-packages\paddle\base\framework.py", line 3329, in __init__ raise TypeError( TypeError: The type of '%BBoxes' in operator matrix_nms should be one of [str, bytes, Variable]. but received : Value(define_op_name=pd_op.concat, index=0, dtype=tensor<-1x3840x4xf32>, stop_gradient=False) 中文回答
06-06
import numpy as np import pandas as pd from scipy import stats from scipy.linalg import eig from sklearn.neighbors import KernelDensity import os class DualFactorAHP: """ 双因子融合AHP (PD4-AHP + Spe-AHP) 实现类 适用于88家公司、5年、10个二元指标的面板数据 """ def __init__(self, n_companies=88, n_years=5, n_indicators=10): self.n_companies = n_companies self.n_years = n_years self.n_indicators = n_indicators self.RI = 1.49 # n=10时的随机一致性指标 def load_data_from_csv(self, file_path, company_col='company', year_col='year'): """ 从CSV文件加载数据 假设CSV格式为长格式: company, year, indicator1, indicator2, ..., indicator10 """ print(f"从文件加载数据: {file_path}") if not os.path.exists(file_path): raise FileNotFoundError(f"数据文件不存在: {file_path}") # 读取CSV文件 df = pd.read_csv(file_path) # 验证必要的列存在 required_columns = [company_col, year_col] + [f'X{i+1}' for i in range(self.n_indicators)] missing_cols = [col for col in required_columns if col not in df.columns] if missing_cols: raise ValueError(f"缺少必要的列: {missing_cols}") # 重塑为3D数组 (公司×年份×指标) data_3d = np.full((self.n_companies, self.n_years, self.n_indicators), np.nan) companies = sorted(df[company_col].unique()) years = sorted(df[year_col].unique()) for i, company in enumerate(companies[:self.n_companies]): for j, year in enumerate(years[:self.n_years]): mask = (df[company_col] == company) & (df[year_col] == year) if mask.any(): row = df[mask].iloc[0] for k in range(self.n_indicators): data_3d[i, j, k] = row[f'X{k+1}'] return data_3d def load_data_from_excel(self, file_path, sheet_name=0, company_col='company', year_col='year'): """ 从Excel文件加载数据 """ print(f"从Excel文件加载数据: {file_path}") if not os.path.exists(file_path): raise FileNotFoundError(f"数据文件不存在: {file_path}") # 读取Excel文件 df = pd.read_excel(file_path, sheet_name=sheet_name) # 验证必要的列存在 required_columns = [company_col, year_col] + [f'X{i+1}' for i in range(self.n_indicators)] missing_cols = [col for col in required_columns if col not in df.columns] if missing_cols: raise ValueError(f"缺少必要的列: {missing_cols}") # 重塑为3D数组 data_3d = np.full((self.n_companies, self.n_years, self.n_indicators), np.nan) companies = sorted(df[company_col].unique()) years = sorted(df[year_col].unique()) for i, company in enumerate(companies[:self.n_companies]): for j, year in enumerate(years[:self.n_years]): mask = (df[company_col] == company) & (df[year_col] == year) if mask.any(): row = df[mask].iloc[0] for k in range(self.n_indicators): data_3d[i, j, k] = row[f'X{k+1}'] return data_3d def load_data_from_3d_array(self, file_path): """ 直接从.npy文件加载3D数组 """ print(f"从npy文件加载3D数组: {file_path}") return np.load(file_path) def create_sample_data(self): """ 生成示例数据(当没有真实数据时使用) """ print("生成示例数据...") np.random.seed(42) return np.random.choice([0, 1], size=(self.n_companies, self.n_years, self.n_indicators), p=[0.3, 0.7]) def data_preprocessing(self, raw_data): """ 步骤1: 数据预处理 """ print("步骤1: 数据预处理...") data_processed = raw_data.copy() missing_mask = np.isnan(data_processed) if np.any(missing_mask): print(f"发现 {np.sum(missing_mask)} 个缺失值,进行填充...") # 使用同年度该指标的均值四舍五入填充 for year in range(self.n_years): year_data = data_processed[:, year, :] for indicator in range(self.n_indicators): indicator_data = year_data[:, indicator] if np.any(np.isnan(indicator_data)): mean_val = np.nanmean(indicator_data) fill_val = np.round(mean_val) nan_mask = np.isnan(indicator_data) data_processed[nan_mask, year, indicator] = fill_val # 验证数据为二元数据 unique_vals = np.unique(data_processed) assert set(unique_vals).issubset({0, 1}), "数据必须为二元数据(0/1)" return data_processed # 其他方法保持不变... 
def calculate_spearman_matrix(self, data): """步骤2: 构造Spe-AHP指标相关矩阵""" # 实现同上... pass def calculate_pd4_time_matrix(self, data): """步骤3: 构造PD4-AHP时间因子矩阵""" # 实现同上... pass def dual_factor_fusion(self, spearman_matrices, time_matrices): """步骤4: 双因子矩阵融合""" # 实现同上... pass def consistency_check(self, judgment_matrices): """步骤5: 一致性检验""" # 实现同上... pass # 使用示例 - 多种数据输入方式 def main(): # 创建AHP处理器 ahp_processor = DualFactorAHP(n_companies=88, n_years=5, n_indicators=10) # 方式1: 从CSV文件加载(推荐) try: # 在这里填入你的CSV文件路径 csv_file_path = "data/company_panel_data.csv" # ← 在这里填入你的数据地址 raw_data = ahp_processor.load_data_from_csv( file_path=csv_file_path, company_col='company', # 公司列名 year_col='year' # 年份列名 ) print("成功从CSV文件加载数据") except FileNotFoundError: print("CSV文件未找到,尝试Excel文件...") # 方式2: 从Excel文件加载 try: # 在这里填入你的Excel文件路径 excel_file_path = "data/company_panel_data.xlsx" # ← 在这里填入你的数据地址 raw_data = ahp_processor.load_data_from_excel( file_path=excel_file_path, sheet_name=0, # 工作表名称或索引 company_col='company', year_col='year' ) print("成功从Excel文件加载数据") except FileNotFoundError: print("Excel文件未找到,尝试npy文件...") # 方式3: 从npy文件加载(如果是3D数组格式) try: npy_file_path = "data/panel_data_3d.npy" # ← 在这里填入你的数据地址 raw_data = ahp_processor.load_data_from_3d_array(npy_file_path) print("成功从npy文件加载数据") except FileNotFoundError: print("所有数据文件均未找到,使用示例数据...") # 方式4: 生成示例数据 raw_data = ahp_processor.create_sample_data() print("使用生成的示例数据") # 验证数据形状 print(f"数据形状: {raw_data.shape} (公司×年份×指标)") # 执行AHP流程 processed_data = ahp_processor.data_preprocessing(raw_data) spearman_matrices = ahp_processor.calculate_spearman_matrix(processed_data) time_matrices = ahp_processor.calculate_pd4_time_matrix(processed_data) judgment_matrices = ahp_processor.dual_factor_fusion(spearman_matrices, time_matrices) consistency_results = ahp_processor.consistency_check(judgment_matrices) # 输出结果 print("\n=== 分析完成 ===") for result in consistency_results: status = "通过" if result['passed'] else "未通过" print(f"第{result['period']}期: CR={result['CR']:.4f} ({status})") # 简单使用示例(如果你知道确切的数据格式) def simple_example(): """ 如果你知道确切的数据格式,可以使用这个简化版本 """ ahp = DualFactorAHP() # 直接指定文件路径 data_path = "C:/你的路径/公司数据.csv" # ← 在这里填入你的确切数据地址 if data_path.endswith('.csv'): raw_data = ahp.load_data_from_csv(data_path) elif data_path.endswith(('.xlsx', '.xls')): raw_data = ahp.load_data_from_excel(data_path) elif data_path.endswith('.npy'): raw_data = ahp.load_data_from_3d_array(data_path) else: raise ValueError("不支持的文件格式") # 继续执行后续步骤... processed_data = ahp.data_preprocessing(raw_data) # ... 其他步骤 if __name__ == "__main__": main()
10-21
Exception in thread Thread-1: Traceback (most recent call last): File "D:\python377\lib\threading.py", line 926, in _bootstrap_inner self.run() File "D:\python377\lib\threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "D:\python377\lib\multiprocessing\pool.py", line 412, in _handle_workers pool._maintain_pool() File "D:\python377\lib\multiprocessing\pool.py", line 248, in _maintain_pool self._repopulate_pool() File "D:\python377\lib\multiprocessing\pool.py", line 241, in _repopulate_pool w.start() File "D:\python377\lib\multiprocessing\process.py", line 112, in start self._popen = self._Popen(self) File "D:\python377\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "D:\python377\lib\multiprocessing\popen_spawn_win32.py", line 72, in __init__ None, None, False, 0, env, None, None) OSError: [WinError 1455] 页面文件太小,无法完成操作。 Traceback (most recent call last): File "<string>", line 1, in <module> File "D:\python377\lib\multiprocessing\spawn.py", line 105, in spawn_main exitcode = _main(fd) File "D:\python377\lib\multiprocessing\spawn.py", line 114, in _main prepare(preparation_data) File "D:\python377\lib\multiprocessing\spawn.py", line 225, in prepare _fixup_main_from_path(data['init_main_from_path']) File "D:\python377\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path run_name="__mp_main__") File "D:\python377\lib\runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "D:\python377\lib\runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "D:\python377\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "D:\code\nba_random_forest_model.py", line 1, in <module> import pandas as pd File "D:\python377\lib\site-packages\pandas\__init__.py", line 17, in <module> "Unable to import required dependencies:\n" + "\n".join(missing_dependencies) ImportError: Unable to import required dependencies: numpy: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.7 from "D:\python377\python.exe" * The NumPy version is: "1.19.5" and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. Original error was: DLL load failed: 出现了内部错误。 D:\python377\lib\site-packages\sklearn\ensemble\weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release. from numpy.core.umath_tests import inner1d D:\python377\lib\site-packages\sklearn\ensemble\weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release. from numpy.core.umath_tests import inner1d D:\python377\lib\site-packages\sklearn\ensemble\weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release. 
from numpy.core.umath_tests import inner1d D:\python377\lib\site-packages\sklearn\ensemble\weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release. from numpy.core.umath_tests import inner1d D:\python377\lib\site-packages\sklearn\ensemble\weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release. from numpy.core.umath_tests import inner1d D:\python377\lib\site-packages\sklearn\ensemble\weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release. from numpy.core.umath_tests import inner1d 在机器学习的过程中出现这样的问题是怎么回事
09-19
评论
成就一亿技术人!
拼手气红包6.0元
还能输入1000个字符
 
红包 添加红包
表情包 插入表情
 条评论被折叠 查看
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值