Compiled EXE fails on other machines with: "The procedure entry point _ftol2 could not be located in the dynamic link library msvcrt.dll"

This post records how I tracked down a specific dynamic-linking error in an application built with VS2008: "The procedure entry point _ftol2 could not be located in the dynamic link library msvcrt.dll". The problem was finally solved by switching the build environment and removing an incompatible copy of IPHLPAPI.DLL.

A few days ago, a program I compiled ran fine on every other machine. Recently I updated some code and, since I had reinstalled my OS, built a new EXE with VS2008. Running it on other machines now produced: "The procedure entry point _ftol2 could not be located in the dynamic link library msvcrt.dll."
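For context: on 32-bit Visual C++, _ftol2 is a C runtime helper that the compiler emits for float-to-integer conversions, so an EXE can end up importing it even though the source code never names it. The loader error means that some module in the process imports _ftol2 from msvcrt.dll, and the msvcrt.dll it finds does not export it. A minimal sketch of the kind of code that pulls the helper in:

```c
/* Sketch: a plain float-to-long cast is lowered by 32-bit MSVC into a
 * call to the CRT helper _ftol2, which the module then imports. */
#include <stdio.h>

int main(void)
{
    double d = 3.75;
    long n = (long)d;   /* compiles to a _ftol2 call on 32-bit VC */
    printf("%ld\n", n);
    return 0;
}
```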

I searched for a fix for a long time without finding anything that worked, so I recompiled the code from a few days earlier, and to my surprise it now produced the same message: "The procedure entry point _ftol2 could not be located in the dynamic link library msvcrt.dll."

In the end I built it on a classmate's machine, under XP with VC2008, and the resulting EXE ran fine on the other machines again.

If you run into the same situation, you can give that a try.

 

Other approaches (none of them succeeded for me, but they should work on some machines): 1. Package up the VC9 files under C:/WINDOWS/WinSxS, together with everything under C:/WINDOWS/WinSxS/Manifests and C:/WINDOWS/WinSxS/Policies, and copy it all to the target machine (suggested by fibbery on a forum).

2. Replace the msvcrt.dll under windows/system32/ with a newer version of msvcrt.dll (found on Baidu). Before resorting to this, it is worth checking whether the target machine's msvcrt.dll really lacks the _ftol2 export; see the sketch after this list.

3. Use System Restore to roll back a machine that cannot run the program, especially one where it ran fine at first.
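A quick way to check item 2's premise on the failing machine is to ask its msvcrt.dll directly whether it exports _ftol2. This is my own diagnostic sketch using standard Win32 calls, not something from the original post:

```c
/* Diagnostic sketch: report whether the msvcrt.dll the loader resolves
 * on this machine exports the _ftol2 helper. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HMODULE h;
    FARPROC p;

    h = LoadLibraryA("msvcrt.dll");
    if (h == NULL) {
        printf("could not load msvcrt.dll\n");
        return 1;
    }
    p = GetProcAddress(h, "_ftol2");
    printf("_ftol2 is %s by this msvcrt.dll\n",
           p != NULL ? "exported" : "NOT exported");
    FreeLibrary(h);
    return 0;
}
```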


I just ran the experiment again and finally found the real cause. It was neither a build problem nor a problem with msvcrt.dll itself: I had copied the Vista version of IPHLPAPI.DLL into the program directory, and since that version does not match the one shipped with XP, loading it triggered the error. After deleting IPHLPAPI.DLL, everything went back to normal. (Facepalm.)
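The reason a stray IPHLPAPI.DLL sitting next to the EXE takes effect at all is the Windows DLL search order: the application's own directory is searched before system32, so the local copy shadows the system one. A small sketch (my addition) to confirm which copy actually gets loaded:

```c
/* Sketch: print the full path of the IPHLPAPI.DLL the loader picks.
 * The EXE's directory is searched before system32, so a copy placed
 * next to the EXE wins. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char path[MAX_PATH];
    HMODULE h;

    h = LoadLibraryA("IPHLPAPI.DLL");
    if (h != NULL && GetModuleFileNameA(h, path, MAX_PATH) > 0)
        printf("IPHLPAPI.DLL loaded from: %s\n", path);
    else
        printf("failed to load IPHLPAPI.DLL\n");
    return 0;
}
```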
