Solving the Problem of the Excel Process Not Terminating After Calling Excel from .NET

This post discusses the problem of the Excel process not terminating after Excel is invoked from .NET. It goes through the fixes commonly found online, such as forcing garbage collection, releasing the COM object reference, and using a local variable, most of which turn out not to work. What finally works, in a WinForms program, is calling GC.Collect() outside the method that uses Excel; under ASP.NET even that fails.

The problem: the Excel process does not terminate after Excel is called from .NET

Excel.Application myExcel = new Excel.Application();
...
myExcel.Quit();

The fix most often suggested online is to force a garbage collection right after myExcel.Quit():

GC.Collect();
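
One detail worth adding: GC.Collect() alone often cannot release the RCW (runtime callable wrapper) around the COM object, because the wrapper's finalizer has not run yet. A commonly recommended variant, shown here as a sketch rather than a guaranteed fix, pairs collection with finalization:

myExcel.Quit();
GC.Collect();                  // queue the dead RCW for finalization
GC.WaitForPendingFinalizers(); // let the finalizer release its COM reference
GC.Collect();                  // reclaim the finalized RCW itself
GC.WaitForPendingFinalizers();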

Neither of these worked for me. Another suggestion is to release the reference to the COM object (myExcel) until its reference count drops to zero:

// ReleaseComObject returns the remaining COM reference count;
// keep releasing until it reaches zero.
for (int i = 1; i > 0; i = System.Runtime.InteropServices.Marshal.ReleaseComObject(myExcel))
{
}
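
Assuming .NET 2.0 or later, the same effect can be had in a single call with Marshal.FinalReleaseComObject, which drops the RCW's reference count straight to zero:

System.Runtime.InteropServices.Marshal.FinalReleaseComObject(myExcel);
myExcel = null; // drop the managed reference as well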

Still others suggest making myExcel a local variable so that the garbage collector cleans it up once the method returns. That does not seem to work either (not even when the method ends with a forced collection).

The garbage collector seems to be doing nothing, and the reason lies in how it tracks object lifetimes: Microsoft's .NET janitor does not sweep the floor it is standing on. Inside the method that created myExcel, the JIT can still treat the local variable as a live GC root, so a GC.Collect() issued in the same scope cannot reclaim the wrapper. Call GC.Collect() outside the method that works with Excel and everything is fine: the Excel process vanishes from the process list right after GC.Collect().

void DoSomething()
{
    ...
    HandleExcel();
    GC.Collect();
}

void HandleExcel()
{
    Excel.Application myExcel = new Excel.Application();
    ...
    myExcel.Quit();
}

This approach works in a WinForms program, but under ASP.NET the Excel process still lingers (this appears to be a problem on Microsoft's side).
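
For the ASP.NET case, a last-resort workaround that is often cited (a sketch I have not verified here, not part of the original fix) is to resolve the process id from Excel's window handle and kill the process outright; Application.Hwnd and the Win32 GetWindowThreadProcessId call are all that is needed:

[System.Runtime.InteropServices.DllImport("user32.dll")]
static extern uint GetWindowThreadProcessId(IntPtr hWnd, out int processId);

void QuitAndKill(Excel.Application myExcel)
{
    int pid;
    GetWindowThreadProcessId(new IntPtr(myExcel.Hwnd), out pid); // map window handle to process id
    myExcel.Quit();
    try
    {
        System.Diagnostics.Process.GetProcessById(pid).Kill();   // force the process to exit
    }
    catch (ArgumentException)
    {
        // the process already exited on its own -- nothing to do
    }
}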
