Usage of NOT EXISTS, NOT IN, and MINUS

This article looks at using the NOT IN and MINUS operators to tune SQL query performance. Through examples it shows how to avoid NOT EXISTS and thereby speed up execution, focusing on scenarios where these operators can cut query time significantly.

① NOT EXISTS is usually slow, because the correlated subquery may be re-executed for every candidate row of A

UPDATE A
SET    "DATE" = sysdate   -- "DATE" is quoted because DATE is a reserved word in Oracle
WHERE  LINK_TAISHOGAI_FG = '0'
AND    NOT EXISTS
       (SELECT 1
        FROM   B
        WHERE  A.ID1 = B.ID1
        AND    A.ID2 = B.ID2
        AND    A.ID3 = B.ID3
        AND    B.ID4 IN ('A', 'B', 'C'));
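Before rewriting, it is worth confirming that the correlated subquery really is the bottleneck. A minimal sketch using Oracle's EXPLAIN PLAN and DBMS_XPLAN, reusing the hypothetical A/B tables above:

-- Capture the plan for the NOT EXISTS version
EXPLAIN PLAN FOR
UPDATE A
SET    "DATE" = sysdate
WHERE  LINK_TAISHOGAI_FG = '0'
AND    NOT EXISTS
       (SELECT 1
        FROM   B
        WHERE  A.ID1 = B.ID1
        AND    A.ID2 = B.ID2
        AND    A.ID3 = B.ID3
        AND    B.ID4 IN ('A', 'B', 'C'));

-- A FILTER step that drives the subquery once per row is the typical slow pattern
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);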
② The same update rewritten with NOT IN

UPDATE A
SET    "DATE" = sysdate
WHERE  LINK_TAISHOGAI_FG = '0'
AND    (A.ID1, A.ID2, A.ID3) NOT IN
       -- the subquery is no longer correlated to A; NOT IN compares whole key tuples
       (SELECT B.ID1, B.ID2, B.ID3
        FROM   B
        WHERE  B.ID4 IN ('A', 'B', 'C'));
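One caveat: NOT IN and NOT EXISTS are not equivalent when the subquery can return NULLs. If any of B.ID1, B.ID2, or B.ID3 is NULL, the NOT IN predicate matches no rows at all. A guarded variant, assuming the key columns are nullable:

UPDATE A
SET    "DATE" = sysdate
WHERE  LINK_TAISHOGAI_FG = '0'
AND    (A.ID1, A.ID2, A.ID3) NOT IN
       (SELECT B.ID1, B.ID2, B.ID3
        FROM   B
        WHERE  B.ID4 IN ('A', 'B', 'C')
        -- exclude NULL keys so NOT IN behaves like NOT EXISTS
        AND    B.ID1 IS NOT NULL
        AND    B.ID2 IS NOT NULL
        AND    B.ID3 IS NOT NULL);

If the key columns are declared NOT NULL, the guards can be omitted.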

③ Adding MINUS can shorten the run time further

If a key of table A exists in neither B nor C, update one of A's columns:

UPDATE A
SET    "DATE" = sysdate
WHERE  LINK_TAISHOGAI_FG = '0'
AND    (A.ID1, A.ID2, A.ID3) IN
       (SELECT A.ID1, A.ID2, A.ID3
        FROM   A
        WHERE  LINK_TAISHOGAI_FG = '0'
        MINUS
        SELECT temp.ID1, temp.ID2, temp.ID3
        FROM   (SELECT B.ID1, B.ID2, B.ID3
                FROM   B
                WHERE  B.ID4 IN ('A', 'B', 'C')
                UNION ALL
                SELECT C.ID1, C.ID2, C.ID3
                FROM   C
                WHERE  C.ID4 IN ('AA', 'BB', 'CC')) temp);  -- alias so temp.ID1 etc. resolve
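To try the three statements side by side, a throwaway schema like the following can be used. All names are illustrative only and simply mirror the examples above:

-- Hypothetical test schema; "DATE" is quoted because it is reserved in Oracle
CREATE TABLE A (
    ID1               VARCHAR2(10),
    ID2               VARCHAR2(10),
    ID3               VARCHAR2(10),
    LINK_TAISHOGAI_FG CHAR(1),
    "DATE"            DATE
);

CREATE TABLE B (
    ID1 VARCHAR2(10),
    ID2 VARCHAR2(10),
    ID3 VARCHAR2(10),
    ID4 VARCHAR2(10)
);

CREATE TABLE C (
    ID1 VARCHAR2(10),
    ID2 VARCHAR2(10),
    ID3 VARCHAR2(10),
    ID4 VARCHAR2(10)
);

INSERT INTO A VALUES ('1', '1', '1', '0', NULL);
INSERT INTO B VALUES ('1', '1', '1', 'A');   -- matches A's key, so that row is not updated
INSERT INTO C VALUES ('2', '2', '2', 'AA');  -- no match for A's key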
