The Troublesome Handling of Chinese Filenames in Download Dialogs

This post looks at the mojibake problems different browsers produce when handling Chinese filenames, and offers a general-purpose workaround: detect the browser type and apply a different encoding for each, so the filename displays correctly everywhere.

Because browsers differ in how they process HTTP headers, Chinese filenames in the download dialog are a pain. There is no single uniform method, and you shouldn't assume that the correct Chinese you see now will still be correct in another browser, or even another version of the same one.

In most cases, emitting the string as UTF-8 by default works:

String str = "这是一个UTF-8格式编码的中文字符串.dat";

If the string is not UTF-8, converting it to a UTF-8 string works most of the time.

But with the very same UTF-8 string, different versions of IE somehow render different results. Even though the Content-Type already specifies charset=UTF-8, and inspecting from the client side confirms the page encoding is UTF-8, some IE versions show garbage while others are fine.

For IE, url-encoding everything works in every version. This is the solution you'll find all over the web, but...

IE6 caps the filename at roughly 150 characters. One Chinese character is 3 bytes in UTF-8, which URL-encodes to 9 bytes, so under IE6 you get at most about 17 Chinese characters. That kills this method.
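To see the expansion concretely, here is a hypothetical one-liner (the ~150-character cap itself is an empirical IE6 figure):

// One Chinese character is 3 UTF-8 bytes, and each byte URL-encodes to 3 characters:
String enc = java.net.URLEncoder.encode("中", "UTF-8");  // "%E4%B8%AD", 9 characters
// 150 / 9 ≈ 16.7, hence the ceiling of about 17 Chinese characters under IE6.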

So, out comes the anti-mojibake trick I used for IE downloads eight years ago. Back then it never garbled on any version up through IE6; testing it today, IE7, IE8, and the IE9 Beta all stay clean too, and the 17-character limit doesn't apply. The method: encode the string to byte[] as GB18030, then treat each byte as a char and UTF-8-encode it:

import java.io.UnsupportedEncodingException;

public static String encodeFilename(String s) throws UnsupportedEncodingException {
    // Re-read the string as GB18030 bytes, then hand-roll a UTF-8 encoding
    // of each byte value (0x00-0xFF) as though it were a char.
    byte[] src = s.getBytes("GB18030");
    byte[] buf = new byte[src.length * 3];
    int pos = 0;
    for (int i = 0; i < src.length; i++) {
        char c = (char) (src[i] & 0xFF);  // mask to the unsigned byte value 0x00-0xFF
        if (c <= 0x007F && c != 0) {
            buf[pos++] = (byte) c;                              // 1-byte sequence (ASCII)
        } else if (c > 0x07FF) {                                // 3-byte sequence (never hit for single bytes)
            buf[pos + 2] = (byte) (0x80 | ( c        & 0x3F));
            buf[pos + 1] = (byte) (0x80 | ((c >> 6)  & 0x3F));
            buf[pos]     = (byte) (0xE0 | ((c >> 12) & 0x0F));
            pos += 3;
        } else {                                                // 2-byte sequence (0x80-0xFF)
            buf[pos + 1] = (byte) (0x80 | ( c       & 0x3F));
            buf[pos]     = (byte) (0xC0 | ((c >> 6) & 0x1F));
            pos += 2;
        }
    }
    return new String(buf, 0, pos, "UTF-8");
}

Absolutely bulletproof...
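Using it is then just the following (a minimal sketch, assuming the fragment above is wrapped as the encodeFilename method shown):

String name = encodeFilename("这是一个UTF-8格式编码的中文字符串.dat");
response.setHeader("Content-Disposition", "attachment;filename=" + name);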

Firefox is another story: url-encoding definitely does not work there. Emitting UTF-8 directly is correct in basically every version I've tested.

But...

And then the servlet containers behave differently too. Because response.setCharacterEncoding only applies to the entity body, not to the header fields, headers have to be handled by hand.

Take Tomcat, whose default is ISO8859_1. response.setCharacterEncoding lets you choose the charset used for the entity body, but

response.setHeader("Content-Disposition", "attachment;filename=" + str);

gives you no control over which charset str is written with. The only option is the reverse conversion, UTF-8 back into ISO8859_1:

str = new String(str.getBytes("UTF-8"), "ISO8859_1");
response.setHeader("Content-Disposition", "attachment;filename=" + str);

After Tomcat writes this out, most browsers receive a byte[] that is in UTF-8 form and handle it correctly (IE still has its version quirks), but now the scheme is coupled to the container.

If you'd rather not couple Firefox to the container, you can base64-encode for it, just as you url-encode for IE.
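A minimal sketch of that base64 route for Firefox, which understands RFC 2047 encoded-words in the filename (Java 8's java.util.Base64 is used here purely for self-containedness; any Base64 codec does the same job):

String b64 = java.util.Base64.getEncoder().encodeToString(str.getBytes("UTF-8"));
response.setHeader("Content-Disposition",
        "attachment;filename=\"=?UTF-8?B?" + b64 + "?=\"");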

Oh for crying out loud, isn't this just messing with people?

So, to be as general as possible, first detect the browser:

String agent = request.getHeader("User-Agent");
if (agent == null) { /* no User-Agent header: pick a fallback */ }
agent = agent.toUpperCase();
if (agent.indexOf("MSIE") != -1) str = URLEncoder.encode(str, "UTF-8");
else if (agent.indexOf("FIREFOX") != -1) str = Base64Encoder.encode(str);
else {
    ............................
}

What about the remaining browsers? Testing shows that Opera and Chrome can only interpret UTF-8 Chinese names. So the

else {
    ............................
}

above becomes:

else if (serverType.equals("tomcat")) {
    str = new String(str.getBytes("UTF-8"), "ISO8859_1");
}
else {
    // Whether another container needs a conversion depends on which charset
    // it uses for headers by default. Resin, for example, needs no encoding
    // at all: anything already in UTF-8 form comes out correct.
}
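Putting the branches together, a sketch of the whole routine under the assumptions above (serverType is whatever string your deployment uses to identify the container, and the encodeDownloadFilename name is mine, not a standard API):

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;

public static String encodeDownloadFilename(HttpServletRequest request,
                                            String str, String serverType)
        throws UnsupportedEncodingException {
    String agent = request.getHeader("User-Agent");
    if (agent == null) return URLEncoder.encode(str, "UTF-8"); // no UA header: safest fallback
    agent = agent.toUpperCase();
    if (agent.indexOf("MSIE") != -1) {
        return URLEncoder.encode(str, "UTF-8");                // IE: url-encode (mind the IE6 cap)
    } else if (agent.indexOf("FIREFOX") != -1) {
        return "=?UTF-8?B?"                                    // Firefox: RFC 2047 base64
                + Base64.getEncoder().encodeToString(str.getBytes("UTF-8")) + "?=";
    } else if ("tomcat".equals(serverType)) {
        return new String(str.getBytes("UTF-8"), "ISO8859_1"); // Tomcat writes headers as ISO8859_1
    } else {
        return str;                                            // e.g. Resin passes UTF-8 through untouched
    }
}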

The server-side incompatibility may not exist on other platforms, but the client-side incompatibility will bite on any platform: different browsers have to be handled differently.

result[0]["sentences"]: segments.append({ "start": seg["start"], "end": seg["end"], "text": seg["text"], "spk_id": seg.get("spk_id", "0") }) full_text += seg["text"] + " " except: continue return segments, full_text.strip() def _generate_labeled_text(self, all_segments: List[Dict], agent_segments: List[Dict], customer_segments: List[Dict]) -> str: agent_spk_id = agent_segments[0]["spk_id"] if agent_segments else None customer_spk_id = customer_segments[0]["spk_id"] if customer_segments else None labeled_text = [] for seg in all_segments: if seg["spk_id"] == agent_spk_id: speaker = "客服" elif seg["spk_id"] == customer_spk_id: speaker = "客户" else: speaker = f"说话人{seg['spk_id']}" labeled_text.append(f"[{speaker}]: {seg['text']}") return "\n".join(labeled_text) def _cleanup_temp_files(self, paths: List[str]): def safe_remove(path): if os.path.exists(path): try: os.remove(path) except: pass for path in paths: safe_remove(path) now = time.time() for file in os.listdir(self.temp_dir): file_path = os.path.join(self.temp_dir, file) if os.path.isfile(file_path) and (now - os.path.getmtime(file_path)) > 3600: safe_remove(file_path) def _format_duration(self, seconds: float) -> str: minutes, seconds = divmod(int(seconds), 60) hours, minutes = divmod(minutes, 60) return f"{hours:02d}:{minutes:02d}:{seconds:02d}" def _cleanup_resources(self): gc.collect() if torch.cuda.is_available(): torch.cuda.empty_cache() def stop(self): self.is_running = False # ====================== 模型加载线程 ====================== class ModelLoadThread(QThread): progress_updated = pyqtSignal(int, str) finished = pyqtSignal(bool, str) def run(self): try: config = ConfigManager() # 先检查模型路径是否有效 paths_valid, errors = config.check_model_paths() if not paths_valid: self.finished.emit(False, f"模型路径无效:\n{chr(10).join(errors)}") return self.progress_updated.emit(20, "加载语音识别模型...") ModelLoader._load_asr_model(config.get("model_paths")["asr"]) self.progress_updated.emit(60, "加载情感分析模型...") ModelLoader._load_sentiment_model(config.get("model_paths")["sentiment"]) self.progress_updated.emit(100, "模型加载完成") self.finished.emit(True, "模型加载成功") except Exception as e: self.finished.emit(False, f"模型加载失败: {str(e)}") # ====================== GUI主界面(简化版) ====================== class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("贵州方言客服质检系统") self.setGeometry(100, 100, 1200, 800) self.setup_ui() self.setup_menu() self.analysis_thread = None self.model_load_thread = None self.temp_dir = "temp_wav" os.makedirs(self.temp_dir, exist_ok=True) self.model_loaded = False # 初始化时检查依赖和模型配置 self.check_initial_setup() def setup_ui(self): main_widget = QWidget() main_layout = QVBoxLayout() main_widget.setLayout(main_layout) self.setCentralWidget(main_widget) toolbar = QToolBar("主工具栏") self.addToolBar(toolbar) actions = [ ("添加文件", "icons/add.png", self.add_files), ("开始分析", "icons/start.png", self.start_analysis), ("停止分析", "icons/stop.png", self.stop_analysis), ("设置", "icons/settings.png", self.open_settings) ] for name, icon, func in actions: action = QAction(QIcon(icon), name, self) action.triggered.connect(func) toolbar.addAction(action) splitter = QSplitter(Qt.Horizontal) main_layout.addWidget(splitter) left_widget = QWidget() left_layout = QVBoxLayout() left_widget.setLayout(left_layout) left_layout.addWidget(QLabel("待分析文件列表")) self.file_list = QListWidget() self.file_list.setSelectionMode(QListWidget.ExtendedSelection) left_layout.addWidget(self.file_list) right_widget = QWidget() right_layout = QVBoxLayout() 
right_widget.setLayout(right_layout) right_layout.addWidget(QLabel("分析进度")) self.progress_bar = QProgressBar() self.progress_bar.setRange(0, 100) right_layout.addWidget(self.progress_bar) self.current_file_label = QLabel("当前文件: 无") right_layout.addWidget(self.current_file_label) self.tab_widget = QTabWidget() right_layout.addWidget(self.tab_widget, 1) text_tab = QWidget() text_layout = QVBoxLayout() text_tab.setLayout(text_layout) self.text_result = QTextEdit() self.text_result.setReadOnly(True) text_layout.addWidget(self.text_result) self.tab_widget.addTab(text_tab, "文本结果") detail_tab = QWidget() detail_layout = QVBoxLayout() detail_tab.setLayout(detail_layout) self.result_table = QTableWidget() self.result_table.setColumnCount(10) self.result_table.setHorizontalHeaderLabels([ "文件名", "时长", "语速", "音量稳定性", "客服情感", "客户情感", "开场白", "结束语", "禁用词", "问题解决" ]) self.result_table.horizontalHeader().setSectionResizeMode(QHeaderView.Stretch) detail_layout.addWidget(self.result_table) self.tab_widget.addTab(detail_tab, "详细结果") splitter.addWidget(left_widget) splitter.addWidget(right_widget) splitter.setSizes([300, 900]) def setup_menu(self): menu_bar = self.menuBar() file_menu = menu_bar.addMenu("文件") file_actions = [ ("添加文件", self.add_files), ("导出结果", self.export_results), ("退出", self.close) ] for name, func in file_actions: action = QAction(name, self) action.triggered.connect(func) file_menu.addAction(action) analysis_menu = menu_bar.addMenu("分析") analysis_actions = [ ("开始分析", self.start_analysis), ("停止分析", self.stop_analysis) ] for name, func in analysis_actions: action = QAction(name, self) action.triggered.connect(func) analysis_menu.addAction(action) settings_menu = menu_bar.addMenu("设置") settings_actions = [ ("系统配置", self.open_settings), ("加载模型", self.load_models) ] for name, func in settings_actions: action = QAction(name, self) action.triggered.connect(func) settings_menu.addAction(action) def check_initial_setup(self): """检查初始设置,包括依赖和模型路径""" # 检查ffmpeg ffmpeg_available, ffmpeg_msg = check_ffmpeg_available() if not ffmpeg_available: QMessageBox.critical( self, "音频处理依赖缺失", f"无法处理音频: {ffmpeg_msg}\n\n安装ffmpeg并确保其在系统PATH中。\nWindows用户可从https://ffmpeg.org/download.html下载并添加到环境变量。" ) # 检查模型路径 config = ConfigManager() paths_valid, errors = config.check_model_paths() if not paths_valid: msg = QMessageBox() msg.setIcon(QMessageBox.Warning) msg.setText("模型路径配置不正确") msg.setInformativeText(f"检测到以下问题:\n{chr(10).join(errors)}\n\n是否现在进行配置?") msg.setWindowTitle("配置模型路径") msg.setStandardButtons(QMessageBox.Yes | QMessageBox.No) if msg.exec_() == QMessageBox.Yes: self.open_settings() def add_files(self): files, _ = QFileDialog.getOpenFileNames( self, "选择音频文件", "", "音频文件 (*.mp3 *.wav *.amr *.m4a)" ) for file in files: self.file_list.addItem(file) def start_analysis(self): # 先检查ffmpeg是否可用 ffmpeg_available, ffmpeg_msg = check_ffmpeg_available() if not ffmpeg_available: QMessageBox.critical( self, "音频处理依赖缺失", f"无法开始分析: {ffmpeg_msg}\n\n安装ffmpeg并确保其在系统PATH中。\nWindows用户可从https://ffmpeg.org/download.html下载并添加到环境变量。" ) return if self.file_list.count() == 0: QMessageBox.warning(self, "警告", "先添加要分析的音频文件") return # 检查模型路径 config = ConfigManager() paths_valid, errors = config.check_model_paths() if not paths_valid: msg = QMessageBox() msg.setIcon(QMessageBox.Warning) msg.setText("模型路径配置不正确") msg.setInformativeText(f"检测到以下问题:\n{chr(10).join(errors)}\n\n是否现在进行配置?") msg.setWindowTitle("配置模型路径") msg.setStandardButtons(QMessageBox.Yes | QMessageBox.No) if msg.exec_() == QMessageBox.Yes: self.open_settings() # 再次检查配置 paths_valid, 
_ = config.check_model_paths() if not paths_valid: return else: return if not self.model_loaded: # 询问是否加载模型 reply = QMessageBox.question( self, "模型未加载", "模型尚未加载,是否立即加载?", QMessageBox.Yes | QMessageBox.No, QMessageBox.Yes ) if reply == QMessageBox.Yes: self.load_models() # 等待模型加载完成 return else: return audio_paths = [self.file_list.item(i).text() for i in range(self.file_list.count())] self.text_result.clear() self.result_table.setRowCount(0) self.analysis_thread = AnalysisThread(audio_paths, self.temp_dir) self.analysis_thread.progress_updated.connect(self.update_progress) self.analysis_thread.result_ready.connect(self.handle_result) self.analysis_thread.finished_all.connect(self.analysis_finished) self.analysis_thread.error_occurred.connect(self.show_error) self.analysis_thread.memory_warning.connect(self.handle_memory_warning) self.analysis_thread.start() def stop_analysis(self): if self.analysis_thread and self.analysis_thread.isRunning(): self.analysis_thread.stop() self.analysis_thread.wait() QMessageBox.information(self, "信息", "分析已停止") def load_models(self): # 先检查模型路径 config = ConfigManager() paths_valid, errors = config.check_model_paths() if not paths_valid: msg = QMessageBox() msg.setIcon(QMessageBox.Warning) msg.setText("模型路径配置不正确") msg.setInformativeText(f"检测到以下问题:\n{chr(10).join(errors)}\n\n是否现在进行配置?") msg.setWindowTitle("配置模型路径") msg.setStandardButtons(QMessageBox.Yes | QMessageBox.No) if msg.exec_() == QMessageBox.Yes: self.open_settings() # 再次检查配置 paths_valid, _ = config.check_model_paths() if not paths_valid: return else: return if self.model_load_thread and self.model_load_thread.isRunning(): return self.model_load_thread = ModelLoadThread() self.model_load_thread.progress_updated.connect(lambda value, _: self.progress_bar.setValue(value)) self.model_load_thread.finished.connect(self.handle_model_load_result) self.model_load_thread.start() def update_progress(self, progress: int, message: str, current_file: str): self.progress_bar.setValue(progress) self.current_file_label.setText(f"当前文件: {current_file}") def handle_result(self, result: Dict): if result["status"] == "success": self.text_result.append( f"文件: {result['file_name']}\n状态: {result['status']}\n时长: {result['duration_str']}") self.text_result.append( f"语速: {result['syllable_rate']} 音节/秒\n音量稳定性: {result['volume_stability']}") self.text_result.append( f"客服情感: 负面({result['agent_negative']:.2%}) 中性({result['agent_neutral']:.2%}) 正面({result['agent_positive']:.2%})") self.text_result.append(f"客服情绪: {result['agent_emotions']}") self.text_result.append( f"客户情感: 负面({result['customer_negative']:.2%}) 中性({result['customer_neutral']:.2%}) 正面({result['customer_positive']:.2%})") self.text_result.append(f"客户情绪: {result['customer_emotions']}") self.text_result.append( f"开场白: {'有' if result['opening_found'] else '无'}\n结束语: {'有' if result['closing_found'] else '无'}") self.text_result.append( f"禁用词: {result['forbidden_words']}\n问题解决: {'是' if result['issue_resolved'] else '否'}") self.text_result.append("\n=== 对话文本 ===\n" + result["asr_text"] + "\n" + "=" * 50 + "\n") row = self.result_table.rowCount() self.result_table.insertRow(row) items = [ result["file_name"], result["duration_str"], str(result["syllable_rate"]), str(result["volume_stability"]), f"负:{result['agent_negative']:.2f} 中:{result['agent_neutral']:.2f} 正:{result['agent_positive']:.2f}", f"负:{result['customer_negative']:.2f} 中:{result['customer_neutral']:.2f} 正:{result['customer_positive']:.2f}", "是" if result["opening_found"] else "否", "是" if result["closing_found"] else 
"否", result["forbidden_words"], "是" if result["issue_resolved"] else "否" ] for col, text in enumerate(items): item = QTableWidgetItem(text) if col in [6, 7] and text == "否": item.setBackground(QColor(255, 200, 200)) if col == 8 and text != "无": item.setBackground(QColor(255, 200, 200)) if col == 9 and text == "否": item.setBackground(QColor(255, 200, 200)) self.result_table.setItem(row, col, item) elif result["status"] == "error": self.text_result.append(f"文件: {result['file_name']}\n状态: 错误\n原因: {result['error']}\n" + "=" * 50 + "\n") def analysis_finished(self): QMessageBox.information(self, "完成", "所有音频分析完成") self.progress_bar.setValue(100) def show_error(self, title: str, message: str): QMessageBox.critical(self, title, message) def handle_memory_warning(self): QMessageBox.warning(self, "内存警告", "内存使用过高,分析已停止") def handle_model_load_result(self, success: bool, message: str): if success: self.model_loaded = True QMessageBox.information(self, "成功", message) else: QMessageBox.critical(self, "错误", message) def open_settings(self): settings_dialog = QDialog(self) settings_dialog.setWindowTitle("系统设置") settings_dialog.setFixedSize(500, 300) layout = QVBoxLayout() config = ConfigManager().get("model_paths") settings = [ ("ASR模型路径:", config["asr"], self.browse_directory), ("情感模型路径:", config["sentiment"], self.browse_directory) ] for label, value, func in settings: h_layout = QHBoxLayout() h_layout.addWidget(QLabel(label)) line_edit = QLineEdit(value) browse_btn = QPushButton("浏览...") browse_btn.clicked.connect(lambda _, le=line_edit: func(le)) h_layout.addWidget(line_edit) h_layout.addWidget(browse_btn) layout.addLayout(h_layout) spin_settings = [ ("最大并发任务:", "max_concurrent", 1, 8), ("最大音频时长(秒):", "max_audio_duration", 60, 86400) ] for label, key, min_val, max_val in spin_settings: h_layout = QHBoxLayout() h_layout.addWidget(QLabel(label)) spin_box = QSpinBox() spin_box.setRange(min_val, max_val) spin_box.setValue(ConfigManager().get(key, min_val)) h_layout.addWidget(spin_box) layout.addLayout(h_layout) button_box = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel) button_box.accepted.connect(settings_dialog.accept) button_box.rejected.connect(settings_dialog.reject) layout.addWidget(button_box) settings_dialog.setLayout(layout) if settings_dialog.exec_() == QDialog.Accepted: # 保存模型路径配置 ConfigManager().set("model_paths", { "asr": layout.itemAt(0).layout().itemAt(1).widget().text(), "sentiment": layout.itemAt(1).layout().itemAt(1).widget().text() }) # 保存其他配置 ConfigManager().set("max_concurrent", layout.itemAt(2).layout().itemAt(1).widget().value()) ConfigManager().set("max_audio_duration", layout.itemAt(3).layout().itemAt(1).widget().value()) # 重新加载模型 if self.model_loaded: reply = QMessageBox.question( self, "配置已更新", "模型路径已更改,是否立即重新加载模型?", QMessageBox.Yes | QMessageBox.No, QMessageBox.Yes ) if reply == QMessageBox.Yes: self.load_models() def browse_directory(self, line_edit): path = QFileDialog.getExistingDirectory(self, "选择目录") if path: line_edit.setText(path) def export_results(self): if self.result_table.rowCount() == 0: QMessageBox.warning(self, "警告", "没有可导出的结果") return path, _ = QFileDialog.getSaveFileName(self, "保存结果", "", "CSV文件 (*.csv)") if not path: return try: with open(path, "w", encoding="utf-8") as f: headers = [self.result_table.horizontalHeaderItem(col).text() for col in range(self.result_table.columnCount())] f.write(",".join(headers) + "\n") for row in range(self.result_table.rowCount()): row_data = [self.result_table.item(row, col).text() for col in 
range(self.result_table.columnCount())] # 处理包含逗号的文本 row_data = [f'"{data}"' if ',' in data else data for data in row_data] f.write(",".join(row_data) + "\n") QMessageBox.information(self, "成功", f"结果已导出到: {path}") except Exception as e: QMessageBox.critical(self, "错误", f"导出失败: {str(e)}") def closeEvent(self, event): if self.analysis_thread and self.analysis_thread.isRunning(): self.analysis_thread.stop() self.analysis_thread.wait() try: for file in os.listdir(self.temp_dir): file_path = os.path.join(self.temp_dir, file) if os.path.isfile(file_path): for _ in range(3): try: os.remove(file_path); break except: time.sleep(0.1) os.rmdir(self.temp_dir) except: pass event.accept() # ====================== 程序入口 ====================== if __name__ == "__main__": torch.set_num_threads(4) app = QApplication(sys.argv) app.setStyle('Fusion') window = MainWindow() window.show() sys.exit(app.exec_())
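Note: the excerpt above also calls several helpers (check_ffmpeg_available, ConfigManager, AudioProcessor, DialectProcessor, ModelLoader, ResourceMonitor) whose definitions sit in the earlier part of the file and are not repeated here. Purely as a reference point, and only if check_ffmpeg_available were actually missing, a minimal sketch that probes the ffmpeg binary on PATH might look like this (a hypothetical implementation, not the project's actual one):

import shutil
import subprocess
from typing import Tuple

def check_ffmpeg_available() -> Tuple[bool, str]:
    """Hypothetical sketch: report whether the ffmpeg CLI is usable."""
    # shutil.which returns None when the executable is not on PATH
    if shutil.which("ffmpeg") is None:
        return False, "ffmpeg executable not found on PATH"
    try:
        proc = subprocess.run(["ffmpeg", "-version"],
                              capture_output=True, text=True, timeout=10)
    except Exception as e:
        return False, f"failed to run ffmpeg: {e}"
    if proc.returncode != 0:
        return False, f"ffmpeg exited with code {proc.returncode}"
    # The first line of `ffmpeg -version` identifies the build
    return True, proc.stdout.splitlines()[0] if proc.stdout else "ffmpeg available"

Checking shutil.which first avoids spawning a process when the binary is plainly absent, and the version probe catches broken installs that exist on PATH but cannot run.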