MiniCPM-o 2.6 is the latest and most capable model in the MiniCPM-o series. The model is built in an end-to-end fashion based on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B, with 8B parameters in total.
Audio understanding and speech conversation results.
Audio understanding:
| Model | Size | AISHELL-1 (CER↓) | Fleurs zh (CER↓) | WenetSpeech test-net (CER↓) | LibriSpeech test-clean (WER↓) | GigaSpeech (WER↓) | TED-LIUM (WER↓) | CoVoST en2zh (BLEU↑) | CoVoST zh2en (BLEU↑) | MELD emotion (ACC↑) |
|---|---|---|---|---|---|---|---|---|---|---|
| Proprietary | | | | | | | | | | |
| GPT-4o-Realtime | - | 7.3* | 5.4* | 28.9* | 2.6* | 12.9* | 4.8* | 37.1* | 15.7* | 33.2* |
| Gemini 1.5 Pro | - | 4.5* | 5.9* | 14.3* | 2.9* | 10.6* | 3.0* | 47.3* | 22.6* | 48.4* |
| Open-Source | | | | | | | | | | |
| Qwen2-Audio-7B | 8B | - | 7.5 | - | 1.6 | - | - | 45.2 | 24.4 | 55.3 |
| Qwen2-Audio-7B-Instruct | 8B | 2.6* | 6.9* | 10.3* | 3.1* | 9.7* | 5.9* | 39.5* | 22.9* | 17.4* |
| GLM-4-Voice-Base | 9B | 2.5 | - | - | 2.8 | - | - | - | - | - |
| MiniCPM-o 2.6 | 8B | 1.6 | 4.4 | 6.9 | 1.7 | 8.7 | 3.0 | 48.2 | 27.2 | 52.4 |
- Results marked with * are from our own evaluation of the officially released checkpoints.
Speech generation:
| Model | Size | Speech Llama Q. (ACC↑) | Speech Web Q. (ACC↑) | Speech Trivia QA (ACC↑) | Speech AlpacaEval (G-Eval, 10-point↑) | AudioArena Semantic ELO↑ | AudioArena Acoustic ELO↑ | AudioArena Overall ELO↑ | UTMOS↑ | ASR-WER↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| Proprietary | | | | | | | | | | |
| GPT-4o-Realtime | - | 71.7 | 51.6 | 69.7 | 7.4 | 1157 | 1203 | 1200 | 4.2 | 2.3 |
| Open-Source | | | | | | | | | | |
| GLM-4-Voice | 9B | 50.0 | 32.0 | 36.4 | 5.1 | 999 | 1147 | 1035 | 4.1 | 11.7 |
| Llama-Omni | 8B | 45.3 | 22.9 | 10.7 | 3.9 | 960 | 878 | 897 | 3.2 | 24.3 |
| Moshi | 7B | 43.7 | 23.8 | 16.7 | 2.4 | 871 | 808 | 875 | 2.8 | 8.2 |
| Mini-Omni | 1B | 22.0 | 12.8 | 6.9 | 2.5 | 926 | 803 | 865 | 3.4 | 10.0 |
| MiniCPM-o 2.6 | 8B | 61.0 | 40.0 | 40.2 | 5.1 | 1088 | 1163 | 1131 | 4.2 | 9.8 |
All results are from AudioEvals; for the evaluation methodology and more details, please refer to UltraEval-Audio.
End-to-end voice cloning:
| Model | SIMO↑ (Seed-TTS test-zh) | SIMO↑ (Seed-TTS test-en) |
|---|---|---|
| F5-TTS | 76 | 67 |
| CosyVoice | 75 | 64 |
| FireRedTTS | 63 | 46 |
| MiniCPM-o 2.6 | 57 | 47 |
Multimodal live streaming results.
Multimodal live streaming: results on StreamingBench
| Model | Size | Real-Time Video Understanding | Omni-Source Understanding | Contextual Understanding | Overall |
|---|---|---|---|---|---|
| Proprietary | | | | | |
| Gemini 1.5 Pro | - | 77.4 | 67.8 | 51.1 | 70.3 |
| GPT-4o-202408 | - | 74.5 | 51.0 | 48.0 | 64.1 |
| Claude-3.5-Sonnet | - | 74.0 | 41.4 | 37.8 | 59.7 |
| Open-Source | | | | | |
| VILA-1.5 | 8B | 61.5 | 37.5 | 26.7 | 49.5 |
| LongVA | 7B | 63.1 | 35.9 | 30.2 | 50.7 |
| LLaVA-Next-Video-34B | 34B | 69.8 | 41.7 | 34.3 | 56.7 |
| Qwen2-VL-7B | 8B | 71.2 | 40.7 | 33.1 | 57.0 |
| InternVL2-8B | 8B | 70.1 | 42.7 | 34.1 | 57.0 |
| VITA-1.5 | 8B | 70.9 | 40.8 | 35.8 | 57.4 |
| LLaVA-OneVision-7B | 8B | 74.3 | 40.8 | 31.0 | 58.4 |
| InternLM-XC2.5-OL-7B | 8B | 75.4 | 46.2 | 33.6 | 60.8 |
| MiniCPM-V 2.6 | 8B | 72.4 | 40.2 | 33.4 | 57.7 |
| MiniCPM-o 2.6 | 8B | 79.9 | 53.4 | 38.5 | 66.0 |
Examples
Code
Inference with Huggingface transformers on NVIDIA GPUs. Please make sure `transformers==4.44.2` is installed, as other versions may have compatibility issues; we are investigating this. Requirements tested on Python 3.10:
```
Pillow==10.1.0
torch==2.3.1
torchaudio==2.3.1
torchvision==0.18.1
transformers==4.44.2
librosa==0.9.0
soundfile==0.12.1
vector-quantize-pytorch==1.18.5
vocos==0.1.0
decord
moviepy
```
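Since other transformers releases may be incompatible, a small runtime check (purely optional, not part of the official setup) can fail fast before the model is loaded:

```python
import transformers

# The model card pins transformers==4.44.2; other versions may have compatibility issues.
if transformers.__version__ != "4.44.2":
    raise RuntimeError(
        f"Expected transformers==4.44.2, found {transformers.__version__}"
    )
```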
Model initialization
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load the omni model by default; init_vision/init_audio/init_tts all default to True.
# To load the vision-only model, set init_audio=False and init_tts=False.
# To load the audio-only model, set init_vision=False.
model = AutoModel.from_pretrained(
    'openbmb/MiniCPM-o-2_6',
    trust_remote_code=True,
    attn_implementation='sdpa',  # sdpa or flash_attention_2
    torch_dtype=torch.bfloat16,
    init_vision=True,
    init_audio=True,
    init_tts=True
)
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True)

# Except in vision-only mode, the TTS processor and vocos also need to be initialized.
model.init_tts()
```
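A minimal vision-only usage sketch follows. The `chat` call signature and message format are assumed from the MiniCPM-V / MiniCPM-o model cards; the image path and question are placeholders:

```python
from PIL import Image

# Placeholder inputs: any local RGB image and any question work here.
image = Image.open('example.jpg').convert('RGB')
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': [image, question]}]

# chat() interface assumed from the MiniCPM-V / MiniCPM-o model cards.
answer = model.chat(msgs=msgs, tokenizer=tokenizer)
print(answer)
```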
If you are using an older version of PyTorch, you may encounter the error "weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'; in that case, convert the TTS module to float32.
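A one-line workaround sketch, assuming the TTS decoder attached by `init_tts()` is exposed as `model.tts`:

```python
# Assumption: the ChatTTS-based decoder initialized by init_tts() is exposed as model.tts.
model.tts.float()  # run the TTS module in float32 on older PyTorch versions
```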