LLaVA Family (Large Language and Vision Assistant): Official Resources
Project homepage || https://huggingface.co/liuhaotian
23.04 LLaVA-1.0 paper: Visual Instruction Tuning (Large Language and Vision Assistant)
23.06 LLaVA-Med (visual assistant for medical images): Training a Large Language-and-Vision Assistant for Biomedicine in One Day
23.10 LLaVA-1.5 paper: Improved Baselines with Visual Instruction Tuning
23.11 LLaVA-Plus (plugs in external model tools): LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills
24.01 LLaVA-1.6 blog post (paper not yet released): LLaVA-NeXT: Improved reasoning, OCR, and world knowledge
1. Prerequisite analysis: blog posts and papers
23.04 LLaVA-1.0: paper analysis, principles, local deployment (Part 1)
[Pretrained LLM used by LLaVA] 23.03 Vicuna: an open-source chatbot similar to GPT-4 (90%* ChatGPT Quality)
Referenced papers (optional reading)
22.02 BLIP (simple image caption generation): Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
23.06 BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
23.06 InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
23.08 Qwen-VL (Alibaba's vision-language model): A Frontier Large Vision-Language Model with Versatile Abilities
2. Overview of LLaVA-1.5
2.1 Architecture and improvements
The left side of the figure below shows the model architecture and training data volume of LLaVA-1.0; the right side shows the improved LLaVA-1.5.
- Architecturally, the vision feature extractor was switched from CLIP-ViT-L/14 (224x224 image input) to CLIP-ViT-L/14-336px (the input image is resized to 336x336), so each image yields more visual tokens, as shown in the sketch below.
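The following is a minimal sketch, not the official LLaVA code, illustrating the effect of that encoder swap using the public CLIP checkpoints on Hugging Face (openai/clip-vit-large-patch14 and openai/clip-vit-large-patch14-336), which are assumed here to correspond to the two encoders discussed above:

```python
# Minimal sketch (assumed setup, not the official LLaVA code): compare the
# 224px and 336px CLIP ViT-L/14 vision encoders to see how the higher
# resolution produces more visual patch tokens per image.
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

for name in ("openai/clip-vit-large-patch14",       # 224x224 input
             "openai/clip-vit-large-patch14-336"):  # 336x336 input
    processor = CLIPImageProcessor.from_pretrained(name)
    encoder = CLIPVisionModel.from_pretrained(name)
    image = Image.new("RGB", (640, 480))            # dummy stand-in image
    pixels = processor(images=image, return_tensors="pt")["pixel_values"]
    with torch.no_grad():
        out = encoder(pixels)
    patches = out.last_hidden_state[:, 1:, :]       # drop the [CLS] token
    print(name, tuple(pixels.shape), tuple(patches.shape))
    # With patch size 14: 224/14 = 16 -> 256 tokens; 336/14 = 24 -> 576 tokens
```

In an LLaVA-style pipeline these patch features are what the projector maps into the LLM's embedding space, so the 336px encoder more than doubles the number of visual tokens per image (256 vs. 576), trading compute for finer visual detail.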