Task requirements:
1. Environment Setup
# Install the required libraries
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia -y
# Install the other dependencies
pip install transformers==4.39.3
pip install streamlit==1.36.0
# Clone XTuner (v0.1.21) and install it in editable mode from inside the cloned repo
git clone -b v0.1.21 https://github.com/InternLM/XTuner /root/InternLM/code/XTuner
cd /root/InternLM/code/XTuner
pip install -e '.[deepspeed]'
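After the installation finishes, it is worth a quick sanity check that the pinned packages import cleanly. A minimal sketch (the expected version strings simply mirror the pins above):
# Sanity-check the freshly installed environment.
import torch
import transformers
import streamlit

print(torch.__version__)          # expect 2.1.2
print(torch.cuda.is_available())  # expect True on a CUDA 12.1 machine
print(transformers.__version__)   # expect 4.39.3
print(streamlit.__version__)      # expect 1.36.0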
2. Model Preparation
Link to the model:
# Create a directory to hold all fine-tuning assets; all subsequent steps take place in this path
mkdir -p /root/InternLM/XTuner
cd /root/InternLM/XTuner
mkdir -p Shanghai_AI_Laboratory
ln -s /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b Shanghai_AI_Laboratory/internlm2-chat-1_8b
After running the commands above, Shanghai_AI_Laboratory/internlm2-chat-1_8b becomes a symbolic link pointing to /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b. This means that accessing Shanghai_AI_Laboratory/internlm2-chat-1_8b actually reads the contents of the /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b directory. In this way we can use the existing model files for the subsequent fine-tuning steps without copying any data, which saves storage space and simplifies file management.
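To confirm the link was created correctly, a minimal check using only the Python standard library (the path is the one created above):
import os

link = "/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b"
print(os.path.islink(link))    # True if the symlink exists
print(os.path.realpath(link))  # should resolve to /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b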
Once the model files are in place, we can use the tree command to inspect the directory structure.
apt-get install -y tree
tree -l
├── Shanghai_AI_Laboratory
│   └── internlm2-chat-1_8b -> /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b
│       ├── README.md
│       ├── config.json
│       ├── configuration.json
│       ├── configuration_internlm2.py
│       ├── generation_config.json
│       ├── model-00001-of-00002.safetensors
│       ├── model-00002-of-00002.safetensors
│       ├── model.safetensors.index.json
│       ├── modeling_internlm2.py
│       ├── special_tokens_map.json
│       ├── tokenization_internlm2.py
│       ├── tokenization_internlm2_fast.py
│       ├── tokenizer.model
│       └── tokenizer_config.json
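Before moving on, it is worth confirming that the linked model actually loads. A minimal sketch: trust_remote_code=True is needed because InternLM2 ships its own tokenizer/model code, as the *_internlm2.py files in the listing above suggest.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b",
    trust_remote_code=True,  # loads tokenization_internlm2.py from the model dir
)
print(tokenizer("hello")["input_ids"])  # a short token-id list confirms the tokenizer works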
3. Start Fine-tuning
We can first use a web Demo to see how internlm2-chat-1_8b chats before fine-tuning.
First, we need to prepare a Streamlit script.
import copy
import warnings
from dataclasses import asdict, dataclass
from typing import Callable, List, Optional
import streamlit as st
import torch
from torch import nn
from transformers.generation.utils import (LogitsProcessorList,
                                           StoppingCriteriaList)
from transformers.utils import logging
from transformers import AutoTokenizer, AutoModelForCausalLM # isort: skip
logger = logging.get_logger(__name__)
model_name_or_path = "/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b"
@dataclass
class GenerationConfig:
    # this config is used for chat to provide more diversity
    max_length: int = 2048
    top_p: float = 0.75
    temperature: float = 0.1
    do_sample: bool = True
    repetition_penalty: float = 1.000
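# Note: in the full demo script these fields are typically unpacked with
# dataclasses.asdict() (imported above) and forwarded as keyword arguments to
# generate_interactive() below. top_p and temperature control nucleus
# sampling, and a repetition_penalty above 1.0 would discourage repeated tokens.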
@torch.inference_mode()
def generate_interactive(
    model,
    tokenizer,
    prompt,
    generation_config: Optional[GenerationConfig] = None,
    logits_processor: Optional[LogitsProcessorList] = None,
    stopping_criteria: Optional[StoppingCriteriaList] = None,
    prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor],
                                                List[int]]] = None,
    additional_eos_token_id: Optional[int] = None,
    **kwargs,
):
    inputs = tokenizer([prompt], padding=True, return_tensors='pt')
    input_length = len(inputs['input_ids'][0])
    for k, v in inputs.items():
        inputs[k] = v.cuda()  # move all input tensors to the GPU
    input_ids = inputs['input_ids']
    # take the sequence length (last dim), not the full shape tuple
    _, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]