1. Deployment
Web Demo Deployment
Run web_demo.py
streamlit run ~/Llama3-Tutorial/tools/internstudio_web_demo.py \
  ~/model/Meta-Llama-3-8B-Instruct
First Experience with Llama3
Open http://localhost:8501/ in a browser and chat with Llama3.
2. Fine-tuning with XTuner
Fine-tuning Training with XTuner
cd ~/Llama3-Tutorial
# Start training with deepspeed acceleration; takes about 24 minutes on an A100 with 40 GB of VRAM
xtuner train configs/assistant/llama3_8b_instruct_qlora_assistant.py --work-dir /root/llama3_pth
With an A100 24 GB GPU, the training run is still fairly quick, about 20 minutes.
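For intuition about what the QLoRA config used above is doing, here is a minimal illustrative sketch with transformers + peft. This is not the XTuner config itself; the model path and LoRA hyperparameters below are assumptions, so check the tutorial's config file for the real values.

# Illustrative QLoRA setup (sketch only, not the XTuner config)
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # frozen base weights quantized to 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "/root/model/Meta-Llama-3-8B-Instruct", # assumed local path
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(                   # small trainable low-rank adapters
    r=16, lora_alpha=16, lora_dropout=0.05, # hyperparameters are placeholders
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapter weights are trainable

The point of QLoRA is visible here: the 8B base model stays frozen in 4-bit, and only the small LoRA adapter weights are updated, which is why the run fits in one GPU's memory.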
# Convert the adapter from PTH to HF format
xtuner convert pth_to_hf /root/llama3_pth/llama3_8b_instruct_qlora_assistant.py \
  /root/llama3_pth/iter_500.pth \
  /root/llama3_hf_adapter
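To sanity-check the converted adapter, it can be loaded on top of the base model with peft. A sketch, using the same paths as above:

# Verify the HF-format adapter loads (sketch)
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "/root/model/Meta-Llama-3-8B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "/root/llama3_hf_adapter")  # attach the LoRA adapter
print(model.peft_config)  # shows the adapter's LoRA settings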
# Merge the model
export MKL_SERVICE_FORCE_INTEL=1
xtuner convert merge /root/model/Meta-Llama-3-8B-Instruct \
  /root/llama3_hf_adapter \
  /root/llama3_hf_merged
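After merging, /root/llama3_hf_merged is a standard Hugging Face model directory, so a quick generation test can confirm the fine-tuned behavior. A minimal sketch; the prompt and generation settings are arbitrary:

# Quick generation test against the merged model (sketch)
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "/root/llama3_hf_merged"
tok = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Who are you?"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))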