Purpose of this article
This article improves on an earlier post of mine, 大模型学习-在colab中训练并更换模型 (CSDN blog). In that post I handed out a ready-made demo folder and did the model swapping and training by modifying it.
But what if no demo folder is provided? The process would stall right there. So this time I walk through the workflow again, starting from the file-preparation stage. That way, future training runs no longer depend on a ready-made demo: you build the required files yourself and then train.
1. Prepare the files needed for training
First, create a top-level folder to hold all the files and subfolders, and name it train; I placed it on the E: drive.
Then go into the train folder and create a model folder inside it. This folder will hold the files of the model we are going to train:
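If you would rather create the folders from a script than by hand, here is a minimal sketch (the E: drive path mirrors the layout above; adjust it to your machine):

import os

# Create E:\train and its model subfolder in one call;
# exist_ok avoids an error if they already exist.
os.makedirs(r"E:\train\model", exist_ok=True)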
Next, pick a model on Hugging Face. Here I choose Qwen2.5-Math-1.5B: Qwen/Qwen2.5-Math-1.5B at main
.gitattributes, LICENSE, and README.md are documentation files and do not need to be downloaded. Download all the remaining files and put them into the model folder:
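If you prefer to script the download instead of clicking through the web page, a minimal sketch using the official huggingface_hub library works too (assuming huggingface_hub is installed via pip; the ignore patterns mirror the documentation files skipped above):

from huggingface_hub import snapshot_download

# Download every file in the repo except the documentation files,
# straight into the local model folder created above.
snapshot_download(
    repo_id="Qwen/Qwen2.5-Math-1.5B",
    local_dir="./model",
    ignore_patterns=[".gitattributes", "LICENSE", "*.md"],
)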
Write the training script train.py
# Load the model and tokenizer
from unsloth import FastLanguageModel
from local_dataset import LocalJsonDataset
from safetensors.torch import load_model, save_model

max_seq_length = 4096  # maximum context length used during training
dtype = None           # None = auto-detect (float16 / bfloat16 depending on the GPU)
load_in_4bit = False   # set True to load the weights with 4-bit quantization

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="./model",  # the local model folder prepared above
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0,  # Supports any, but = 0 is optimized
    bias = "none",     # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth",  # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = False,   # We support rank stabilized LoRA
    loftq_config = None,  # And LoftQ
)
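# (Optional sanity check, not part of the original script: get_peft_model
# returns a PEFT-wrapped model, so you can print how many parameters the
# LoRA adapters actually train. Remove this line if you want the script
# to match the original exactly.)
model.print_trainable_parameters()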
# Load and preprocess the dataset
custom_dataset = LocalJsonDataset(json_file=