lm3.py (download, install, deploy, apply)
1. https://ollama.com/
Download and install, then ollama run llama3:8b. The default endpoint is http://localhost:11434/v1.
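A minimal sketch of talking to that endpoint through Ollama's OpenAI-compatible API (assumes pip install openai; the api_key is a placeholder, since Ollama ignores it):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key is required by the client but unused
resp = client.chat.completions.create(
    model="llama3:8b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)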
2. SentenceTransformer loads all-MiniLM-L6-v2.
Manual download: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/tree/main
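A quick sanity check once the weights are in place (pass the local folder path instead of the hub name if you downloaded manually):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # or the path to the manually downloaded folder
emb = model.encode(["hello world", "你好世界"])
print(emb.shape)  # (2, 384): this model produces 384-dim embeddings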
3. function-calling: you can model this on OpenAI's approach:
https://platform.openai.com/docs/guides/function-calling
Test the skill; a minimal sketch follows.
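A hedged sketch of a tool-call round trip against the local Ollama endpoint (the get_weather tool is hypothetical; whether the model actually emits tool_calls depends on the model and the Ollama version):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for testing only
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
resp = client.chat.completions.create(
    model="llama3:8b",
    messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)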
app.py (RAG)
1. pip install -r requirements.txt
2. Install Docker (needed for the vector database): https://docs.docker.com/desktop/install/windows-install/
docker pull phidata/pgvector:16
docker volume create pgvolume
docker run -itd -e POSTGRES_DB=ai -e POSTGRES_USER=ai -e POSTGRES_PASSWORD=ai -e PGDATA=/var/lib/postgresql/data/pgdata -v pgvolume:/var/lib/postgresql/data -p 5532:5432 --name pgvector phidata/pgvector:16
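A minimal connectivity check for the container (assumes pip install psycopg2-binary; host port and credentials mirror the docker run flags above):

import psycopg2

conn = psycopg2.connect(host="localhost", port=5532, user="ai", password="ai", dbname="ai")
with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")  # pgvector ships in this image
    cur.execute("SELECT extversion FROM pg_extension WHERE extname = 'vector';")
    print(cur.fetchone())
conn.commit()
conn.close()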
3. GROQ_API_KEY: https://console.groq.com/keys. Configure your own key and you can use the 70B model.
4. ollama pull nomic-embed-text to vectorize the knowledge base. It feels mediocre; you can swap in OpenAI's embeddings instead. A usage sketch follows.
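A sketch of getting one embedding out of it via Ollama's native endpoint (assumes the model has been pulled):

import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "What is RAG?"},
)
print(len(resp.json()["embedding"]))  # nomic-embed-text returns 768-dim vectors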
5. streamlit run app.py
LLAMA3 fine-tuning, quantization, deployment, application: the full pipeline
Fine-tuning
1. First download a full model. The official release works, or someone's fine-tuned Chinese version, e.g.:
https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2/tree/main
Download all of the files into one folder. Any model will do; Hugging Face has plenty.
2. Instruction fine-tuning: run run_clm_sft_with_peft with the following arguments:
--model_name_or_path D:\\PycharmProject\\2024\\llama-3-chinese-8b-instruct-v2
--tokenizer_name_or_path D:\\PycharmProject\\2024\\llama-3-chinese-8b-instruct-v2
--dataset_dir D:\\PycharmProject\\2024\\Chinese-LLaMA-Alpaca-3-main\\data
--per_device_train_batch_size 1
--per_device_eval_batch_size 1
--do_train 1
--do_eval 1
--seed 42
--bf16 1
--num_train_epochs 3
--lr_scheduler_type cosine
--learning_rate 1e-4
--warmup_ratio 0.05
--weight_decay 0.1
--logging_strategy steps
--logging_steps 10
--save_strategy steps
--save_total_limit 3
--evaluation_strategy steps
--eval_steps 100
--save_steps 200
--gradient_accumulation_steps 8
--preprocessing_num_workers 8
--max_seq_length 1024
--output_dir D:\\PycharmProject\\2024\\llama3-lora
--overwrite_output_dir 1
--ddp_timeout 30000
--logging_first_step True
--lora_rank 64
--lora_alpha 128
--trainable "q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
--lora_dropout 0.05
--modules_to_save "embed_tokens,lm_head"
--torch_dtype bfloat16
--validation_file D:\\PycharmProject\\2024\\Chinese-LLaMA-Alpaca-3-main\\eval\\ruozhiba_qa2449_gpt4turbo.json
--load_in_kbits 16
3. Merge the LoRA. Only part of the weights was trained and saved, so it has to be merged back into the original model.
1. Run merge_llama3_with_chinese_lora_low_mem.py with these arguments (change them to your own):
--base_model D:\\PycharmProject\\2024\\llama-3-chinese-8b-instruct-v2
--lora_model D:\\PycharmProject\\2024\\llama3-lora
--output_dir D:\\PycharmProject\\2024\\llama3-lora-merge
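A quick load test for the merged checkpoint, a sketch assuming transformers and accelerate are installed:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

merge_dir = r"D:\PycharmProject\2024\llama3-lora-merge"
tokenizer = AutoTokenizer.from_pretrained(merge_dir)
model = AutoModelForCausalLM.from_pretrained(merge_dir, torch_dtype=torch.bfloat16, device_map="auto")
inputs = tokenizer("你好,请介绍一下你自己。", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))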
Quantization
1. Use the llama.cpp project (https://github.com/ggerganov/llama.cpp).
Install CMake first: https://cmake.org/download/
Then:
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements/requirements-convert-hf-to-gguf.txt
cmake -B build
cmake --build build --config Release
2. In the project files, find the conversion script we need:
convert-hf-to-gguf.py D:\\PycharmProject\\2024\\llama3-lora-merge --outtype f16 --outfile D:\\PycharmProject\\2024\\my_llama3.gguf
3. Go to D:\PycharmProject\2024\test_llama3.cpp\llama.cpp\build\bin\Release and run:
quantize.exe D:\\PycharmProject\\2024\\my_llama3.gguf D:\\PycharmProject\\2024\\quantized_model.gguf q4_0
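To smoke-test the quantized file before deploying, one option (my assumption, not part of the original flow) is llama-cpp-python:

from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path=r"D:\PycharmProject\2024\quantized_model.gguf", n_ctx=2048)
out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"])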
Deployment
ollama and LM Studio are both simple options; their usage is similar, and both ship with a built-in server.
ollama create <name> -f Modelfile
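A minimal Modelfile for the GGUF produced earlier (FROM is the only required line; the PARAMETER is just illustrative):

FROM D:\PycharmProject\2024\quantized_model.gguf
PARAMETER temperature 0.7

Then ollama create my_llama3 -f Modelfile, and ollama run my_llama3 to chat with it (my_llama3 is a placeholder name).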
Quantizing on Colab
1.!git clone https://github.com/ggerganov/llama.cpp
Clone the project first; GG (Georgi Gerganov) is an absolute beast.
2.!cd llama.cpp && LLAMA_CUBLAS=1 make && pip install -r requirements/requirements-convert-hf-to-gguf.txt
Compile and install.
3. Download the model from Hugging Face. Note Colab's memory limits: only small models will fit here.
from huggingface_hub import snapshot_download

model_name = "Qwen/Qwen1.5-1.8B"       # small enough for Colab
methods = ['q4_k_m']                   # quantization types to produce in step 4
base_model = "./original_model2/"      # where the HF snapshot is saved
quantized_path = "./quantized_model/"  # where the GGUF files go
snapshot_download(repo_id=model_name, local_dir=base_model, local_dir_use_symlinks=False)
original_model = quantized_path + 'FP16.gguf'  # the FP16 GGUF produced in step 4
4. Run the quantization:
!mkdir ./quantized_model/
!python llama.cpp/convert-hf-to-gguf.py ./original_model2/ --outtype f16 --outfile ./quantized_model/FP16.gguf
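The conversion above only produces FP16; the low-bit pass still has to run, which is what the methods list was for (the binary may be named llama-quantize in newer llama.cpp builds):

for m in methods:
    out_file = quantized_path + m.upper() + '.gguf'  # e.g. ./quantized_model/Q4_K_M.gguf
    !./llama.cpp/quantize {original_model} {out_file} {m}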
5. The model can be uploaded to Hugging Face for easy downloading later:
from huggingface_hub import notebook_login
notebook_login()
from huggingface_hub import HfApi, create_repo
model_path = "./quantized_model/Q4_K_M.gguf" # Your model's local path
repo_name = "qwen1.5-llm" # Desired HF Hub repository name
repo_url = create_repo(repo_name, private=False)
api = HfApi()
api.upload_file(
path_or_fileobj=model_path,
path_in_repo="Q4_K_M.gguf",
repo_id="skuma307/qwen1.5-llm",
repo_type="model",
)
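Once uploaded, the file can be fetched anywhere with hf_hub_download (the repo_id here is illustrative; substitute your own username):

from huggingface_hub import hf_hub_download

local_path = hf_hub_download(repo_id="your-username/qwen1.5-llm", filename="Q4_K_M.gguf")
print(local_path)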