Qwen3 Model Basics
Qwen3 is a reasoning model: when thinking mode is enabled, its output is CoT-formatted, with the reasoning wrapped in <think>…</think> tags followed by the final answer.
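For illustration, a thinking-mode response takes roughly this shape (abbreviated from the Text2Cypher example later in this section):

<think>
1. User intent analysis ... 7. Cypher structure ...
</think>
MATCH (p:Patient)-[:TREATS]->(ph:Pharmacy)
WHERE ph.pharmacy_id = 'pharmacy_1'
RETURN p.patient_id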
Source Code
https://github.com/hiyouga/LLaMA-Factory/blob/main/README_zh.md
The official documentation is already very detailed; refer to it as the primary reference.
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
Environment Setup
Besides torch and metrics, the following optional extras are also available and can be added as needed:
torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, aqlm, vllm, sglang, galore, apollo, badam, adam-mini, qwen, minicpm_v, modelscope, openmind, swanlab, dev
cd LLaMA-Factory
pip install -e ".[torch,metrics]" --no-build-isolation
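For example, if your setup also needs the vLLM and DeepSpeed backends, the extras from the list above can be combined in a single install:

pip install -e ".[torch,metrics,deepspeed,vllm]" --no-build-isolation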
Dataset Import
The Alpaca format is the preferred data layout, as shown below:
[
  {
    "instruction": "user instruction (required)",
    "input": "user input (optional)",
    "output": "model response (required)",
    "system": "system prompt (optional)",
    "history": [
      ["first-round instruction (optional)", "first-round response (optional)"],
      ["second-round instruction (optional)", "second-round response (optional)"]
    ]
  }
]
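As a minimal sketch of producing this layout, the snippet below converts hypothetical raw records into Alpaca-style JSON; the raw field names question, schema, and answer are assumptions for illustration, not fields of the original dataset:

import json

# Hypothetical raw records; the keys "question", "schema", "answer" are assumptions.
raw_records = [
    {"question": "Which patients visited pharmacy_1?", "schema": "Node properties: ...", "answer": "MATCH ..."},
]

alpaca = [
    {
        "instruction": "You are a Cypher expert. Given the database schema, help convert the user's question to Cypher.\n\n",
        "input": f"\nDatabase Schema:\n{r['schema']}\n\nUser's question: {r['question']}\n",
        "output": r["answer"],
    }
    for r in raw_records
]

with open("train-00000-of-00001-5000-cot-en-alpaca.json", "w", encoding="utf-8") as f:
    json.dump(alpaca, f, ensure_ascii=False, indent=2)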
Below is one Text2Cypher record from train-00000-of-00001-5000-cot-en-alpaca.json as an example; note that the output field contains <think> tags:
{
"instruction": "You are a Cypher expert. Given the database schema, help convert the user's question to Cypher.\n\n",
"input": "\nDatabase Schema:\n\nNode properties:\nPatient {name: STRING, dob: DATETIME, gender: STRING, patient_id: STRING}\nDoctor {name: STRING, dob: DATETIME, gender: STRING, specialization: STRING, doctor_id: STRING}\nNurse {name: STRING, dob: DATETIME, gender: STRING, specialization: STRING, nurse_id: STRING}\nPharmacist {name: STRING, dob: DATETIME, gender: STRING, pharmacist_id: STRING}\nHospital {name: STRING, location: POINT, hospital_id: STRING}\nPharmacy {name: STRING, location: POINT, pharmacy_id: STRING}\nDisease {name: STRING, icd_code: STRING}\nSymptom {name: STRING, description: STRING}\nMedication {name: STRING, medication_id: STRING, manufacturer: STRING}\nClinical_Trial {name: STRING, trial_id: STRING, start_date: DATETIME, end_date: DATETIME}\nMedical_Device {name: STRING, device_id: STRING, manufacturer: STRING}\nInsurance_Provider {name: STRING, provider_id: STRING, coverage_type: STRING}\nPayer {name: STRING, payer_id: STRING}\nResearcher {name: STRING, researcher_id: STRING, affiliation: STRING}\nMedical_Journal {name: STRING, journal_id: STRING, publisher: STRING}\nPatient_Record {patient_id: STRING, date: DATETIME, record: STRING}\nDoctor_Record {doctor_id: STRING, date: DATETIME, record: STRING}\nHospital_Record {hospital_id: STRING, date: DATETIME, record: STRING}\nPharmacy_Record {pharmacy_id: STRING, date: DATETIME, record: STRING}\nResearch_Record {researcher_id: STRING, date: DATETIME, record: STRING}\nMedical_Journal_Record {journal_id: STRING, date: DATETIME, record: STRING}\nInsurance_Provider_Record {provider_id: STRING, date: DATETIME, record: STRING}\nPayer_Record {payer_id: STRING, date: DATETIME, record: STRING}\nResearcher_Record {researcher_id: STRING, date: DATETIME, record: STRING}\nMedication_Record {medication_id: STRING, date: DATETIME, record: STRING}\nMedical_Device_Record {device_id: STRING, date: DATETIME, record: STRING}\nDisease_Record {icd_code: STRING, date: DATETIME, record: STRING}\nSymptom_Record {symptom_id: STRING, date: DATETIME, record: STRING}\nClinical_Trial_Record {trial_id: STRING, date: DATETIME, record: STRING}\nTreatment_Plan {patient_id: STRING, start_date: DATETIME, end_date: DATETIME, plan: STRING}\nInsurance_Coverage {patient_id: STRING, provider_id: STRING, coverage_details: STRING}\nPayment {patient_id: STRING, doctor_id: STRING, nurse_id: STRING, pharmacist_id: STRING, hospital_id: STRING, pharmacy_id: STRING, payer_id: STRING, amount: INTEGER, date: DATETIME}\nPublication {author_id: STRING, journal_id: STRING, publication_date: DATETIME, title: STRING, content: STRING}\nFunding {researcher_id: STRING, provider_id: STRING, amount: INTEGER, date: DATETIME}\nStudy_Result {researcher_id: STRING, trial_id: STRING, result: STRING, date: DATETIME}\nProduct_Review {reviewer_id: STRING, product_id: STRING, rating: INTEGER, review: STRING, date: DATETIME}\nProduct_Rating {product_id: STRING, rating: INTEGER, date: DATETIME}\nProduct_Endorsement {endorser_id: STRING, product_id: STRING, date: DATETIME}\nProduct_Recall {recall_id: STRING, product_id: STRING, date: DATETIME}\nProduct_Safety {product_id: STRING, safety_status: STRING, date: DATETIME}\n\nRelationship properties:\nTREATS {start_date: DATETIME, end_date: DATETIME}\nPRESCRIBES {start_date: DATETIME, end_date: DATETIME}\nRECOMMENDS {start_date: DATETIME, end_date: DATETIME}\nWORKS_AT {start_date: DATETIME, end_date: DATETIME}\nPROVIDES {start_date: DATETIME, end_date: DATETIME}\nDIAGNOSED_WITH {diagnosis_date: DATETIME}\nEXHIBITS {start_date: DATETIME, end_date: 
DATETIME}\nRESEARCH_ON {start_date: DATETIME, end_date: DATETIME}\nAUTHORS {publication_date: DATETIME}\nPUBLISHES {publication_date: DATETIME}\nFUNDING_PROVIDED_BY {date: DATETIME}\nENDORSES {endorsement_date: DATETIME}\nREVIEWS {review_date: DATETIME}\nRATES {rating_date: DATETIME}\nRECALLS {recall_date: DATETIME}\nENSURES_SAFETY {safety_date: DATETIME}\n\nThe relationships:\n(:Patient)-[:TREATS {start_date, end_date}]->(:Doctor)\n(:Patient)-[:TREATS {start_date, end_date}]->(:Nurse)\n(:Patient)-[:TREATS {start_date, end_date}]->(:Pharmacist)\n(:Patient)-[:TREATS {start_date, end_date}]->(:Hospital)\n(:Patient)-[:TREATS {start_date, end_date}]->(:Pharmacy)\n(:Patient)-[:DIAGNOSED_WITH {diagnosis_date}]->(:Disease)\n(:Patient)-[:EXHIBITS {start_date, end_date}]->(:Symptom)\n(:Doctor)-[:PRESCRIBES {start_date, end_date}]->(:Medication)\n(:Doctor)-[:PRESCRIBES {start_date, end_date}]->(:Medical_Device)\n(:Doctor)-[:PRESCRIBES {start_date, end_date}]->(:Clinical_Trial)\n(:Doctor)-[:TREATS {start_date, end_date}]->(:Patient)\n(:Doctor)-[:RESEARCH_ON {start_date, end_date}]->(:Disease)\n(:Doctor)-[:RESEARCH_ON {start_date, end_date}]->(:Medication)\n(:Doctor)-[:RESEARCH_ON {start_date, end_date}]->(:Medical_Device)\n(:Doctor)-[:RESEARCH_ON {start_date, end_date}]->(:Clinical_Trial)\n(:Doctor)-[:WORKS_AT {start_date, end_date}]->(:Hospital)\n(:Doctor)-[:WORKS_AT {start_date, end_date}]->(:Pharmacy)\n(:Doctor)-[:PROVIDES {start_date, end_date}]->(:Insurance_Provider)\n(:Doctor)-[:AUTHORS]->(:Publication)\n(:Doctor)-[:PUBLISHES]->(:Medical_Journal)\n(:Doctor)-[:FUNDING_PROVIDED_BY]->(:Funding)\n(:Doctor)-[:ENDORSES]->(:Product_Endorsement)\n(:Doctor)-[:REVIEWS]->(:Product_Review)\n(:Doctor)-[:RATES]->(:Product_Rating)\n(:Doctor)-[:RECALLS]->(:Product_Recall)\n(:Doctor)-[:ENSURES_SAFETY]->(:Product_Safety)\n(:Nurse)-[:TREATS {start_date, end_date}]->(:Patient)\n(:Nurse)-[:WORKS_AT {start_date, end_date}]->(:Hospital)\n(:Nurse)-[:WORKS_AT {start_date, end_date}]->(:Pharmacy)\n(:Nurse)-[:PROVIDES {start_date, end_date}]->(:Insurance_Provider)\n(:Pharmacist)-[:TREATS {start_date, end_date}]->(:Patient)\n(:Pharmacist)-[:WORKS_AT {start_date, end_date}]->(:Hospital)\n(:Pharmacist)-[:WORKS_AT {start_date, end_date}]->(:Pharmacy)\n(:Pharmacist)-[:PROVIDES {start_date, end_date}]->(:Insurance_Provider)\n(:Hospital)-[:PROVIDES {start_date, end_date}]->(:Insurance_Provider)\n(:Pharmacy)-[:PROVIDES {start_date, end_date}]->(:Insurance_Provider)\n(:Insurance_Provider)-[:PROVIDES {start_date, end_date}]->(:Insurance_Coverage)\n(:Payer)-[:PROVIDES {start_date, end_date}]->(:Payment)\n(:Researcher)-[:RESEARCH_ON {start_date, end_date}]->(:Disease)\n(:Researcher)-[:RESEARCH_ON {start_date, end_date}]->(:Medication)\n(:Researcher)-[:RESEARCH_ON {start_date, end_date}]->(:Medical_Device)\n(:Researcher)-[:RESEARCH_ON {start_date, 
end_date}]->(:Clinical_Trial)\n(:Researcher)-[:AUTHORS]->(:Publication)\n(:Researcher)-[:PUBLISHES]->(:Medical_Journal)\n(:Researcher)-[:FUNDING_PROVIDED_BY]->(:Funding)\n(:Researcher)-[:REVIEWS]->(:Product_Review)\n(:Researcher)-[:RATES]->(:Product_Rating)\n(:Researcher)-[:ENDORSES]->(:Product_Endorsement)\n(:Researcher)-[:RECALLS]->(:Product_Recall)\n(:Researcher)-[:ENSURES_SAFETY]->(:Product_Safety)\n(:Medical_Journal)-[:PUBLISHES]->(:Publication)\n(:Medical_Journal)-[:REVIEWS]->(:Product_Review)\n(:Medical_Journal)-[:RATES]->(:Product_Rating)\n(:Medical_Journal)-[:ENDORSES]->(:Product_Endorsement)\n(:Medical_Journal)-[:RECALLS]->(:Product_Recall)\n(:Medical_Journal)-[:ENSURES_SAFETY]->(:Product_Safety)\n\n\n\n\nUser's question: Which patients are connected to the pharmacy with the ID 'pharmacy_1' through the TREATS relationship?\n",
"output": "<think>\nBased on the user's question about finding patients connected to a specific pharmacy via the TREATS relationship, I'll construct the reasoning chain while adhering to the graph schema:\n\n1. **User Intent Analysis**: The question requires identifying Patient nodes directly linked to a Pharmacy node (ID 'pharmacy_1') through a TREATS relationship. This implies a directional relationship from Patient to Pharmacy per the schema.\n\n2. **Entity Recognition**: \n - Source entity: Patient nodes (target for result projection)\n - Target entity: Pharmacy nodes (filtered by pharmacy_id)\n - Relationship: TREATS (direct connection)\n\n3. **Relationship Paths**: The schema specifies (:Patient)-[:TREATS]->(:Pharmacy) as a direct relationship. No intermediate nodes or multi-hop traversals are needed since the connection is explicit.\n\n4. **Filtering Conditions**: The Pharmacy node must satisfy pharmacy_id = 'pharmacy_1'. This requires an exact string match filter. No pattern matching or date filters are needed as relationship properties aren't constrained in the question.\n\n5. **Return Fields**: The patient_id property from Patient nodes satisfies the requirement to identify connected patients. The schema confirms patient_id exists as a unique identifier.\n\n6. **Boundary Conditions**: \n - Case sensitivity: pharmacy_id filter uses exact case-sensitive matching\n - Null handling: Implicitly excludes patients without TREATS relationships to this pharmacy\n - Indexing: pharmacy_id should be leveraged for efficient lookups\n\n7. **Cypher Structure**: \n - MATCH clause defines the directed relationship pattern\n - WHERE clause applies the pharmacy filter\n - RETURN clause specifies the target identifier\n\nThe logical flow begins from Patient nodes, traverses outgoing TREATS relationships directly to Pharmacy nodes, applies the ID filter, and projects the requested patient identifiers. This aligns with the schema's relationship directionality and attribute definitions without requiring complex operations.\n\nFinal Cypher: \nMATCH (p:Patient)-[:TREATS]->(ph:Pharmacy) \nWHERE ph.pharmacy_id = 'pharmacy_1' \nRETURN p.patient_id</think>\nMATCH (p:Patient)-[:TREATS]->(ph:Pharmacy)\nWHERE ph.pharmacy_id = 'pharmacy_1'\nRETURN p.patient_id"
}
Register this file in the dataset_info.json configuration file. For more about dataset_info.json, see LLaMA-Factory-main/data/README_zh.md:
{
  "text2cypher": {
    "file_name": "train-00000-of-00001-5000-cot-en-alpaca.json"
  },
  "identity": {
    "file_name": "identity.json"
  },
  ...
}
The default column mapping is shown below. If the fields in your xxx.json are not named instruction, input, and so on, you must add a columns entry in dataset_info.json to declare the mapping explicitly:
"数据集名称": {
"file_name": "data.json",
"columns": {
"prompt": "instruction",
"query": "input",
"response": "output",
"system": "system",
"history": "history"
}
}
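As a quick sanity check before training, you can verify that every registered dataset file exists and exposes the expected response field. This is a sketch, assuming dataset_info.json and the data files live in LLaMA-Factory's data/ directory:

import json
import os

data_dir = "LLaMA-Factory/data"  # assumed location of dataset_info.json
with open(os.path.join(data_dir, "dataset_info.json"), encoding="utf-8") as f:
    info = json.load(f)

for name, cfg in info.items():
    path = os.path.join(data_dir, cfg["file_name"])
    assert os.path.exists(path), f"{name}: missing file {path}"
    with open(path, encoding="utf-8") as g:
        sample = json.load(g)[0]
    # Without a "columns" entry, the default Alpaca field name "output" is expected.
    response_field = cfg.get("columns", {}).get("response", "output")
    assert response_field in sample, f"{name}: sample lacks the '{response_field}' field"
    print(f"{name}: OK ({cfg['file_name']})")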
Training
Launch the web UI:
llamafactory-cli webui
Configure the model path.
Note: the thinking-mode switch should be kept consistent between training and inference.
You can enable unsloth to accelerate training and reduce memory usage.
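If you prefer the command line to the web UI, the same LoRA SFT run can be launched with llamafactory-cli train and a YAML config. The sketch below follows the shape of the example configs shipped in LLaMA-Factory's examples/ directory; the hyperparameter values and output_dir are illustrative assumptions, not values from this write-up:

# qwen3-lora-sft.yaml -- illustrative sketch; adjust paths and hyperparameters
model_name_or_path: ~/models/Qwen/Qwen3-14B
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
dataset: text2cypher
template: qwen3
cutoff_len: 8192
output_dir: ./saves/qwen3-14b-text2cypher-lora
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true

llamafactory-cli train qwen3-lora-sft.yaml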
Model Export
Option 1 - LLaMA-Factory web UI: after training completes, click Export in the UI to produce the fully merged SFT model.
Option 2 - merge script: merge the LoRA adapter weights into the base model (LLaMA-Factory already supports the Qwen3 series; both the exported weights and the model include chat_template.jinja):
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model
model_path = "./models/finetune/Qwen3-14B-sft-en-5000"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Load the LoRA adapter weights
checkpoint_path = "./text2cypher/grpo/output/"
model = PeftModel.from_pretrained(model, checkpoint_path)
# Merge LoRA into the base model (optional, but recommended for LLaMA-Factory)
merged_model = model.merge_and_unload()
# Save the complete model to the final_model directory
output_dir = "./text2cypher/grpo/output/final_model"
merged_model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)  # save the tokenizer too, so the export is self-contained
Deployment and Invocation
Option 1: configure a qwen3-sft.yaml file. The qwen3 template is already supported, so it can be used directly:
# API_PORT=8112 llamafactory-cli api ~/text2cypher/qwen3-sft.yaml
# CUDA_VISIBLE_DEVICES=2,3 API_PORT=8112 llamafactory-cli api ~/text2cypher/qwen3-sft.yaml
model_name_or_path: ~/models/finetune/Qwen3-14B-sft-en-5000
template: qwen3
infer_backend: vllm # choices: [huggingface, vllm, sglang]
trust_remote_code: true
vLLM deployment (default GPUs / specified GPUs):
API_PORT=8112 llamafactory-cli api ~/text2cypher/qwen3-sft.yaml
CUDA_VISIBLE_DEVICES=2,3 API_PORT=8112 llamafactory-cli api ~/text2cypher/qwen3-sft.yaml
After that, just call it through the OpenAI-compatible API:
from openai import OpenAI

API_KEY = "sk-no-key-needed"  # any value works; LLaMA-Factory does not validate the key
client = OpenAI(api_key=API_KEY, base_url="http://localhost:8112/v1", timeout=300)
completion = client.chat.completions.create(
    model="~/models/finetune/Qwen3-14B-sft-en-5000",
    messages=[{"role": "user", "content": prompt}],  # prompt: schema + user question, as in the Alpaca data
    stream=False,
    temperature=0.7,
    top_p=0.95,
)
response = completion.choices[0].message.content
sft_reasoning, sft_answer = extract_thinking_answer(response)  # user-defined helper; sketch below
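extract_thinking_answer is a user-side helper, not part of LLaMA-Factory or the OpenAI SDK. A minimal sketch that splits the <think> block from the final answer:

import re

def extract_thinking_answer(text: str) -> tuple[str, str]:
    """Split a Qwen3 CoT response into (reasoning, answer)."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()  # no thinking block present
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer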
Option 2: load the model in Python code:
import torch
from modelscope import AutoModelForCausalLM, AutoTokenizer  # or transformers, unsloth, etc.
from peft import LoraConfig, TaskType, get_peft_model

# Load the Qwen3 SFT model and tokenizer
model_path = "~/models/finetune/Qwen3-14B-sft-en-5000"
# model_path = "~/models/Qwen/Qwen3-14B"
max_seq_length = 8192
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Wrap the model with a PEFT (LoRA) adapter
lora_rank = 64
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=lora_rank,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=lora_rank,
    lora_dropout=0.0,
    bias="none",
    inference_mode=False  # training mode
)
model = get_peft_model(model, peft_config)
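To generate with the loaded model while honoring the thinking switch noted earlier, Qwen3's chat template accepts an enable_thinking flag through apply_chat_template. A minimal sketch (keep the flag consistent with how the model was fine-tuned):

messages = [{"role": "user", "content": "Convert this question to Cypher: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    enable_thinking=True,  # must match the training-time thinking setting
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=2048)
# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))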