Qwen Study Notes 2: Implementing Function Calling with Qwen via the ReAct Pattern

Preface

This is another set of notes, continuing to explore the Qwen model's function calling capability.

Can the Qwen-1.8B model handle function calling?

We test it with the reason_prompt template from the previous note:

PROMPT_REACT = """
Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do

Begin!

Question:{query}"""
The rendered prompt, with the query filled in:

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do

Begin!

Question:Please check for me: how many lessons of our hands-on LLM course are currently online?
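The rendered prompt above is produced by substituting the query into the template with a plain str.format call; a minimal, self-contained sketch (the query string here is my English rendering):

```python
# ReAct-style prompt template with a {query} placeholder
PROMPT_REACT = """Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do

Begin!

Question:{query}"""

query = "How many lessons of our hands-on LLM course are currently online?"
prompt = PROMPT_REACT.format(query=query)

# The last line of the rendered prompt is the filled-in question
print(prompt.splitlines()[-1])
```

The template deliberately ends at "Question:{query}" so the model's completion begins with its own "Thought:" line.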

The 1.8B model fails to enter chain-of-thought mode.

Testing with the 7B model shows a clear difference: it actually starts reasoning!

Building on the previous note, we add one more tool to further exercise the model's function calling.
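To dispatch a tool call, the ReAct loop has to parse the Action and Action Input lines out of the model's completion and route them to a Python function. A minimal sketch of that step, with a hypothetical tool and regexes of my own (not from the video):

```python
import re

def count_course_lessons() -> str:
    """Hypothetical tool: returns the lesson count (placeholder value)."""
    return "36"

# Tool registry: tool name -> callable
TOOLS = {"count_course_lessons": count_course_lessons}

def parse_action(model_output: str):
    """Extract the Action / Action Input pair from a ReAct-style completion."""
    action = re.search(r"Action:\s*(\w+)", model_output)
    arg = re.search(r"Action Input:\s*(.*)", model_output)
    if action is None:
        return None, None
    return action.group(1), arg.group(1).strip() if arg else ""

# A canned completion standing in for real model output
fake_output = ("Thought: I should look up the lesson count.\n"
               "Action: count_course_lessons\n"
               "Action Input: none")

name, arg = parse_action(fake_output)
result = TOOLS[name]() if name in TOOLS else None
print(name, result)  # count_course_lessons 36
```

In a full ReAct loop the tool's return value would be appended to the prompt as an "Observation:" line, and the model would be called again until it emits a final answer.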

Once again, the code in this article comes from this video:

【太牛了】Qwen结合ReAct,几分钟就能构建一个AI Agent,保姆级实操讲解,理论与实践相结合,讲述ReAct是如何作用于Qwen模型的 (Bilibili; roughly: "Combine Qwen with ReAct to build an AI Agent in minutes: a step-by-step hands-on walkthrough of how ReAct works with Qwen")

Feel free to study it yourself; here I am just recording my own learning process.

Main code

Import the required libraries

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

Output (a harmless Jupyter widget warning that can be ignored):

D:\ProgramData\anaconda3\envs\qwen\Lib\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
  from .autonotebook import tqdm as notebook_tqdm

Load the model

# model_path = './model/qwen/Qwen-1_8B-Chat'
model_path = './model/qwen/Qwen-7B-Chat'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True)

Note that I changed this to a local model path; use whatever path matches your own installation.

Loading output:

The model is automatically converting to bf16 for faster inference. If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to "AutoModelForCausalLM.from_pretrained".
Try importing flash-attention for faster inference...
Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary
Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm
Loading checkpoint shards: 100%|████████████████████
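With the model loaded, Qwen's chat-style checkpoints expose a model.chat(tokenizer, prompt, history=...) helper for sending the rendered prompt. Since running the 7B weights is not always practical, the sketch below substitutes a stub for model.chat to show the call shape; the stub and its canned response are my own invention, not output from the real model:

```python
# Stand-in for Qwen's model.chat so the flow can be shown without the 7B weights.
# The real call would be: response, history = model.chat(tokenizer, prompt, history=None)
def fake_chat(tokenizer, prompt, history=None):
    response = ("Thought: I need to call a tool to answer this.\n"
                "Action: count_course_lessons\n"
                "Action Input: none")
    return response, (history or []) + [(prompt, response)]

prompt = "Question: How many lessons are currently online?\nThought:"
response, history = fake_chat(None, prompt)

print(response.splitlines()[0])  # Thought: I need to call a tool to answer this.
```

The (prompt, response) pairs accumulated in history are what the real model.chat uses to carry context across the repeated calls of the ReAct loop.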