openai.BadRequestError: Error code: 400 - {'error': {'code': 'invalid_parameter_error', 'param': None, 'message': '<400> InternalError.Algo.InvalidParameter: The tool call is not supported.', 'type': 'invalid_request_error'}, 'id': 'chatcmpl-fd1b6d5d-fff7-9e04-91a7-7766d15f9a39', 'request_id': 'fd1b6d5d-fff7-9e04-91a7-7766d15f9a39'}
This error kept coming up when asking the LLM for structured output. After some digging, it turns out that with_structured_output is not supported, or only poorly supported, by many models.
After switching the model to ChatGPT, the error went away.
from langchain_openai import ChatOpenAI
import os
from pydantic import BaseModel, Field

# Initialize the deepseek-r1 model (DashScope OpenAI-compatible endpoint)
# llm = ChatOpenAI(
#     model="deepseek-r1",
#     openai_api_key=os.environ["BAILIAN_API_KEY"],
#     openai_api_base="https://dashscope.aliyuncs.com/compatible-mode/v1",
#     streaming=False  # disable streaming
# )

# Initialize the gpt-3.5-turbo model
llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    openai_api_key=os.environ["GPT_API_KEY"],
    openai_api_base="https://api.chatanywhere.tech/v1",
    streaming=False  # disable streaming
)

class StructuredBody(BaseModel):
    company_name: str = Field(description="name of the company behind the model")
    assist_name: str = Field(description="name of the model assistant")

# Simple invocation
struLlm = llm.with_structured_output(StructuredBody, method="function_calling")
response = struLlm.invoke("你好,请用中文介绍一下你自己")  # "Hello, please introduce yourself in Chinese"
print(f"company: {response.company_name}")
print(f"name: {response.assist_name}")
Execution result: