1. Use Cases
A quick review of the standard invocation steps:
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
- Create the model
model = init_chat_model("claude-3-5-sonnet-20240620", model_provider="anthropic")
- Build the prompt template
prompt_template = ChatPromptTemplate.from_template("tell me a joke about {topic}")
prompt = prompt_template.invoke({"topic": "bears"})
# messages=[HumanMessage(content='tell me a joke about bears', additional_kwargs={}, response_metadata={})]
- Invoke the model
result = model.invoke(prompt)
- Parse the output
parser = StrOutputParser()
print(parser.invoke(result))
# Here's a joke about bears: Why don't bears wear shoes?
2. Chained Invocation
LangChain's chaining syntax simplifies the code above:
chain = prompt_template | model | StrOutputParser()
chain.invoke({"topic": "cat"})
Here is how to compose a second chain on top of the first:
analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")
composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()
composed_chain.invoke({"topic": "bears"})
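The dict `{"joke": chain}` is implicitly coerced into a `RunnableParallel`, which runs each value and maps its result to the corresponding key. Writing the coercion out explicitly (a sketch with `FakeListChatModel`; the two canned responses are placeholders standing in for the joke and the analysis):

```python
from langchain_core.language_models import FakeListChatModel
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel

# First canned response answers the joke prompt, second the analysis prompt.
model = FakeListChatModel(responses=["A bear joke.", "Yes, quite funny."])

chain = (
    ChatPromptTemplate.from_template("tell me a joke about {topic}")
    | model
    | StrOutputParser()
)
analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")

# Explicit form of the {"joke": chain} dict coercion.
composed_chain = (
    RunnableParallel(joke=chain) | analysis_prompt | model | StrOutputParser()
)
print(composed_chain.invoke({"topic": "bears"}))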
You can also use a lambda to make the composition cleaner:
composed_chain_with_lambda = (
    chain
    | (lambda input: {"joke": input})
    | analysis_prompt
    | model
    | StrOutputParser()
)
composed_chain_with_lambda.invoke({"topic": "bears"})
The output of chain is fed in as the next input, wrapped under the joke key.