1. What Is the Functional API
The functional API provides a way to integrate persistence, memory, human-in-the-loop, and streaming into standard Python code. You can build workflows without restructuring your code into a pipeline or a directed acyclic graph (DAG).
The functional API has two key components:
First: @entrypoint. Decorates the function that serves as the workflow's starting node; it holds the business logic, runs long-lived work, and handles interrupts. Think of it as the workflow's main function.
Second: @task. Marks a unit of work that executes asynchronously inside the entrypoint, such as an API call, file access, or data processing. Calling a task returns a future-like object; call its result() method to get the value.
With the functional API you implement workflows in plain Python without thinking about graph structure. Unlike the Graph API, there is no global state to manage: state lives inside the function's scope and is not shared across functions. The entrypoint has a checkpoint, and each task's result is saved to it.
Below is a simple workflow built with the functional API. It contains two tasks: one adds two integers, the other formats the output:
from langgraph.func import entrypoint, task
from langgraph.checkpoint.memory import InMemorySaver

checkpointer = InMemorySaver()

@task
def add(a: int, b: int) -> int:
    return a + b

@task
def format(sum: int) -> str:
    return f'the sum is {sum}'

@entrypoint(checkpointer=checkpointer)
def workflow(input: dict) -> str:
    return format(add(input['number_one'], input['number_two']).result()).result()

config = {'configurable': {'thread_id': 1}}
result = workflow.invoke({'number_one': 2, 'number_two': 5}, config=config)
print(result)
*Note: multiple functions can be decorated with @entrypoint; one entrypoint can call other entrypoints, and a task can also call an entrypoint.
2. Parallel Execution
Tasks can run in parallel to improve throughput, which suits Map-Reduce-style workloads familiar from big-data processing. The following example computes the sum of squares of an integer list: one task is launched per element to square it, and the main function collects all the squares and sums them:
from langgraph.func import entrypoint, task
from langgraph.checkpoint.memory import InMemorySaver

checkpointer = InMemorySaver()

@task
def square(num: int) -> int:
    return pow(num, 2)

@entrypoint(checkpointer=checkpointer)
def workflow(numbers: list[int]) -> int:
    futures = [square(i) for i in numbers]
    return sum([f.result() for f in futures])

config = {'configurable': {'thread_id': 1}}
result = workflow.invoke([1, 2, 3, 4, 5], config=config)
print(result)
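The @task/future pattern above mirrors Python's standard concurrent.futures model. As a point of comparison, here is the same sum-of-squares fan-out written with only the standard library (an analogue for illustration, not LangGraph code):

```python
# Pure-stdlib analogue of the fan-out pattern above: submit() returns a
# Future, just as calling a @task-decorated function does.
from concurrent.futures import ThreadPoolExecutor

def square(num: int) -> int:
    return pow(num, 2)

def sum_of_squares(numbers: list[int]) -> int:
    with ThreadPoolExecutor() as pool:
        # launch one unit of work per element
        futures = [pool.submit(square, n) for n in numbers]
        # collect the results, as the entrypoint does with f.result()
        return sum(f.result() for f in futures)

print(sum_of_squares([1, 2, 3, 4, 5]))  # 55
```

The key difference is that LangGraph additionally checkpoints each task's result, which enables the recovery behavior shown in later sections.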
3. Integrating with Graphs
The entrypoint can invoke an existing Graph directly, just like an ordinary module call. In the example below we first build a two-node graph whose state holds v; the first node computes v*2 and the second computes v*3. The main function then invokes the graph and squares the v in the returned result:
# The two-node graph
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    v: int

def node1(state: State):
    return {'v': state['v'] * 2}

def node2(state: State):
    return {'v': state['v'] * 3}

graph_builder = StateGraph(State)
graph_builder.add_node("node1", node1)
graph_builder.add_node("node2", node2)
graph_builder.add_edge(START, "node1")
graph_builder.add_edge("node1", "node2")
graph_builder.add_edge("node2", END)
graph = graph_builder.compile()
graph.invoke({'v': 10})

# The entrypoint below calls the graph
from langgraph.func import entrypoint
from langgraph.checkpoint.memory import InMemorySaver

checkpointer = InMemorySaver()

@entrypoint(checkpointer=checkpointer)
def workflow(num: int) -> int:
    result = graph.invoke({'v': num})
    return pow(result['v'], 2)

config = {'configurable': {'thread_id': 1}}
result = workflow.invoke(5, config=config)
print(result)
4. Streaming
Streaming with the functional API works the same way as with the Graph API:
from langgraph.func import entrypoint, task
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.config import get_stream_writer

checkpointer = InMemorySaver()

@task
def square(v: int) -> int:
    return pow(v, 2)

@entrypoint(checkpointer=checkpointer)
def workflow(input: dict) -> int:
    writer = get_stream_writer()  # emits chunks for the "custom" stream mode
    writer("Started processing")
    result = square(input['v']).result()
    writer(f"Result is {result}")
    return result

config = {'configurable': {'thread_id': 1}}
for mode, chunk in workflow.stream(
    {"v": 5},
    stream_mode=["custom", "updates"],
    config=config
):
    print(f"{mode}: {chunk}")
The output:
custom: Started processing
updates: {'square': 25}
custom: Result is 25
updates: {'workflow': 25}
5. Retrying on Failure
If a task fails, for example an API call or a data-processing step, the framework can retry it for you. The example below simulates a network request that fails twice and succeeds on the third attempt:
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.func import entrypoint, task
from langgraph.types import RetryPolicy
import requests

attempts = 0

@task(retry_policy=RetryPolicy(retry_on=requests.exceptions.ConnectionError))
def remote_call():
    global attempts
    attempts += 1
    if attempts < 3:
        raise requests.exceptions.ConnectionError('Failure')
    return "OK"

checkpointer = InMemorySaver()

@entrypoint(checkpointer=checkpointer)
def workflow(inputs, writer):
    return remote_call().result()

config = {
    "configurable": {
        "thread_id": "1"
    }
}
workflow.invoke({'foo': 'foo'}, config=config)
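Conceptually, a retry policy just re-runs the task while it keeps raising the listed exception type. A minimal hand-rolled sketch of that behavior, using only the standard library (with_retry and max_attempts are illustrative names, not LangGraph API):

```python
# Hand-rolled sketch of retry-on-exception: re-run fn while it raises
# the given exception type, up to max_attempts times.
# (Illustration only; LangGraph's RetryPolicy also supports backoff.)
def with_retry(fn, retry_on, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts:
                raise  # out of attempts: propagate the failure

attempts = 0
def flaky():
    # fails on the first two calls, succeeds on the third
    global attempts
    attempts += 1
    if attempts < 3:
        raise ConnectionError('Failure')
    return "OK"

print(with_retry(flaky, ConnectionError))  # OK
```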
6. Error Recovery
When an entrypoint runs multiple tasks, each successful task's result is saved to the checkpoint, so on re-execution successful tasks are not run again.
The code below has two tasks: some_action simulates a failure and slow_task simulates a time-consuming job. The slow task runs first; once it succeeds its result is persisted. Because some_action then fails, there is no final output. On the second invocation (note that the first argument to invoke is None), slow_task is not re-executed; some_action succeeds and the expected result is produced.
The code:
import time, random
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.func import entrypoint, task
from langgraph.types import StreamWriter

attempts = 0

@task
def some_action() -> str:
    global attempts
    attempts += 1
    if attempts < 2:
        raise ValueError("Failure")
    return "OK"

checkpointer = InMemorySaver()

@task
def slow_task():
    time.sleep(1)
    return "Run the slow task."

@entrypoint(checkpointer=checkpointer)
def workflow(inputs):
    slow_task_result = slow_task().result()
    some_action().result()
    return slow_task_result

config = {
    "configurable": {
        "thread_id": "1"
    }
}

try:
    result = workflow.invoke({'foo': 'foo'}, config=config)
    print(result)
except ValueError:
    pass
Invoke it again; this time it succeeds and prints Run the slow task.:
workflow.invoke(None, config=config)
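The recovery behavior above can be thought of as a per-task result cache: a task only runs if the checkpoint holds no result for it yet. A pure-Python sketch of that idea (illustration only; LangGraph's real checkpointer persists results keyed by thread, across restarts):

```python
# Conceptual illustration of checkpoint-based recovery: completed task
# results are cached, so a re-run only executes what hasn't finished yet.
checkpoint: dict[str, str] = {}
calls: list[str] = []

def run_task(name, fn):
    if name not in checkpoint:      # only run if there is no saved result
        calls.append(name)
        checkpoint[name] = fn()     # a raised exception leaves no entry
    return checkpoint[name]

fail_once = {'failed': False}
def some_action():
    if not fail_once['failed']:
        fail_once['failed'] = True
        raise ValueError("Failure")
    return "OK"

def workflow():
    result = run_task('slow_task', lambda: "Run the slow task.")
    run_task('some_action', some_action)
    return result

try:
    workflow()                      # slow_task succeeds, some_action fails
except ValueError:
    pass
print(workflow())                   # slow_task is served from the cache
print(calls)                        # slow_task appears only once
```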
7. Human-in-the-Loop
7.1 Basic Flow
The following example appends a piece of user feedback to the user's query. The human_assistant task performs this step: the workflow calls human_assistant, which raises an interrupt and waits for the user's feedback. The code:
from langgraph.func import entrypoint, task
from langgraph.types import Command, interrupt
from langgraph.checkpoint.memory import InMemorySaver

checkpointer = InMemorySaver()

@entrypoint(checkpointer=checkpointer)
def workflow(inputs):
    result = human_assistant(inputs['query']).result()
    return result

@task
def human_assistant(query):
    feedback = interrupt(f"Please confirm verb's tense: {query}")
    return f"{query}, {feedback}"

config = {"configurable": {"thread_id": "1"}}

for event in workflow.stream({'query': 'how to make a cake?'}, config):
    print(event)
    print("\n")
The output:
{'__interrupt__': (Interrupt(value="Please confirm verb's tense: how to make a cake?", id='4656c3174593fc3b74ae8dedc8275eab'),)}
The user resumes with feedback:
for event in workflow.stream(Command(resume="follow me step by step"), config):
    print(event)
    print("\n")
The output:
{'human_assistant': 'how to make a cake?, follow me step by step'}
{'workflow': 'how to make a cake?, follow me step by step'}
7.2 Reviewing Tool Calls
Like the Graph API, the functional API lets you review a tool call's arguments before the tool runs: the user can accept the arguments, update them, or answer directly in place of the tool. The code:
First define review_tool_call. It is called from the entrypoint, raises an interrupt, and receives the user's input.
from typing import Union
from langchain_core.messages import ToolCall, ToolMessage
from langchain.schema import BaseMessage

def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call's arguments and return a corrected version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":  # accept the call as-is
        return tool_call
    elif review_action == "update":  # revise the arguments
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":  # answer directly in place of the tool
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
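The three review branches can be exercised without a model or an interrupt by passing the human's decision in directly. A simplified sketch using plain dicts in place of ToolCall/ToolMessage (the names here are illustrative, not LangGraph API):

```python
# Self-contained sketch of the three review branches; the human decision
# arrives as a parameter instead of via interrupt().
def review(tool_call: dict, human_review: dict):
    action = human_review["action"]
    data = human_review.get("data")
    if action == "continue":         # accept the call as-is
        return tool_call
    elif action == "update":         # replace the arguments
        return {**tool_call, "args": data}
    elif action == "feedback":       # answer in place of the tool
        return {"role": "tool", "content": data,
                "tool_call_id": tool_call["id"]}

call = {"id": "1", "name": "search", "args": {"query": "cats"}}
print(review(call, {"action": "continue"}))
print(review(call, {"action": "update", "data": {"query": "dogs"}}))
print(review(call, {"action": "feedback", "data": "No search needed."}))
```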
Next, define the model with a bound search tool, then define two tasks: call_model invokes the model and call_tool executes a tool call.
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.func import entrypoint, task
from langgraph.graph.message import add_messages
from langgraph.types import Command, interrupt
from langchain.schema import BaseMessage
from langchain_tavily import TavilySearch
from langchain_openai import ChatOpenAI
import os, json

os.environ["TAVILY_API_KEY"] = "tvly-*"
search_tool = TavilySearch(max_results=2)
tools = [search_tool, ]
tools_by_name = {tool.name: tool for tool in tools}
llm = ChatOpenAI(
    model='qwen-plus',
    api_key="sk-*",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1")
model = llm.bind_tools(tools=tools)
checkpointer = InMemorySaver()

@task
def call_model(messages: list[BaseMessage]):
    response = model.invoke(messages)
    return response

@task
def call_tool(tool_call):
    tool_result = tools_by_name[tool_call["name"]].invoke(
        tool_call["args"]
    )
    return ToolMessage(
        content=json.dumps(tool_result),
        name=tool_call["name"],
        tool_call_id=tool_call["id"],
    )

@entrypoint(checkpointer=checkpointer)
def workflow(messages, previous):
    if previous is not None:
        messages = add_messages(previous, messages)
    llm_response = call_model(messages).result()  # call the model
    while True:
        if not llm_response.tool_calls:  # no tool calls: exit the loop
            break
        tool_results = []
        tool_calls = []
        for i, tool_call in enumerate(llm_response.tool_calls):
            review = review_tool_call(tool_call)  # raises an interrupt, waits for user input
            if isinstance(review, ToolMessage):  # the "feedback" branch of review_tool_call
                tool_results.append(review)
            else:  # an accepted or updated tool call
                tool_calls.append(review)
                if review != tool_call:  # the "update" branch of review_tool_call
                    llm_response.tool_calls[i] = review  # replace the tool-call arguments
        # execute the tool calls
        tool_result_futures = [call_tool(tool_call) for tool_call in tool_calls]
        remaining_tool_results = [fut.result() for fut in tool_result_futures]
        # append the tool results and any direct feedback to messages
        messages = add_messages(
            messages,
            [llm_response, *tool_results, *remaining_tool_results],
        )
        llm_response = call_model(messages).result()  # call the model again with the tool results
    # compose the final answer: append the model's response to messages
    messages = add_messages(messages, llm_response)
    return entrypoint.final(value=llm_response, save=messages)
The return uses entrypoint.final: value is what the caller receives, while save becomes the checkpointed state that arrives as the previous parameter on the next invocation.