LangChain Learning Notes (11)

This article covers LangChain's memory features: adding conversation history to a chain with RunnableWithMessageHistory, and saving and retrieving chat records with ConversationBufferMemory. It also shows the different data structures that can carry conversation history, such as a sequence of BaseMessage objects, a dict, and an integration with Redis.


On memory in LangChain, i.e. conversation history (message history).

1.

Add message history (memory) | 🦜️🔗 Langchain

RunnableWithMessageHistory can wrap any chain to add conversation history. It accepts one of the following as input:

(1) a sequence of BaseMessage objects

(2) a dict in which one key's value is a sequence of BaseMessage objects

(3) a dict in which one key holds the latest turn and another key holds the prior conversation history

and it produces one of the following as output:

(1) a string usable as the content of an AIMessage

(2) a sequence of BaseMessage objects

(3) a dict in which one key's value is a sequence of BaseMessage objects

First you need a callable that returns a BaseChatMessageHistory instance. Here we store the conversation history in memory; for more durable storage, LangChain also supports keeping history in Redis via RedisChatMessageHistory.

from langchain_community.chat_message_histories import ChatMessageHistory

store = {}  # session_id -> ChatMessageHistory

def get_session_history(session_id):
    # All turns of one conversation live under a single key/session_id.
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

(1) Example: the input is a sequence of BaseMessage objects

from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

with_message_history = RunnableWithMessageHistory(
    ChatOpenAI(),
    get_session_history,
)
print(with_message_history.invoke(
    input=[HumanMessage("Tell me about Wang Yangming")],
    config={"configurable": {"session_id": "id123"}},
))

(2) Example: the input is a dict in which one key's value is a sequence of BaseMessage objects

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai.chat_models import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.runnables.history import RunnableWithMessageHistory

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an assistant skilled in {ability}. Answer in 20 words or fewer.",
        ),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)
runnable = prompt | model
with_message_history = RunnableWithMessageHistory(
    runnable,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)
i1 = with_message_history.invoke(
    {"ability": "math", "input": HumanMessage("What is the law of cosines?")},
    config={"configurable": {"session_id": "abc123"}},  # history is stored under this session_id
)
print(i1)
i2 = with_message_history.invoke(
    {"ability": "math", "input": HumanMessage("Answer that again.")},
    config={"configurable": {"session_id": "abc123"}},  # same session_id, so the first turn is visible
)
print(i2)  # the previous turn was remembered
print(store)


(3) The example above maps a dict input to a message output; other input/output shapes work too.

Messages in, dict out:


from langchain_core.runnables import RunnableParallel

chain = RunnableParallel({"output_message": ChatOpenAI()})
with_message_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    output_messages_key="output_message",
)
i1 = with_message_history.invoke(
    [HumanMessage(content="Which story is Snow White from?")],
    config={"configurable": {"session_id": "baz"}},
)
print(i1)

Messages in, messages out: a minimal chat loop


from operator import itemgetter

with_message_history = RunnableWithMessageHistory(
    itemgetter("input_messages") | ChatOpenAI(),
    get_session_history,
    input_messages_key="input_messages",
)
while True:
    # print(store)  # uncomment to inspect the stored history
    query = input("user:")
    answer = with_message_history.invoke(
        input={"input_messages": query},
        config={"configurable": {"session_id": "id123"}},
    )
    print(answer)

2. A conversational system with ConversationBufferMemory

Adding memory | 🦜️🔗 Langchain

from operator import itemgetter
from langchain.memory import ConversationBufferMemory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an assistant skilled in {ability}. Answer in 20 words or fewer.",
        ),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)
memory = ConversationBufferMemory(return_messages=True)
chain = (
    RunnablePassthrough.assign(
        history=RunnableLambda(memory.load_memory_variables) | itemgetter("history")
    )
    | prompt
    | model
)
while True:
    query = input("user:")
    response = chain.invoke({"ability": "math", "input": query})  # returns an AIMessage
    print(response.content)
    memory.save_context({"input": query}, {"output": response.content})

3.

[Beta] Memory | 🦜️🔗 Langchain

Most memory-related functionality in LangChain is marked as beta, for two reasons: much of it is not yet fully developed, and much of it reflects usage patterns from older versions. The exceptions, which are production-ready, are ChatMessageHistory and the LCEL Runnables.

(1) Overview

A memory system needs two operations: reading and writing. A chain therefore interacts with its memory twice per run: it reads stored history before calling the model, and writes the new exchange back afterwards.

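This read-then-write cycle can be sketched in plain Python with no LangChain dependency. All names here (`fake_model`, `SimpleMemory`, `run_chain`) are hypothetical stand-ins for illustration, not library APIs:

```python
def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call: reports how much context it received.
    n_lines = prompt.count("\n") + 1
    return f"reply (saw {n_lines} context lines)"

class SimpleMemory:
    def __init__(self):
        self.turns = []  # list of (human, ai) pairs

    def load(self) -> str:
        # READ: render stored turns as a history string.
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

    def save(self, human: str, ai: str):
        # WRITE: append the newest exchange.
        self.turns.append((human, ai))

def run_chain(memory: SimpleMemory, user_input: str) -> str:
    history = memory.load()                                   # interaction 1: read
    prompt = (history + "\n" if history else "") + f"Human: {user_input}"
    answer = fake_model(prompt)
    memory.save(user_input, answer)                           # interaction 2: write
    return answer

mem = SimpleMemory()
run_chain(mem, "hi")
run_chain(mem, "how are you")
print(len(mem.turns))  # 2
```

The second call sees the first turn in its prompt, which is exactly the effect RunnableWithMessageHistory and ConversationBufferMemory automate.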

(2) Building a memory

Building a memory involves two core design decisions: how to store state and how to query it.

From simple to complex: return only the most recent exchange; return the last K exchanges; return a summary of the information relevant to the current topic.
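The "last K exchanges" strategy can be sketched as follows. This is a minimal stand-in for the idea behind LangChain's ConversationBufferWindowMemory; the class here is hypothetical, not the library's implementation:

```python
from collections import deque

class WindowMemory:
    """Keep only the last k (human, ai) exchanges; older turns are dropped."""
    def __init__(self, k: int):
        self.buffer = deque(maxlen=k)  # deque evicts the oldest item automatically

    def save_context(self, human: str, ai: str):
        self.buffer.append((human, ai))

    def load(self):
        return list(self.buffer)

mem = WindowMemory(k=2)
mem.save_context("q1", "a1")
mem.save_context("q2", "a2")
mem.save_context("q3", "a3")
print(mem.load())  # [('q2', 'a2'), ('q3', 'a3')] -- q1 fell out of the window
```

A windowed buffer bounds prompt size at the cost of forgetting older context, which is the basic trade-off the more complex summary-based strategies try to soften.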

Memory types | 🦜️🔗 Langchain

(3) Usage in practice

ConversationBufferMemory: keeps the list of chat messages in a buffer and passes it into the prompt template.

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.chat_memory.add_user_message("hi")
memory.chat_memory.add_ai_message("what's up")
print(memory)
# chat_memory=ChatMessageHistory(messages=[HumanMessage(content='hi'), AIMessage(content="what's up")])

Reading the memory's variables: memory.load_memory_variables({})

This returns {'history': "Human: hi\nAI: what's up"}. Since the key in the returned dict is "history", the chain being built must take an input variable named "history". The key name can be changed as follows:

memory=ConversationBufferMemory(memory_key="chat_history")
memory.chat_memory.add_user_message("hi")
memory.chat_memory.add_ai_message("what's up")
print(memory)
print(memory.load_memory_variables({}))
#chat_memory=ChatMessageHistory(messages=[HumanMessage(content='hi'), AIMessage(content="what's up")]) memory_key='chat_history'
#{'chat_history': "Human: hi\nAI: what's up"}

Whether a string or a list is returned is controlled by the return_messages parameter: return_messages=True returns a list. The string form is typically used with LLMs, the list form with chat models.
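The string form is just the message list flattened with role prefixes. Roughly, using a hypothetical helper (not the library's internals):

```python
# Sketch: flatten a (role, content) message list into the buffer string,
# mirroring the {'history': "Human: hi\nAI: what's up"} output shown above.
def to_buffer_string(msgs, human_prefix="Human", ai_prefix="AI"):
    lines = []
    for role, content in msgs:
        prefix = human_prefix if role == "human" else ai_prefix
        lines.append(f"{prefix}: {content}")
    return "\n".join(lines)

print(to_buffer_string([("human", "hi"), ("ai", "what's up")]))
# Human: hi
# AI: what's up
```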

memory = ConversationBufferMemory(return_messages=True)
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("what's up?")
print(memory.load_memory_variables({}))
# {'history': [HumanMessage(content='hi!'), AIMessage(content="what's up?")]}

Which keys get stored is controlled by the input_key and output_key parameters.
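A rough sketch of what that key selection amounts to (a hypothetical helper for illustration, not the library's internals):

```python
def save_context(memory_store, inputs, outputs, input_key=None, output_key=None):
    """Pick which key of the chain's input/output dicts gets saved.

    With a single key, it is inferred; with several keys, input_key/output_key
    must name the one to store -- the same rule ConversationBufferMemory applies.
    """
    if input_key is None:
        if len(inputs) != 1:
            raise ValueError("multiple input keys; set input_key")
        input_key = next(iter(inputs))
    if output_key is None:
        if len(outputs) != 1:
            raise ValueError("multiple output keys; set output_key")
        output_key = next(iter(outputs))
    memory_store.append((inputs[input_key], outputs[output_key]))

store = []
# Two input keys, so input_key is required; the single output key is inferred.
save_context(store, {"question": "hi", "lang": "en"}, {"text": "hello"},
             input_key="question")
print(store)  # [('hi', 'hello')]
```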

Putting it all together

Example 1: with an LLM

from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
# Notice that "chat_history" is present in the prompt template
template = """You are a nice chatbot having a conversation with a human.

Previous conversation:
{chat_history}

New human question: {question}
Response:"""
prompt = PromptTemplate.from_template(template)
# Notice that we need to align the `memory_key`
memory = ConversationBufferMemory(memory_key="chat_history")
memory.chat_memory.add_ai_message("it is sunny today")
print(memory)
conversation = LLMChain(
    llm=llm,
    prompt=prompt,
    # verbose=True,  # show the intermediate steps
    memory=memory
)
print(conversation.invoke({"question": "tomorrow how"}))

chat_memory=ChatMessageHistory(messages=[AIMessage(content='it is sunny today')]) memory_key='chat_history'
{'question': 'tomorrow how', 'chat_history': 'AI: it is sunny today', 'text': " Tomorrow's forecast is partly cloudy with a chance of rain in the afternoon. Make sure to bring an umbrella just in case!"}

Example 2: with a chat model

from langchain_openai import ChatOpenAI
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
llm = ChatOpenAI()
prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "You are a nice chatbot having a conversation with a human."
        ),
        # The `variable_name` here is what must align with memory
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}")
    ]
)
# Notice that we `return_messages=True` to fit into the MessagesPlaceholder
# Notice that `"chat_history"` aligns with the MessagesPlaceholder name.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
memory.chat_memory.add_ai_message("it is sunny today")
conversation = LLMChain(
    llm=llm,
    prompt=prompt,
    # verbose=True,
    memory=memory
)
print(conversation.invoke({"question": "tomorrow how"}))

{'question': 'tomorrow how', 'chat_history': [AIMessage(content='it is sunny today'), HumanMessage(content='tomorrow how'), AIMessage(content="I'm not sure about tomorrow's weather forecast, but I hope it's just as nice as today! Do you have any plans for tomorrow?")], 'text': "I'm not sure about tomorrow's weather forecast, but I hope it's just as nice as today! Do you have any plans for tomorrow?"}


For more advanced topics, such as custom memory and combining multiple memories, see the later notes.

 
