Memory


By default, chains and agents are stateless: they treat each incoming query independently (as do the underlying LLMs and chat models themselves). In some applications, such as chatbots, it is essential to remember previous interactions, both short-term and long-term. The Memory class does exactly that. LangChain provides memory components in two forms. First, it offers helper utilities for managing and manipulating previous chat messages; these are designed to be modular and useful however you use them. Second, it provides easy ways to incorporate these utilities into chains.

Getting Started

Memory involves keeping state across a user's interactions with a language model. Those interactions are captured as ChatMessages, so the problem boils down to ingesting, capturing, transforming, and extracting knowledge from a sequence of chat messages. There are many different ways to do this, and each exists as its own memory type. In general, there are two ways to understand and use each memory type: the standalone functions that extract information from a sequence of messages, and the way that memory type is used within a chain. Memory can return multiple pieces of information (for example, the most recent N messages plus a summary of all earlier messages), and the returned information can be either a string or a list of messages.
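For instance, a memory type might keep the last N messages verbatim while condensing everything older into a summary. A toy sketch of that idea in plain Python (the function name is made up, and a trivial placeholder stands in for an LLM-generated summary):

```python
# Toy sketch: return two pieces of memory -- a naive "summary" of older
# messages plus the most recent N messages kept verbatim.

def load_memory(messages, n_recent=2):
    older, recent = messages[:-n_recent], messages[-n_recent:]
    # A real implementation would ask an LLM to summarize `older`;
    # here we just note how many turns were condensed.
    summary = f"[{len(older)} earlier messages summarized]" if older else ""
    return {"summary": summary, "recent": recent}

msgs = ["Human: hi!", "AI: whats up?", "Human: not much", "AI: same here"]
print(load_memory(msgs))
```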

We will cover the simplest form of memory: "buffer" memory, which just keeps a buffer of all previous messages. We will show how to use the modular utility functions here, then show how it can be used in a chain (returning both a string and a list of messages).
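To make the buffer idea concrete, here is a minimal plain-Python sketch (the class and method names are hypothetical, not LangChain's implementation):

```python
# Hypothetical, minimal sketch of "buffer" memory: keep every message
# and return the history either as one string or as a list.

class SimpleBufferMemory:
    def __init__(self, return_messages=False):
        self.messages = []          # list of (role, content) tuples
        self.return_messages = return_messages

    def add(self, role, content):
        self.messages.append((role, content))

    def load(self):
        if self.return_messages:
            return list(self.messages)
        # Render the buffer as a single "Role: content" transcript string
        return "\n".join(f"{role}: {content}" for role, content in self.messages)

memory = SimpleBufferMemory()
memory.add("Human", "hi!")
memory.add("AI", "whats up?")
print(memory.load())  # prints the two-turn transcript
```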

ChatMessageHistory

One of the core utility classes underpinning most (if not all) memory modules is the ChatMessageHistory class. It is a very lightweight wrapper that exposes convenience methods for saving human messages and AI messages, and then fetching them all.

You may want to use this class directly if you are managing memory outside of a chain.

from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()

history.add_user_message("hi!")

history.add_ai_message("whats up?")
history.messages
    [HumanMessage(content='hi!', additional_kwargs={}),
     AIMessage(content='whats up?', additional_kwargs={})]

ConversationBufferMemory

Now we show how to use this simple concept in a chain. We first look at ConversationBufferMemory, which is just a wrapper around ChatMessageHistory that extracts the messages into a variable.

We can first extract it as a string.

from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory()
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")
memory.load_memory_variables({})
    {'history': 'Human: hi!\nAI: whats up?'}

We can also get the history as a list of messages.

memory = ConversationBufferMemory(return_messages=True)
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")
memory.load_memory_variables({})
    {'history': [HumanMessage(content='hi!', additional_kwargs={}),
     AIMessage(content='whats up?', additional_kwargs={})]}

Using in a chain

Finally, let's take a look at using this in a chain (setting verbose=True so we can see the prompt).

from langchain.llms import OpenAI
from langchain.chains import ConversationChain


llm = OpenAI(temperature=0)
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory(),
)
conversation.predict(input="Hi there!")


> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi there!
AI:

> Finished chain.

" Hi there! It's nice to meet you. How can I help you today?"
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")


> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:

> Finished chain.

" That's great! It's always nice to have a conversation with someone new. What would you like to talk about?"
conversation.predict(input="Tell me about yourself.")


> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Human: I'm doing well! Just having a conversation with an AI.
AI: That's great! It's always nice to have a conversation with someone new. What would you like to talk about?
Human: Tell me about yourself.
AI:

> Finished chain.

" Sure! I'm an AI created to help people with their everyday tasks. I'm programmed to understand natural language and provide helpful information. I'm also constantly learning and updating my knowledge base so I can provide more accurate and helpful answers."
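Under the hood, ConversationChain simply splices the memory's buffered history and the new input into its prompt before each call. A rough sketch of that formatting step (plain Python; the template text is copied from the transcript above, while the function name is made up):

```python
# Template text taken from the "Prompt after formatting" output above.
TEMPLATE = (
    "The following is a friendly conversation between a human and an AI. "
    "The AI is talkative and provides lots of specific details from its "
    "context. If the AI does not know the answer to a question, it "
    "truthfully says it does not know.\n\n"
    "Current conversation:\n{history}\nHuman: {input}\nAI:"
)

def format_prompt(history: str, user_input: str) -> str:
    # Splice the memory's string output and the new turn into the template.
    return TEMPLATE.format(history=history, input=user_input)

prompt = format_prompt("Human: Hi there!\nAI: Hi!", "Tell me about yourself.")
print(prompt)
```

On each call, the growing history string is what makes the model appear to "remember" earlier turns.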

Saving Message History

You may often need to save messages and then load them to use again. This can be done easily by first converting the messages to normal Python dicts, saving those (as JSON or some other format), and then loading them back. Here is an example.

import json

from langchain.memory import ChatMessageHistory
from langchain.schema import messages_from_dict, messages_to_dict

history = ChatMessageHistory()

history.add_user_message("hi!")

history.add_ai_message("whats up?")
dicts = messages_to_dict(history.messages)
dicts
    [{'type': 'human', 'data': {'content': 'hi!', 'additional_kwargs': {}}},
     {'type': 'ai', 'data': {'content': 'whats up?', 'additional_kwargs': {}}}]
new_messages = messages_from_dict(dicts)
new_messages
    [HumanMessage(content='hi!', additional_kwargs={}),
     AIMessage(content='whats up?', additional_kwargs={})]
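Because the dict form is plain JSON-serializable data, persisting it is just a json.dump/json.load round trip. A sketch (the dicts are copied from the output above; the file path is illustrative):

```python
import json
import os
import tempfile

# The dict form produced by messages_to_dict is plain JSON-serializable data,
# so it round-trips through a file unchanged.
dicts = [
    {"type": "human", "data": {"content": "hi!", "additional_kwargs": {}}},
    {"type": "ai", "data": {"content": "whats up?", "additional_kwargs": {}}},
]

path = os.path.join(tempfile.mkdtemp(), "history.json")
with open(path, "w") as f:
    json.dump(dicts, f)

with open(path) as f:
    loaded = json.load(f)

assert loaded == dicts  # identical after the round trip
```

The loaded dicts can then be turned back into message objects with messages_from_dict, as shown above.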

And that's it for getting started! There are many different types of memory; check out our examples to see them all.