Using LangChain: How to Add Conversational Memory to an LLM?

Recognizing the need for continuity in user interactions, LangChain, a versatile software framework designed for building applications around LLMs, introduces a pivotal feature known as Conversational Memory. This feature empowers developers to seamlessly integrate memory capabilities into LLMs, enabling them to retain information from previous interactions and respond contextually.

Conversational Memory is a fundamental aspect of LangChain that proves instrumental in creating applications, particularly chatbots. Unlike stateless conversations, where each interaction is treated in isolation, Conversational Memory allows LLMs to remember and leverage information from prior exchanges. This breakthrough feature transforms the user experience, ensuring a more natural and coherent flow of conversation.

Initialize the LLM and ConversationChain

Let’s start by initializing the large language model and the ConversationChain using LangChain. This sets the stage for implementing conversational memory.

from langchain import OpenAI
from langchain.chains import ConversationChain

# First, initialize the large language model
llm = OpenAI(
    temperature=0,
    openai_api_key="OPENAI_API_KEY",
    model_name="text-davinci-003"
)

# Now initialize the conversation chain
conversation_chain = ConversationChain(llm=llm)

ConversationBufferMemory

The ConversationBufferMemory in LangChain stores past interactions between the user and AI in its raw form, preserving the complete history. This enables the model to understand and respond contextually by considering the entire conversation flow during subsequent interactions.

from langchain.chains.conversation.memory import ConversationBufferMemory

# Assuming the OpenAI model (llm) has already been initialized above,
# initialize the ConversationChain with ConversationBufferMemory
conversation_buf = ConversationChain(
    llm=llm,
    memory=ConversationBufferMemory()
)

Counting the Tokens

We add a count_tokens helper function so that we can track how many tokens each interaction consumes.

from langchain.callbacks import get_openai_callback

def count_tokens(chain, query):
    # Use get_openai_callback to track token usage
    with get_openai_callback() as cb:
        # Run the query through the conversation chain
        result = chain.run(query)
        # Print the total number of tokens used
        print(f'Spent a total of {cb.total_tokens} tokens')
    return result
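As a quick illustration (the queries here are made up; any prompts will do), we can send a couple of messages through the buffered chain. Because the full history is re-sent on every call, the token count grows with each interaction:

count_tokens(conversation_buf, "Good morning AI!")
count_tokens(conversation_buf, "What did I say to you a moment ago?")

The second question can only be answered because the buffer carried the first exchange forward.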

Checking the History

To check whether ConversationBufferMemory has saved the history, we can print the conversation history as shown below. The printed buffer confirms that every interaction in the chat is saved.
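A minimal sketch, assuming the conversation_buf chain above has already handled a few queries:

# The buffer holds the verbatim transcript of Human/AI turns as a single string
print(conversation_buf.memory.buffer)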

ConversationSummaryMemory

When using ConversationSummaryMemory in LangChain, the conversation history is summarized before being passed in via the history parameter. This keeps token usage under control, preventing tokens from being exhausted quickly and helping long conversations stay within the context window limits of even advanced LLMs.

from langchain.chains.conversation.memory import ConversationSummaryMemory

# Assuming the OpenAI model (llm) has already been initialized
conversation = ConversationChain(
    llm=llm,
    memory=ConversationSummaryMemory(llm=llm)
)

# Access and print the prompt template ConversationSummaryMemory uses to summarize
print(conversation.memory.prompt.template)
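As a quick sketch, reusing the count_tokens helper and a made-up query from earlier, we can confirm that what gets stored is a generated summary rather than a verbatim transcript:

count_tokens(conversation, "Good morning AI!")
# ConversationSummaryMemory keeps a running summary string in its buffer
print(conversation.memory.buffer)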

Using ConversationSummaryMemory in LangChain offers an advantage for longer conversations: it initially consumes more tokens, since each exchange triggers an extra summarization call, but the stored history grows far more slowly as the conversation progresses. This makes it more token-efficient for extended interactions than ConversationBufferMemory, whose history grows linearly with the length of the chat. Even with summarization, however, token constraints still impose inherent limits over time.

ConversationBufferWindowMemory

We initialize the ConversationChain with ConversationBufferWindowMemory, setting the parameter `k` to 1. This windowed buffer keeps only the `k` most recent interactions in memory; with `k=1`, everything before the latest exchange is discarded. This approach is useful when you want to maintain contextual understanding with a limited history.

from langchain.chains.conversation.memory import ConversationBufferWindowMemory

# Assuming llm has already been initialized,
# initialize ConversationChain with ConversationBufferWindowMemory
conversation = ConversationChain(
    llm=llm,
    memory=ConversationBufferWindowMemory(k=1)
)
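Here is a short sketch, with made-up queries, of how the k=1 window behaves: once a newer exchange arrives, anything older drops out of the prompt.

count_tokens(conversation, "My name is Sam.")
count_tokens(conversation, "I live in Boston.")
# With k=1, only the "I live in Boston" exchange remains in the window,
# so the model can no longer recall the name
count_tokens(conversation, "What is my name?")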

ConversationSummaryBufferMemory

Here, a ConversationChain named conversation_sum_bufw is initialized with ConversationSummaryBufferMemory. This memory type combines the summarization and buffer-window techniques: it keeps the most recent interactions verbatim while summarizing essential earlier ones, with a token limit of 650 to cap memory usage.
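The snippet below is a sketch of that initialization, assuming the same import style as the earlier examples; max_token_limit is the parameter that enforces the 650-token budget:

from langchain.chains.conversation.memory import ConversationSummaryBufferMemory

conversation_sum_bufw = ConversationChain(
    llm=llm,
    memory=ConversationSummaryBufferMemory(
        llm=llm,
        max_token_limit=650  # exchanges beyond this budget are folded into the summary
    )
)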

In conclusion, conversational memory in LangChain offers a variety of options for managing the state of conversations with large language models. The examples above demonstrate different ways to tailor conversation memory to specific scenarios. Beyond the types covered here, LangChain also provides options such as ConversationKGMemory (conversation knowledge graph memory) and ConversationEntityMemory.

Whether it’s sending the entire history, utilizing summaries, tracking token counts, or combining these methods, exploring the available options and selecting the appropriate pattern for the use case is key. LangChain provides flexibility, allowing users to implement custom memory modules, combine multiple memory types within the same chain, integrate them with agents, and more.

References

https://medium.com/@michael.j.hamilton/conversational-memory-with-langchain-82c25e23ec60
https://www.pinecone.io/learn/series/langchain/langchain-conversational-memory/
