
Last Updated: 3/11/2026


Persistence & Memory

Add state persistence and memory to your LangGraph agents.

Why Persistence?

Persistence enables:

  • Human-in-the-loop: Pause and resume execution
  • Conversation memory: Remember context across messages
  • Fault tolerance: Recover from failures
  • Time travel: Replay and debug past executions

Checkpointing

Checkpointers save graph state at each step.

Basic Setup

Python:

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph

checkpointer = MemorySaver()
app = builder.compile(checkpointer=checkpointer)

# Invoke with a thread_id
config = {"configurable": {"thread_id": "conversation-1"}}
result = app.invoke(inputs, config)

JavaScript:

import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();
const app = builder.compile({ checkpointer });

const config = { configurable: { thread_id: "conversation-1" } };
const result = await app.invoke(inputs, config);

Threads

A thread is a single conversation session, identified by a thread_id. All state for that conversation is saved under that ID, so different threads never share history.
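To make the isolation concrete, here is a minimal pure-Python sketch (a toy illustration, not the real LangGraph API) of the core idea: a checkpointer is conceptually a mapping from thread_id to that thread's saved state.

```python
class ToyCheckpointer:
    """Toy stand-in for a checkpointer: state is keyed by thread_id."""

    def __init__(self):
        self._checkpoints = {}  # thread_id -> saved messages

    def save(self, thread_id, messages):
        self._checkpoints[thread_id] = list(messages)

    def load(self, thread_id):
        # An unknown thread simply starts with empty state
        return list(self._checkpoints.get(thread_id, []))


saver = ToyCheckpointer()
saver.save("conversation-1", ["Hello"])
saver.save("conversation-2", ["Bonjour"])

# Each thread_id resolves to its own, isolated history
print(saver.load("conversation-1"))  # ['Hello']
print(saver.load("conversation-2"))  # ['Bonjour']
```

Real checkpointers persist much more than messages (full graph state, step metadata), but the keying by thread_id works the same way.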

Production Checkpointers

For production, use persistent storage:

Python:

# PostgreSQL
from langgraph.checkpoint.postgres import PostgresSaver

checkpointer = PostgresSaver.from_conn_string("postgresql://...")

# SQLite
import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver

conn = sqlite3.connect("checkpoints.db")
checkpointer = SqliteSaver(conn)

JavaScript:

// PostgreSQL
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";
const checkpointer = PostgresSaver.fromConnString("postgresql://...");

// SQLite
import { SqliteSaver } from "@langchain/langgraph-checkpoint-sqlite";
const checkpointer = SqliteSaver.fromConnString(":memory:");

Short-Term Memory

Short-term memory is the conversation history within a single thread.

Message History

Use MessagesState to track conversation:

Python:

from langgraph.graph import MessagesState

class State(MessagesState):
    custom_field: str

# Messages automatically accumulate
result = app.invoke(
    {"messages": [{"role": "user", "content": "Hello"}]},
    config={"configurable": {"thread_id": "1"}}
)

# Continue the conversation on the same thread
result = app.invoke(
    {"messages": [{"role": "user", "content": "Follow-up question"}]},
    config={"configurable": {"thread_id": "1"}}
)

Managing Message History

Long conversations can exceed the model's context window. Filter messages before calling the model:

def filter_messages(state: State) -> dict:
    # Keep only the last 10 messages
    messages = state["messages"][-10:]
    return {"messages": messages}
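The fixed-window filter above can also be driven by a rough token budget rather than a message count. Here is a minimal plain-Python sketch, using a hypothetical count_tokens helper (a crude word count standing in for a real tokenizer):

```python
def count_tokens(message: str) -> int:
    # Crude stand-in for a real tokenizer: one token per word
    return len(message.split())

def trim_to_budget(messages: list[str], budget: int) -> list[str]:
    # Walk backwards from the newest message, keeping as many
    # recent messages as fit within the token budget
    kept = []
    total = 0
    for message in reversed(messages):
        cost = count_tokens(message)
        if total + cost > budget:
            break
        kept.append(message)
        total += cost
    return list(reversed(kept))


history = ["first question", "a much longer clarifying answer here", "ok"]
print(trim_to_budget(history, budget=7))
# ['a much longer clarifying answer here', 'ok']
```

The key property is that trimming starts from the oldest messages, so the most recent context always survives.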

Long-Term Memory

Long-term memory persists across threads using the memory store.

Memory Store Basics

Python:

from langgraph.store.memory import InMemoryStore

store = InMemoryStore()

# Save a memory
user_id = "user-123"
namespace = (user_id, "preferences")
store.put(namespace, "food", {"preference": "vegetarian"})

# Retrieve memories
memories = store.search(namespace)
print(memories[0].value)  # {"preference": "vegetarian"}

JavaScript:

import { InMemoryStore } from "@langchain/langgraph";

const store = new InMemoryStore();

// Save a memory
const userId = "user-123";
const namespace = [userId, "preferences"];
await store.put(namespace, "food", { preference: "vegetarian" });

// Retrieve memories
const memories = await store.search(namespace);
console.log(memories[0].value); // { preference: "vegetarian" }

Using Store in Graphs

Python:

from langgraph.runtime import Runtime

def call_model(state: State, runtime: Runtime):
    # Access the store
    user_id = runtime.context.user_id
    memories = runtime.store.search((user_id, "preferences"))

    # Use memories in the prompt
    preferences = memories[0].value if memories else {}
    prompt = f"User preferences: {preferences}\n{state['query']}"
    return {"response": llm.invoke(prompt)}

# Compile with the store
app = builder.compile(checkpointer=checkpointer, store=store)

# Invoke with user context
result = app.invoke(
    inputs,
    config={"configurable": {"thread_id": "1"}},
    context={"user_id": "user-123"}
)

Semantic Search

Enable semantic search by configuring the store with embeddings:

Python:

from langchain.embeddings import init_embeddings

store = InMemoryStore(
    index={
        "embed": init_embeddings("openai:text-embedding-3-small"),
        "dims": 1536,
        "fields": ["$"]  # Index all fields
    }
)

# Search by meaning
memories = store.search(
    (user_id, "preferences"),
    query="What does the user like to eat?",
    limit=3
)
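Under the hood, semantic search ranks stored values by vector similarity between their embeddings and the query embedding. A minimal, dependency-free sketch of that ranking step, with toy 2-dimensional vectors standing in for a real embedding model:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product over the product of magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": in practice these come from an embedding model
memories = {
    "food": [0.9, 0.1],   # eating-related memory
    "music": [0.1, 0.9],  # listening-related memory
}
query_embedding = [0.8, 0.2]  # "What does the user like to eat?"

ranked = sorted(
    memories,
    key=lambda key: cosine_similarity(memories[key], query_embedding),
    reverse=True,
)
print(ranked[0])  # food
```

The store does the same thing at scale: embed each value at put time, embed the query at search time, and return the closest matches.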

Get and Update State

Get Current State

config = {"configurable": {"thread_id": "1"}}
state = app.get_state(config)
print(state.values)  # Current state
print(state.next)    # Next nodes to execute

Get State History

history = list(app.get_state_history(config))
for snapshot in history:
    print(f"Step {snapshot.metadata['step']}: {snapshot.values}")

Update State

# Manually update state
app.update_state(
    config,
    {"custom_field": "new value"},
    as_node="my_node"  # Treat the update as coming from this node
)

Best Practices

  1. Use threads for conversations: Each conversation gets a unique thread_id
  2. Use store for cross-thread data: User preferences, facts, rules
  3. Manage message history: Filter old messages to stay within context limits
  4. Use production checkpointers: MemorySaver is for development only
  5. Namespace memories logically: (user_id, category) for easy retrieval
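Point 5's (user_id, category) convention can be sketched with a plain dict keyed by namespace tuples, to show why prefix-structured keys make retrieval easy (a toy illustration, not the store's actual internals):

```python
# Toy namespaced store: keys are (user_id, category) tuples
store = {
    ("user-123", "preferences"): {"food": "vegetarian"},
    ("user-123", "facts"): {"city": "Berlin"},
    ("user-456", "preferences"): {"food": "omnivore"},
}

def search_user(user_id):
    # All of one user's memories share the same first tuple element,
    # so a prefix scan finds them without touching other users' data
    return {ns: value for ns, value in store.items() if ns[0] == user_id}

print(sorted(search_user("user-123")))
# [('user-123', 'facts'), ('user-123', 'preferences')]
```

The same prefix logic lets you narrow further, e.g. match on ns[:2] to fetch only one category for one user.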

Next Steps

  • Streaming: Stream real-time updates
  • Human-in-the-Loop: Add approval workflows
  • Graph API Reference: Explore persistence APIs