Last Updated: 3/11/2026
# Human-in-the-Loop

Add human oversight to your LangGraph agents with interrupts.

## Why Human-in-the-Loop?
Human-in-the-loop enables:
- Approval workflows: Review before executing actions
- Quality control: Verify AI-generated content
- Error correction: Fix mistakes before proceeding
- Compliance: Ensure regulatory requirements are met
## Requirements
Human-in-the-loop requires a checkpointer to save state while waiting for human input.
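To see why persistence matters, here is a rough, hypothetical sketch of the checkpointing idea (not LangGraph's actual implementation): state is saved under a thread id at the pause point and reloaded when the human responds, possibly from a different process.

```python
# Hypothetical illustration only -- LangGraph's checkpointers
# (e.g. MemorySaver) handle this for you.
checkpoints: dict[str, dict] = {}

def pause_for_human(thread_id: str, state: dict) -> None:
    # Persist state at the pause point so the run can resume later.
    checkpoints[thread_id] = state

def resume_with_input(thread_id: str, human_input: dict) -> dict:
    # Reload the saved state and merge in the human's response.
    state = checkpoints.pop(thread_id)
    return {**state, **human_input}

pause_for_human("1", {"draft": "Hello", "status": "pending"})
resumed = resume_with_input("1", {"approved": True})
# resumed == {"draft": "Hello", "status": "pending", "approved": True}
```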
## Using Interrupts

The `interrupt()` function pauses execution and waits for human input.

### Basic Interrupt
Python:

```python
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver

def human_review(state: State):
    # Pause and wait for approval
    approval = interrupt({"message": "Review this draft", "draft": state["draft"]})
    if approval["approved"]:
        return {"status": "approved"}
    else:
        return {"status": "rejected"}

# Compile with checkpointer
app = builder.compile(checkpointer=MemorySaver())

# First invocation hits interrupt
config = {"configurable": {"thread_id": "1"}}
result = app.invoke(inputs, config)
print(result["__interrupt__"])  # Shows interrupt data

# Resume with human input
result = app.invoke(
    Command(resume={"approved": True}),
    config
)
```

JavaScript:
```javascript
import { interrupt, Command, MemorySaver } from "@langchain/langgraph";

const humanReview = (state) => {
  const approval = interrupt({ message: "Review this draft", draft: state.draft });
  if (approval.approved) {
    return { status: "approved" };
  }
  return { status: "rejected" };
};

const app = builder.compile({ checkpointer: new MemorySaver() });

const config = { configurable: { thread_id: "1" } };
const result = await app.invoke(inputs, config);

// Resume
const resumed = await app.invoke(
  new Command({ resume: { approved: true } }),
  config
);
```

## Common Patterns
### Approval Before Action

Review generated content before sending:

```python
def draft_email(state: State) -> dict:
    draft = llm.invoke(f"Write email about: {state['topic']}")
    return {"draft": draft.content}

def review_email(state: State):
    response = interrupt({
        "message": "Approve email?",
        "draft": state["draft"]
    })
    if response.get("edited_draft"):
        return {"draft": response["edited_draft"]}
    return {}

def send_email(state: State) -> dict:
    email_service.send(state["draft"])
    return {"status": "sent"}

builder.add_edge("draft_email", "review_email")
builder.add_edge("review_email", "send_email")
```

### Tool Approval
Approve tool calls before execution:

```python
def approve_tools(state: State):
    tool_calls = state["messages"][-1].tool_calls
    approval = interrupt({
        "message": "Approve these tool calls?",
        "tools": [tc["name"] for tc in tool_calls]
    })
    if not approval["approved"]:
        return {"messages": [{"role": "assistant", "content": "Tool use denied"}]}
    return {}  # Continue to tool execution
```

### Edit Before Continue
Allow editing state before resuming:

```python
def generate_report(state: State):
    report = llm.invoke(state["data"])
    edited = interrupt({
        "message": "Edit report if needed",
        "report": report.content
    })
    final_report = edited.get("edited_report", report.content)
    return {"report": final_report}
```

### Multiple Interrupts
Handle multiple interrupts in sequence:

```python
def multi_review(state: State):
    # First interrupt
    draft = interrupt({"message": "Review draft", "draft": state["draft"]})
    # Second interrupt
    final = interrupt({"message": "Final approval", "draft": draft["edited_draft"]})
    return {"final_draft": final["approved_draft"]}

# Resume first interrupt
app.invoke(Command(resume={"edited_draft": "..."}), config)
# Resume second interrupt
app.invoke(Command(resume={"approved_draft": "..."}), config)
```

### Conditional Interrupts
Only interrupt when needed:

```python
def conditional_review(state: State):
    if state["confidence"] < 0.8:
        approval = interrupt({"message": "Low confidence, review?"})
        if not approval["proceed"]:
            return {"status": "rejected"}
    return {"status": "approved"}
```

### Get Interrupt Status
Check if execution is interrupted:

```python
state = app.get_state(config)

if state.next:  # Has next nodes to execute
    print("Execution paused")
    print(f"Next nodes: {state.next}")

# Check for interrupts
for task in state.tasks:
    if task.interrupts:
        print(f"Interrupt data: {task.interrupts}")
```

## Best Practices
- Always use checkpointers: Interrupts require state persistence
- Provide context: Include relevant data in interrupt payload
- Validate input: Check human responses before proceeding
- Handle rejections: Plan for “no” responses
- Set timeouts: Don’t wait forever for human input
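The "validate input" and "handle rejections" points can be sketched as a small guard applied to the value returned by `interrupt()`; `validate_approval` here is a hypothetical helper, not part of LangGraph.

```python
def validate_approval(response: object) -> bool:
    # Hypothetical guard: accept only a dict whose "approved" key is a
    # real boolean before acting on the human's response.
    return isinstance(response, dict) and isinstance(response.get("approved"), bool)

print(validate_approval({"approved": True}))   # True
print(validate_approval({"approved": "yes"}))  # False: strings are not approvals
print(validate_approval(None))                 # False: malformed payload
```

Checking the payload before branching means a malformed or missing response is treated as a rejection rather than crashing the node with a `KeyError`.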
## Next Steps
- Graph API Reference: Explore interrupt APIs
- Error Handling: Handle interrupt failures