
Last Updated: 3/11/2026


Building Graphs

Learn how to design and build LangGraph workflows using the Graph API.

The Graph API

The Graph API is LangGraph’s primary interface for building agent workflows. It gives you explicit control over nodes, edges, and state.

When to Use Graph API vs Functional API

Use Graph API when:

  • You want explicit control over graph structure
  • Your workflow has complex branching or parallel execution
  • You need to visualize the graph
  • You’re building multi-agent systems

Use Functional API when:

  • Your workflow is primarily sequential
  • You prefer writing standard Python/JavaScript control flow
  • You don’t need graph visualization

Designing Your Workflow

Before writing code, map out your workflow:

  1. Identify discrete steps: What are the distinct operations?
  2. Define state: What data needs to persist between steps?
  3. Determine routing: Which steps always follow each other? Which depend on conditions?
  4. Plan for errors: How should each step handle failures?

Example: Customer Support Email Agent

Let’s design an agent that processes customer support emails:

Steps:

  1. Read email
  2. Classify intent (question, bug, billing, feature request)
  3. Route based on classification:
    • Question → Search docs → Draft reply
    • Bug → Create ticket → Draft reply
    • Billing/Complex → Human review
  4. Send reply (if approved)

State:

```python
from typing_extensions import TypedDict

class State(TypedDict):
    email_content: str
    sender: str
    classification: dict | None
    search_results: list[str] | None
    draft_response: str | None
```

State Management Patterns

Keep State Raw

Store raw data in state, not formatted strings:

```python
# Good: raw data
state = {
    "search_results": ["doc1 content", "doc2 content"],
    "classification": {"intent": "question", "urgency": "low"},
}

# Bad: pre-formatted strings
state = {
    "search_results": "Results:\n- doc1 content\n- doc2 content",
    "classification": "Intent: question, Urgency: low",
}
```

Format data on-demand inside nodes when building prompts.
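For example, a node can render the raw state into a prompt at the point of use. This is a plain-Python sketch (no LangGraph imports); the prompt wording is illustrative:

```python
def build_prompt(state: dict) -> str:
    """Format raw state fields into a prompt string on demand."""
    results = "\n".join(f"- {doc}" for doc in state["search_results"])
    intent = state["classification"]["intent"]
    return f"The user asked a {intent}. Relevant docs:\n{results}\n\nDraft a reply."

state = {
    "search_results": ["doc1 content", "doc2 content"],
    "classification": {"intent": "question", "urgency": "low"},
}
prompt = build_prompt(state)
print(prompt)
```

Because the state stays raw, other nodes can format the same data differently for their own prompts.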

Use Reducers for Accumulation

When you need to accumulate values (not overwrite), use reducers:

Python:

```python
from typing import Annotated
from typing_extensions import TypedDict
import operator

class State(TypedDict):
    # Accumulates messages
    messages: Annotated[list, operator.add]
    # Overwrites count
    count: int
```

JavaScript:

```javascript
import { StateSchema, ReducedValue } from "@langchain/langgraph";
import { z } from "zod";

const State = new StateSchema({
  messages: new ReducedValue(
    z.array(z.any()).default(() => []),
    { reducer: (x, y) => x.concat(y) },
  ),
  count: z.number(),
});
```
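Under the hood, a reducer is just a binary function that combines the existing value with a node's update; keys without a reducer are overwritten. This plain-Python sketch illustrates that merge behavior (it is an illustration of the concept, not LangGraph's actual implementation):

```python
import operator

def apply_update(current: dict, update: dict, reducers: dict) -> dict:
    """Merge a node's partial update into state, using a reducer when one exists."""
    merged = dict(current)
    for key, value in update.items():
        if key in reducers and key in merged:
            merged[key] = reducers[key](merged[key], value)  # accumulate
        else:
            merged[key] = value  # overwrite
    return merged

reducers = {"messages": operator.add}
state = {"messages": ["hi"], "count": 1}
state = apply_update(state, {"messages": ["hello!"], "count": 2}, reducers)
print(state)  # messages accumulated, count overwritten
```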

Multiple State Schemas

Use different schemas for graph input, output, and internal state:

Python:

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph

class InputState(TypedDict):
    user_input: str

class OutputState(TypedDict):
    response: str

class InternalState(TypedDict):
    user_input: str
    response: str
    intermediate_data: dict  # Not exposed to input/output

builder = StateGraph(
    InternalState,
    input_schema=InputState,
    output_schema=OutputState,
)
```
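Conceptually, the output schema acts as a filter: only keys it declares appear in the final result, so internal scratch data never leaks to the caller. A plain-Python sketch of that filtering (an illustration, not LangGraph's actual implementation):

```python
from typing_extensions import TypedDict

class OutputState(TypedDict):
    response: str

internal_state = {
    "user_input": "hello",
    "response": "hi there",
    "intermediate_data": {"tokens": 3},
}

# Only keys declared in the output schema are returned to the caller
output = {k: v for k, v in internal_state.items() if k in OutputState.__annotations__}
print(output)
```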

Node Patterns

LLM Nodes

Nodes that call language models:

```python
from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-4o-mini")

def classify_email(state: State) -> dict:
    prompt = f"Classify this email: {state['email_content']}"
    response = model.invoke([{"role": "user", "content": prompt}])
    return {"classification": response.content}
```

Data Retrieval Nodes

Nodes that fetch external data:

```python
def search_docs(state: State) -> dict:
    query = state["classification"]["topic"]
    results = vector_store.similarity_search(query, k=3)
    return {"search_results": [doc.page_content for doc in results]}
```

Action Nodes

Nodes that perform external actions:

```python
def send_email(state: State) -> dict:
    email_service.send(
        to=state["sender"],
        body=state["draft_response"],
    )
    return {}
```

Routing Patterns

Static Routing

Use add_edge for deterministic flow:

```python
builder.add_edge("read_email", "classify_email")
builder.add_edge("search_docs", "draft_response")
```

Conditional Routing

Use add_conditional_edges for dynamic routing:

```python
from typing import Literal

def route_by_intent(state: State) -> Literal["search_docs", "create_ticket", "human_review"]:
    intent = state["classification"]["intent"]
    if intent == "question":
        return "search_docs"
    elif intent == "bug":
        return "create_ticket"
    else:
        return "human_review"

builder.add_conditional_edges(
    "classify_email",
    route_by_intent,
    ["search_docs", "create_ticket", "human_review"],
)
```
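Because the routing function is plain Python, it is easy to unit-test in isolation before wiring it into the graph. This sketch exercises each branch with minimal fake states:

```python
from typing import Literal

def route_by_intent(state: dict) -> Literal["search_docs", "create_ticket", "human_review"]:
    intent = state["classification"]["intent"]
    if intent == "question":
        return "search_docs"
    elif intent == "bug":
        return "create_ticket"
    else:
        return "human_review"

# Exercise each branch with minimal fake states
assert route_by_intent({"classification": {"intent": "question"}}) == "search_docs"
assert route_by_intent({"classification": {"intent": "bug"}}) == "create_ticket"
assert route_by_intent({"classification": {"intent": "billing"}}) == "human_review"
print("all branches routed correctly")
```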

Command-Based Routing

Combine state updates with routing using Command:

Python:

```python
from langgraph.types import Command
from typing import Literal

def classify_email(state: State) -> Command[Literal["search_docs", "human_review"]]:
    # Classify the email
    classification = llm.invoke(state["email_content"])

    # Determine the next node
    if classification["urgency"] == "high":
        next_node = "human_review"
    else:
        next_node = "search_docs"

    # Return both the state update and the routing decision
    return Command(
        update={"classification": classification},
        goto=next_node,
    )
```

Error Handling

Transient Errors (Automatic Retry)

Add retry policies for network failures:

Python:

```python
from langgraph.types import RetryPolicy

builder.add_node(
    "search_docs",
    search_docs,
    retry_policy=RetryPolicy(max_attempts=3, initial_interval=1.0),
)
```
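Retry policies typically wait longer before each successive attempt. The sketch below computes an exponential backoff schedule, assuming a backoff factor of 2.0 (check `RetryPolicy`'s defaults in your LangGraph version, as they may differ):

```python
def backoff_intervals(max_attempts: int, initial_interval: float, backoff_factor: float = 2.0) -> list[float]:
    """Wait times (in seconds) before each retry under exponential backoff."""
    return [initial_interval * backoff_factor**i for i in range(max_attempts - 1)]

print(backoff_intervals(max_attempts=3, initial_interval=1.0))  # [1.0, 2.0]
```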

LLM-Recoverable Errors

Let the LLM see errors and try again:

```python
def execute_tool(state: State):
    try:
        result = run_tool(state["tool_call"])
        return {"tool_result": result}
    except ToolError as e:
        # The LLM will see the error and can adjust
        return {"tool_result": f"Error: {str(e)}"}
```

User-Fixable Errors

Pause for user input:

```python
from langgraph.types import interrupt

def lookup_customer(state: State):
    if not state.get("customer_id"):
        user_input = interrupt({"message": "Need customer ID"})
        return {"customer_id": user_input["customer_id"]}
    # Continue with the lookup
    return {"customer_data": fetch_customer(state["customer_id"])}
```

Complete Example

Here’s a complete email agent:

```python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict
from typing import Literal

class State(TypedDict):
    email_content: str
    sender: str
    classification: dict | None
    search_results: list[str] | None
    draft_response: str | None

def classify_email(state: State) -> dict:
    # Call an LLM to classify (stubbed here)
    classification = {"intent": "question", "urgency": "low"}
    return {"classification": classification}

def search_docs(state: State) -> dict:
    # Search documentation (stubbed here)
    results = ["How to reset password: Go to Settings > Security"]
    return {"search_results": results}

def draft_response(state: State) -> dict:
    # Generate a response
    response = f"Based on our docs: {state['search_results'][0]}"
    return {"draft_response": response}

def route_by_intent(state: State) -> Literal["search_docs", END]:
    if state["classification"]["intent"] == "question":
        return "search_docs"
    return END

# Build the graph
builder = StateGraph(State)
builder.add_node("classify_email", classify_email)
builder.add_node("search_docs", search_docs)
builder.add_node("draft_response", draft_response)
builder.add_edge(START, "classify_email")
builder.add_conditional_edges("classify_email", route_by_intent, ["search_docs", END])
builder.add_edge("search_docs", "draft_response")
builder.add_edge("draft_response", END)

app = builder.compile()

# Run
result = app.invoke({
    "email_content": "How do I reset my password?",
    "sender": "user@example.com",
})
print(result["draft_response"])
```

Best Practices

  1. Keep nodes focused: Each node should do one thing well
  2. Store raw data: Format on-demand in prompts
  3. Use reducers wisely: Accumulate messages, overwrite scalars
  4. Handle errors explicitly: Retry transient errors, expose recoverable errors to LLM
  5. Design before coding: Map your workflow on paper first

Next Steps

  • Persistence & Memory: Add checkpoints and memory stores
  • Streaming: Stream real-time progress to users
  • Human-in-the-Loop: Add approval workflows with interrupts
  • Graph API Reference: Explore the complete API