
Last Updated: 3/11/2026


Quickstart: Build a Calculator Agent

This tutorial shows you how to build a calculator agent that can perform arithmetic using tools. You’ll learn the basics of LangGraph’s Graph API by creating an agent that decides when to use tools and when to respond directly.

Prerequisites

  • LangGraph installed (pip install langgraph or npm install @langchain/langgraph)
  • An Anthropic API key (sign up at anthropic.com)
  • The ANTHROPIC_API_KEY environment variable set to your key

What You’ll Build

A calculator agent that:

  1. Receives a math question from the user
  2. Decides whether to use tools (add, multiply, divide)
  3. Calls the appropriate tools
  4. Returns the final answer
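The four steps above amount to a simple loop: call the model, run any tools it asks for, and repeat until it answers directly. Here is a hand-rolled sketch of that loop, where `Reply`, `call_model`, and `run_tools` are hypothetical stand-ins (not LangChain APIs) used purely to show the control flow the graph will implement:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a model reply; not a LangChain class.
@dataclass
class Reply:
    content: str
    tool_calls: list = field(default_factory=list)

def run_agent(question, call_model, run_tools):
    """The loop the graph in this tutorial implements."""
    messages = [question]
    while True:
        reply = call_model(messages)       # step 2: model decides
        messages.append(reply)
        if not reply.tool_calls:           # step 4: final answer
            return reply.content
        messages.extend(run_tools(reply))  # step 3: execute tools

# Scripted "model": first requests add(3, 4), then answers with the result.
def call_model(messages):
    if len(messages) == 1:
        return Reply("", tool_calls=[{"name": "add", "args": {"a": 3, "b": 4}}])
    return Reply("7")

def run_tools(reply):
    return [str(tc["args"]["a"] + tc["args"]["b"]) for tc in reply.tool_calls]

print(run_agent("Add 3 and 4.", call_model, run_tools))  # 7
```

LangGraph replaces this hand-written loop with nodes and edges, which is what makes the agent easy to checkpoint, inspect, and extend.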

Step 1: Define Tools and Model

First, define the arithmetic tools your agent can use:

Python:

```python
from langchain.tools import tool
from langchain.chat_models import init_chat_model

model = init_chat_model("claude-sonnet-4-6", temperature=0)

@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

@tool
def divide(a: int, b: int) -> float:
    """Divide two numbers."""
    return a / b

tools = [add, multiply, divide]
tools_by_name = {tool.name: tool for tool in tools}
model_with_tools = model.bind_tools(tools)
```

JavaScript:

```javascript
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const model = new ChatAnthropic({ model: "claude-sonnet-4-6", temperature: 0 });

const add = tool(
  ({ a, b }) => a + b,
  {
    name: "add",
    description: "Add two numbers",
    schema: z.object({
      a: z.number().describe("First number"),
      b: z.number().describe("Second number"),
    }),
  }
);

// Define multiply and divide similarly...

const tools = [add, multiply, divide];
// Lookup table used by the tool node in Step 4.
const toolsByName = Object.fromEntries(tools.map((t) => [t.name, t]));
const modelWithTools = model.bindTools(tools);
```

Step 2: Define State

The graph state stores messages and tracks how many times the LLM is called:

Python:

```python
from langchain.messages import AnyMessage
from typing_extensions import TypedDict, Annotated
import operator

class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]
    llm_calls: int
```

The Annotated type with operator.add tells LangGraph to concatenate new messages onto the existing list instead of replacing it.
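This works because adding two Python lists concatenates them, so each node's returned messages are appended to the history. A quick demonstration:

```python
import operator

# operator.add on two lists concatenates them — this is how the reducer
# merges a node's returned messages into the existing state.
existing = ["msg1", "msg2"]
update = ["msg3"]
merged = operator.add(existing, update)
print(merged)  # ['msg1', 'msg2', 'msg3']
```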

JavaScript:

```javascript
import { StateSchema, MessagesValue, ReducedValue } from "@langchain/langgraph";
import { z } from "zod";

const MessagesState = new StateSchema({
  messages: MessagesValue,
  llmCalls: new ReducedValue(
    z.number().default(0),
    { reducer: (x, y) => x + y }
  ),
});
```

Step 3: Define the LLM Node

This node calls the LLM to decide whether to use a tool:

Python:

```python
from langchain.messages import SystemMessage

def llm_call(state: dict):
    """LLM decides whether to call a tool or not."""
    return {
        "messages": [
            model_with_tools.invoke(
                [SystemMessage(content="You are a helpful assistant tasked with performing arithmetic.")]
                + state["messages"]
            )
        ],
        "llm_calls": state.get("llm_calls", 0) + 1,
    }
```

JavaScript:

```javascript
import { SystemMessage } from "@langchain/core/messages";

const llmCall = async (state) => {
  const response = await modelWithTools.invoke([
    new SystemMessage("You are a helpful assistant tasked with performing arithmetic."),
    ...state.messages,
  ]);
  // llmCalls uses a sum reducer, so returning 1 increments the count.
  return { messages: [response], llmCalls: 1 };
};
```

Step 4: Define the Tool Node

This node executes the tools selected by the LLM:

Python:

```python
from langchain.messages import ToolMessage

def tool_node(state: dict):
    """Execute the tool calls requested by the last AI message."""
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=str(observation), tool_call_id=tool_call["id"]))
    return {"messages": result}
```

JavaScript:

```javascript
import { AIMessage } from "@langchain/core/messages";

const toolNode = async (state) => {
  const lastMessage = state.messages.at(-1);
  if (!lastMessage || !AIMessage.isInstance(lastMessage)) {
    return { messages: [] };
  }
  const result = [];
  for (const toolCall of lastMessage.tool_calls ?? []) {
    const tool = toolsByName[toolCall.name];
    // Invoking a tool with the full tool call returns a ToolMessage.
    const observation = await tool.invoke(toolCall);
    result.push(observation);
  }
  return { messages: result };
};
```

Step 5: Define Routing Logic

This function determines whether to call tools or end:

Python:

```python
from typing import Literal
from langgraph.graph import END

def should_continue(state: MessagesState) -> Literal["tool_node", END]:
    """Route to the tool node, or end if there are no tool calls."""
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tool_node"
    return END
```

JavaScript:

```javascript
import { END } from "@langchain/langgraph";

const shouldContinue = (state) => {
  const lastMessage = state.messages.at(-1);
  if (!lastMessage || !AIMessage.isInstance(lastMessage)) {
    return END;
  }
  if (lastMessage.tool_calls?.length) {
    return "toolNode";
  }
  return END;
};
```

Step 6: Build and Run the Agent

Python:

```python
from langgraph.graph import StateGraph, START
from langchain.messages import HumanMessage

# Build the graph
agent_builder = StateGraph(MessagesState)
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("tool_node", tool_node)
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges("llm_call", should_continue, ["tool_node", END])
agent_builder.add_edge("tool_node", "llm_call")

# Compile the agent
agent = agent_builder.compile()

# Run it
result = agent.invoke({"messages": [HumanMessage(content="Add 3 and 4.")]})
for m in result["messages"]:
    print(f"{m.type}: {m.content}")
```

JavaScript:

```javascript
import { StateGraph, START, END } from "@langchain/langgraph";
import { HumanMessage } from "@langchain/core/messages";

const agent = new StateGraph(MessagesState)
  .addNode("llmCall", llmCall)
  .addNode("toolNode", toolNode)
  .addEdge(START, "llmCall")
  .addConditionalEdges("llmCall", shouldContinue, ["toolNode", END])
  .addEdge("toolNode", "llmCall")
  .compile();

const result = await agent.invoke({
  messages: [new HumanMessage("Add 3 and 4.")],
});

for (const message of result.messages) {
  console.log(`${message.type}: ${message.content}`);
}
```

How It Works

  1. User input enters the graph as a HumanMessage
  2. LLM node receives the message and decides to call the add tool
  3. Routing detects tool calls and routes to the tool node
  4. Tool node executes add(3, 4) and returns a ToolMessage with result 7
  5. LLM node receives the tool result and generates a final response
  6. Routing detects no more tool calls and ends execution
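The routing decisions in steps 3 and 6 can be hand-traced without calling a real model. The sketch below mirrors this tutorial's should_continue, but FakeMessage is a hypothetical stand-in for an AIMessage (not a LangChain class), and END is reproduced here as its string sentinel:

```python
# Hypothetical stand-in for an AIMessage, used to hand-trace routing.
class FakeMessage:
    def __init__(self, tool_calls):
        self.tool_calls = tool_calls

END = "__end__"  # sentinel mirroring langgraph.graph.END

def should_continue(state):
    last_message = state["messages"][-1]
    return "tool_node" if last_message.tool_calls else END

# Steps 2-3: the model asked for add(3, 4), so routing picks the tool node.
calling = {"messages": [FakeMessage([{"name": "add", "args": {"a": 3, "b": 4}}])]}
print(should_continue(calling))  # tool_node

# Steps 5-6: the final answer carries no tool calls, so routing ends the run.
final = {"messages": [FakeMessage([])]}
print(should_continue(final))  # __end__
```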

Next Steps

  • Core Concepts: Learn about state, nodes, edges, and reducers in depth
  • Building Graphs: Explore advanced graph patterns and state management
  • Persistence & Memory: Add checkpoints and memory to your agents
  • Human-in-the-Loop: Pause execution for human approval

Tracing with LangSmith

To debug your agent, enable LangSmith tracing:

```shell
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=your-api-key
```

LangSmith will capture every step, showing you exactly how your agent makes decisions.
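If you prefer not to use shell exports, the same variables can be set from Python before the model and agent are created (the key value below is a placeholder):

```python
import os

# Equivalent to the shell exports above; set these before constructing
# the model or invoking the agent so tracing is active from the start.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "your-api-key"  # placeholder
```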