Scripts · Mar 30, 2026 · 2 min read

LangGraph — Build Stateful AI Agents as Graphs

The LangChain framework for building resilient, stateful AI agents as graphs. It supports cycles, branching, persistence, human-in-the-loop, and streaming. 28K+ GitHub stars.

TL;DR
LangGraph models AI agents as stateful graphs with cycles, branching, persistence, and human-in-the-loop for production reliability.
§01

What it is

LangGraph is the LangChain framework for building AI agents as stateful directed graphs rather than linear chains. Each node represents a computation (LLM call, tool use, decision), and edges define how state flows between them. The graph abstraction supports cycles (retry loops), conditional branching, parallel execution, and human-in-the-loop gates.

LangGraph targets developers building production-grade AI agents that need persistence, observability, and failure recovery. It is the recommended approach for agents within the LangChain ecosystem.

§02

How it saves time or tokens

Linear chain architectures break down when agents need to loop, branch, or wait for human input. LangGraph provides these patterns as first-class primitives. State is automatically checkpointed, so crashed agents resume from the last successful node instead of restarting entirely.

Streaming support lets your UI show progress in real time as the agent works through graph nodes, reducing perceived latency.

§03

How to use

  1. Install LangGraph:
pip install langgraph
  2. Define state and build a graph:
from langgraph.graph import StateGraph, START, END
from typing import TypedDict, List

class State(TypedDict):
    messages: List[str]

def chatbot(state: State):
    # `llm` is assumed to be a chat model client defined elsewhere
    return {'messages': [llm.invoke(state['messages'])]}

graph = StateGraph(State)
graph.add_node('chatbot', chatbot)
graph.add_edge(START, 'chatbot')
graph.add_edge('chatbot', END)
app = graph.compile()
  3. Invoke the agent:
result = app.invoke({'messages': ['Hello']})
§04

Example

# Multi-agent supervisor pattern
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    task: str
    output: str

# Stand-ins for real agent nodes (each would wrap an LLM or tool call)
def research_fn(state): ...
def code_fn(state): ...
def review_fn(state): ...

def supervisor(state):
    # Decide which agent to call next
    if state['task'] == 'research':
        return 'researcher'
    return 'coder'

graph = StateGraph(State)
graph.add_node('researcher', research_fn)
graph.add_node('coder', code_fn)
graph.add_node('reviewer', review_fn)
graph.add_conditional_edges(START, supervisor, {
    'researcher': 'researcher',
    'coder': 'coder'
})
graph.add_edge('researcher', 'reviewer')
graph.add_edge('coder', 'reviewer')
graph.add_edge('reviewer', END)
app = graph.compile()
§05


Common pitfalls

  • Every graph needs a path from START to END. Missing edges cause the graph to hang without error messages in some versions.
  • Human-in-the-loop (interrupt_before/after) requires a persistence backend. Without persistence, interrupted state is lost.
  • Streaming requires the caller to iterate over the stream object. Forgetting to consume the stream means no output appears.

Frequently Asked Questions

When should I use LangGraph vs LangChain?

Use LangChain for building blocks: LLM wrappers, tool definitions, prompt templates. Use LangGraph to orchestrate those blocks into agents with loops, branching, or human approval. For any non-trivial agent, LangGraph is the recommended approach.

Does LangGraph support persistence across restarts?

Yes. State is automatically checkpointed between nodes with backends for PostgreSQL, SQLite, and Redis. If a node crashes, the agent resumes from the last checkpoint.

How does human-in-the-loop work?

Use interrupt_before or interrupt_after on any node. Execution pauses, returns control to your application, and resumes when you provide input. This enables approval gates, edit-then-continue flows, or rejection branches.

What observability does LangGraph provide?

Every node execution is traced to LangSmith when a LANGCHAIN_API_KEY is set. You get token usage, latency per node, tool call traces, and a timeline view. LangSmith has a free tier for small projects.

Can LangGraph run without LangChain?

Technically yes. LangGraph depends only on langchain-core, which is lightweight. You can model any stateful graph without LangChain's full LLM or prompt abstractions, though most users pair the two.


Source & Thanks

Created by LangChain. Licensed under MIT. langchain-ai/langgraph — 28,000+ GitHub stars
