LangGraph — Build Stateful AI Agents as Graphs
LangChain framework for building resilient, stateful AI agents as graphs. Supports cycles, branching, persistence, human-in-the-loop, and streaming. 28K+ stars.
What it is
LangGraph is the LangChain framework for building AI agents as stateful directed graphs rather than linear chains. Each node represents a computation (LLM call, tool use, decision), and edges define how state flows between them. The graph abstraction supports cycles (retry loops), conditional branching, parallel execution, and human-in-the-loop gates.
LangGraph targets developers building production-grade AI agents that need persistence, observability, and failure recovery. It is the recommended approach for agents within the LangChain ecosystem.
How it saves time or tokens
Linear chain architectures break down when agents need to loop, branch, or wait for human input. LangGraph provides these patterns as first-class primitives. State is automatically checkpointed, so crashed agents resume from the last successful node instead of restarting entirely.
Streaming support lets your UI show progress in real time as the agent works through graph nodes, reducing perceived latency.
How to use
- Install LangGraph:

  pip install langgraph
- Define state and build a graph:

  from langgraph.graph import StateGraph, START, END
  from typing import TypedDict, List

  class State(TypedDict):
      messages: List[str]

  def chatbot(state: State):
      # llm is assumed to be a LangChain chat model instantiated elsewhere
      return {'messages': [llm.invoke(state['messages'])]}

  graph = StateGraph(State)
  graph.add_node('chatbot', chatbot)
  graph.add_edge(START, 'chatbot')
  graph.add_edge('chatbot', END)
  app = graph.compile()
- Invoke the agent:

  result = app.invoke({'messages': ['Hello']})
Example
  # Multi-agent supervisor pattern
  from langgraph.graph import StateGraph, START, END

  def supervisor(state):
      # Decide which agent to call next
      if state['task'] == 'research':
          return 'researcher'
      return 'coder'

  graph = StateGraph(State)
  graph.add_node('researcher', research_fn)
  graph.add_node('coder', code_fn)
  graph.add_node('reviewer', review_fn)
  graph.add_conditional_edges(START, supervisor, {
      'researcher': 'researcher',
      'coder': 'coder'
  })
  graph.add_edge('researcher', 'reviewer')
  graph.add_edge('coder', 'reviewer')
  graph.add_edge('reviewer', END)
  app = graph.compile()
Related on TokRepo
- Multi-Agent Frameworks -- Compare LangGraph with CrewAI, AutoGen, and others
- AI Tools for Agents -- Browse agent development frameworks
Common pitfalls
- Every graph needs a path from START to END. Missing edges cause the graph to hang without error messages in some versions.
- Human-in-the-loop (interrupt_before/after) requires a persistence backend. Without persistence, interrupted state is lost.
- Streaming requires the caller to iterate over the stream object. Forgetting to consume the stream means no output appears.
Frequently Asked Questions
When should I use LangChain vs. LangGraph?
Use LangChain for building blocks: LLM wrappers, tool definitions, prompt templates. Use LangGraph to orchestrate those blocks into agents with loops, branching, or human approval. For any non-trivial agent, LangGraph is the recommended approach.
Can agents recover from crashes?
Yes. State is automatically checkpointed between nodes, with backends for PostgreSQL, SQLite, and Redis. If a node crashes, the agent resumes from the last checkpoint.
How do I add human-in-the-loop approval?
Use interrupt_before or interrupt_after on any node. Execution pauses, returns control to your application, and resumes when you provide input. This enables approval gates, edit-then-continue flows, or rejection branches.
How do I observe or debug a running agent?
Every node execution is traced to LangSmith when a LANGCHAIN_API_KEY is set. You get token usage, latency per node, tool call traces, and a timeline view. LangSmith has a free tier for small projects.
Can I use LangGraph without the rest of LangChain?
Technically yes. LangGraph depends only on langchain-core, which is lightweight. You can model any stateful graph without LangChain's full LLM or prompt abstractions, though most users pair the two.
Citations (3)
- LangGraph GitHub — LangGraph models agents as stateful directed graphs
- LangChain Docs — LangChain recommends LangGraph for agent development since 2024
- LangGraph Persistence Docs — Persistence backends: PostgreSQL, SQLite, Redis
Source & Thanks
Created by LangChain. Licensed under MIT. langchain-ai/langgraph — 28,000+ GitHub stars
Related Assets
Flax — Neural Network Library for JAX
A high-performance neural network library built on JAX, providing a flexible module system used extensively across Google DeepMind and the JAX research community.
PyCaret — Low-Code Machine Learning in Python
An open-source AutoML library that wraps scikit-learn, XGBoost, LightGBM, CatBoost, and other ML libraries into a unified low-code interface for rapid experimentation.
DGL — Deep Graph Library for Scalable Graph Neural Networks
A high-performance framework for building graph neural networks on top of PyTorch, TensorFlow, or MXNet, designed for both research prototyping and production-scale graph learning.