Workflows · Apr 8, 2026 · 2 min read

LangGraph — Build Stateful AI Agent Workflows

Framework for building stateful, multi-step AI agent workflows as graphs. LangGraph enables cycles, branching, human-in-the-loop, and persistent state for complex agent systems.

What is LangGraph?

LangGraph is a framework by LangChain for building stateful, multi-step AI agent workflows as directed graphs. Unlike simple chains, LangGraph supports cycles (agent loops), conditional branching, human-in-the-loop approval, and persistent state across steps. It is the go-to framework for complex agent orchestration that goes beyond linear pipelines.

Answer-Ready: LangGraph builds stateful AI agent workflows as graphs. Supports cycles, branching, human-in-the-loop, and persistent state. By LangChain team. Used for multi-step agents, tool-calling loops, and approval workflows. LangGraph Cloud for managed deployment. 8k+ GitHub stars.

Best for: Teams building complex multi-step agent workflows. Works with: OpenAI, Claude, any LLM via LangChain. Setup time: Under 5 minutes.

Core Concepts

1. State

from typing import TypedDict

class AgentState(TypedDict):
    messages: list      # Conversation history
    context: str        # Retrieved context
    plan: list          # Agent's plan
    iteration: int      # Loop counter

2. Nodes (Steps)

Each node is a function that transforms state:

def analyze(state: AgentState) -> dict:
    # LLM call, tool use, or pure logic produces `analysis`
    # Return only the keys you changed; LangGraph merges the partial update into state
    return {"messages": state["messages"] + [analysis]}

3. Edges (Transitions)

from langgraph.graph import END

# Unconditional
graph.add_edge("research", "write")

# Conditional branching: the router function returns a key,
# and the mapping translates that key into the next node
def should_continue(state):
    if state["iteration"] > 3:
        return "end"
    return "retry"

graph.add_conditional_edges("check", should_continue, {"retry": "research", "end": END})

4. Cycles (Agent Loops)

# Agent keeps iterating until satisfied
graph.add_edge("act", "observe")
graph.add_conditional_edges("observe", check_done, {"continue": "act", "done": END})
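The execution model these four pieces describe can be illustrated without the library itself. The sketch below is a hand-rolled simulation of the act → observe cycle, not the LangGraph API: state is a plain dict, nodes are functions, and a conditional edge routes back to "act" until `check_done` returns "done".

```python
# Hand-rolled sketch of the graph execution model (illustrative,
# not LangGraph's actual implementation).

END = "__end__"

def act(state):
    # Stand-in for a tool call: bump the loop counter.
    return {**state, "iteration": state["iteration"] + 1}

def observe(state):
    # Inspect results; pure pass-through in this sketch.
    return state

def check_done(state):
    return "done" if state["iteration"] >= 3 else "continue"

nodes = {"act": act, "observe": observe}
edges = {"act": "observe"}  # unconditional edge
branches = {"observe": (check_done, {"continue": "act", "done": END})}

def run(entry, state):
    node = entry
    while node != END:
        state = nodes[node](state)
        if node in branches:
            router, mapping = branches[node]
            node = mapping[router(state)]
        else:
            node = edges[node]
    return state

final = run("act", {"iteration": 0})
print(final["iteration"])  # 3
```

The loop terminates because `check_done` eventually maps to `END`; in a real graph, an iteration cap like this guards against runaway agent loops.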

5. Human-in-the-Loop

from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = graph.compile(checkpointer=checkpointer, interrupt_before=["deploy"])

# Runs until "deploy" node, then pauses for human approval
result = app.invoke(state, config={"configurable": {"thread_id": "1"}})
# Human reviews...
app.invoke(None, config={"configurable": {"thread_id": "1"}})  # Resume
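The pause/resume behavior can also be sketched conceptually. The snippet below is a toy model of interrupt-before semantics, assuming a hypothetical two-step draft → deploy pipeline: a checkpoint store keyed by `thread_id` saves state when execution reaches an interrupt node, and a later call with the same `thread_id` picks up from there. It mimics the behavior above, not LangGraph's internals.

```python
# Toy model of interrupt_before + checkpointing (illustrative only).

END = "__end__"
INTERRUPT_BEFORE = {"deploy"}
checkpoints = {}  # thread_id -> (next_node, saved_state)

# node -> (successor, transform); hypothetical two-step pipeline
steps = {
    "draft": ("deploy", lambda s: {**s, "draft": "v1"}),
    "deploy": (END, lambda s: {**s, "deployed": True}),
}

def invoke(entry, state, thread_id):
    resuming = state is None
    if resuming:  # pick up where the paused run left off
        entry, state = checkpoints.pop(thread_id)
    node = entry
    while node != END:
        if node in INTERRUPT_BEFORE and not resuming:
            checkpoints[thread_id] = (node, state)  # pause for human review
            return state
        resuming = False
        next_node, fn = steps[node]
        state = fn(state)
        node = next_node
    return state

paused = invoke("draft", {}, thread_id="1")  # stops before "deploy"
done = invoke(None, None, thread_id="1")     # human approved; resumes
```

Keying checkpoints by `thread_id` is what lets a separate process (or a human, hours later) resume the exact run that was paused.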

Common Patterns

| Pattern | Graph shape |
| --- | --- |
| ReAct Agent | think → act → observe → (loop) |
| Research + Write | research → outline → write → review |
| Approval Workflow | draft → review → [approve/reject] → publish |
| Multi-Agent | coordinator → [agent_a, agent_b] → merge |
| Retry with Feedback | execute → validate → [pass/fail → retry] |
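To make one row concrete, here is the "Retry with Feedback" shape sketched with plain functions (illustrative only; in real LangGraph this would be a `StateGraph` with `add_conditional_edges`). The task, feedback string, and success condition are all hypothetical: execution "succeeds" once two rounds of feedback have accumulated.

```python
# "Retry with Feedback" pattern as a plain-Python sketch (not LangGraph API).

MAX_RETRIES = 3

def execute(state):
    # Hypothetical task: succeeds once enough feedback has accumulated.
    ok = len(state["feedback"]) >= 2
    return {**state, "result": "good" if ok else "bad"}

def validate(state):
    if state["result"] == "good":
        return state, "pass"
    # Failed validation feeds a correction back into state for the retry.
    return {**state, "feedback": state["feedback"] + ["fix the output"]}, "fail"

state = {"feedback": [], "result": None}
for attempt in range(MAX_RETRIES):
    state = execute(state)
    state, verdict = validate(state)
    if verdict == "pass":
        break

print(verdict, len(state["feedback"]))  # pass 2
```

The key design point is that the validator writes its feedback into state, so each retry sees why the previous attempt failed rather than blindly re-running.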

LangGraph vs Alternatives

| Feature | LangGraph | CrewAI | AutoGen |
| --- | --- | --- | --- |
| Graph-based | Yes | No (sequential/parallel) | No |
| Cycles | Yes | Limited | Yes |
| Human-in-loop | Built-in | No | Yes |
| State persistence | Built-in | No | Limited |
| Managed cloud | LangGraph Cloud | No | No |

FAQ

Q: Do I need LangChain to use LangGraph? A: No, LangGraph is a standalone library. It integrates well with LangChain but doesn't require it.

Q: Can I use Claude with LangGraph? A: Yes, use ChatAnthropic from langchain-anthropic as the LLM in your nodes.

Q: What is LangGraph Cloud? A: Managed hosting for LangGraph applications with API endpoints, monitoring, and auto-scaling.

Source and acknowledgments

Created by LangChain. Licensed under MIT.

langchain-ai/langgraph — 8k+ stars
