LangGraph

Also known as: LangChain LangGraph, LangGraph framework, LangGraph runtime

LangGraph is a graph-based runtime from LangChain for building stateful AI agents and multi-agent systems. Each step is a node, control flow is an edge, and state, persistent memory, human-in-the-loop checkpoints, and multi-agent coordination patterns like supervisor and swarm are built into the runtime.

What It Is

Most early agent frameworks treated an LLM call as a one-shot pipeline: prompt in, answer out. That works for a chatbot but breaks the moment you need a real agent — one that remembers what happened three turns ago, pauses for a human to approve a refund, retries a failed tool call, or hands off to another specialist agent. LangGraph exists to make those patterns first-class, so teams stop rebuilding the same state plumbing in every project.

The mental model is a directed graph. According to LangChain Docs, the core abstractions are nodes, edges, state, a checkpointer, a long-term store, and interrupts. A node is a function or agent that does work — usually an LLM call, a tool call, or a routing decision. An edge moves the graph from one node to the next, either unconditionally or based on the current state. State is a shared object that every node reads from and writes to, so the graph carries context across steps without stuffing everything into the prompt.
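The node/edge/state mental model can be sketched in plain Python without the library. This is not the LangGraph API, just the shape it formalizes: named node functions over a shared state dict, plus conditional edges that pick the next node.

```python
# A minimal sketch of the node/edge/state mental model in plain Python.
# Illustrative only -- not the LangGraph API.
from typing import Callable

State = dict  # the shared object every node reads from and writes to

def classify(state: State) -> State:
    # Node: a routing decision written into state.
    state["route"] = "billing" if "refund" in state["input"] else "general"
    return state

def billing(state: State) -> State:
    state["answer"] = "Routed to billing specialist."
    return state

def general(state: State) -> State:
    state["answer"] = "Handled by general support."
    return state

nodes: dict[str, Callable[[State], State]] = {
    "classify": classify, "billing": billing, "general": general,
}
# Edges: for each node, a rule that inspects state and names the next node.
edges = {
    "classify": lambda s: s["route"],   # conditional edge
    "billing": lambda s: None,          # terminal
    "general": lambda s: None,
}

def run(entry: str, state: State) -> State:
    node = entry
    while node is not None:
        state = nodes[node](state)
        node = edges[node](state)
    return state

result = run("classify", {"input": "please refund my order"})
print(result["answer"])  # Routed to billing specialist.
```

In LangGraph proper, `StateGraph` plays the role of the `nodes`/`edges` dicts and the `run` loop, with a typed state schema instead of a bare dict.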

The checkpointer snapshots state after every node, which enables three of LangGraph’s most useful features: persistence (resume an agent run hours later), human-in-the-loop interrupts (pause until a person approves the next action), and time-travel debugging (rewind to an earlier checkpoint and try a different path). The store holds memory that survives across separate runs — for example, what a user told the agent last week.
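What the checkpointer adds can be shown with a toy runner that snapshots state after every node, keyed by step. This is a sketch of the idea, not LangGraph's checkpointer interface; real checkpointers persist snapshots to a backing store such as SQLite or Postgres.

```python
# Sketch of checkpointing: snapshot state after every node so a run can
# resume later or rewind to an earlier step. Illustrative only.
import copy

def run_with_checkpoints(steps, state, checkpoints=None, start_step=0):
    checkpoints = checkpoints if checkpoints is not None else {}
    for step in range(start_step, len(steps)):
        state = steps[step](state)
        checkpoints[step] = copy.deepcopy(state)  # snapshot after each node
    return state, checkpoints

steps = [
    lambda s: {**s, "draft": s["topic"] + ": draft"},   # node 0: write draft
    lambda s: {**s, "reviewed": True},                  # node 1: review
]

final, cps = run_with_checkpoints(steps, {"topic": "refund policy"})

# Time-travel: restore the snapshot taken after step 0 and replay from
# step 1 down a different path (e.g. a human rejected the review).
rewound = cps[0]
```

Persistence and human-in-the-loop interrupts fall out of the same mechanism: pausing is just stopping after a checkpoint, and resuming is calling the runner again with `start_step` pointing past it.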

For multi-agent setups, LangGraph offers two layered patterns. According to LangGraph Supervisor Docs, langgraph-supervisor implements a hierarchical pattern where one supervisor agent routes work to specialist sub-agents, and langgraph-swarm implements a peer handoff pattern where any agent can pass control to any other. Both are built on the same create_handoff_tool primitive, so teams can mix the two styles inside one application.
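The shared handoff primitive can be sketched as a tool whose only effect is reassigning the active agent in state. The names below are illustrative, not the `langgraph-supervisor` or `langgraph-swarm` API; the point is that the two patterns differ only in who holds handoff tools to whom.

```python
# Sketch of the handoff primitive both patterns share: a "tool" whose
# effect is transferring control to another agent. Illustrative names only.
def make_handoff_tool(target: str):
    def handoff(state: dict) -> dict:
        return {**state, "active_agent": target}
    return handoff

# Supervisor (hierarchical): only the manager holds handoff tools.
supervisor_tools = {
    "to_billing": make_handoff_tool("billing"),
    "to_research": make_handoff_tool("research"),
}

# Swarm (peer handoff): each specialist holds tools to its peers.
billing_tools = {"to_research": make_handoff_tool("research")}
research_tools = {"to_billing": make_handoff_tool("billing")}

state = {"active_agent": "supervisor", "task": "refund inquiry"}
state = supervisor_tools["to_billing"](state)  # control moves to billing
```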

How It’s Used in Practice

The mainstream use case is wrapping a multi-step AI workflow that a single prompt cannot reliably handle: a research agent that searches, reads, summarizes, and asks a human before publishing; a customer support agent that escalates to a billing specialist for refunds; a coding agent that plans, edits, runs tests, and retries on failure. Each step becomes a node, edges connect them, and LangGraph handles state, retries, and checkpointing.

The other common entry point is multi-agent coordination. Teams use the supervisor pattern when one “manager” agent should decide which specialist gets the next task, and the swarm pattern when specialists should hand off directly to each other based on context — which is exactly the architecture choice covered in the article above.

Pro Tip: According to LangChain Docs, the current recommendation is to build the supervisor directly with tool calls rather than reaching for langgraph-supervisor first. The library is convenient, but writing the supervisor as a plain agent with handoff tools gives tighter control over the prompt and context each sub-agent sees — which is usually what determines whether a multi-agent system actually works in production.
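The "build the supervisor directly" advice amounts to a plain agent loop: the supervisor picks a specialist via a tool-call-style decision, then constructs exactly the context that specialist sees. In the sketch below, `decide` is a stub standing in for the supervisor's LLM call; everything else is hypothetical scaffolding, not a library API.

```python
# Sketch of a hand-rolled supervisor: a plain agent loop with explicit
# control over the context each sub-agent receives. `decide` stands in
# for the supervisor LLM choosing a tool.
def decide(task: str) -> str:
    # In production this is an LLM tool-choice call.
    return "billing" if "refund" in task else "research"

def billing_agent(context: str) -> str:
    return f"billing handled: {context}"

def research_agent(context: str) -> str:
    return f"research handled: {context}"

specialists = {"billing": billing_agent, "research": research_agent}

def supervise(task: str) -> str:
    choice = decide(task)
    # The payoff of writing this by hand: the supervisor controls the
    # exact context the sub-agent sees, not the full chat history.
    context = f"[task only, no chat history] {task}"
    return specialists[choice](context)

print(supervise("customer wants a refund"))
```

The trade-off versus `langgraph-supervisor` is convenience against control: the library wires the routing for you, while the hand-rolled loop makes context shaping explicit.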

When to Use / When Not

Scenario | Use | Avoid
Agent needs to pause for human approval mid-run | ✅ |
Single-prompt classification or summarization | | ✅
Multi-agent system with supervisor or swarm coordination | ✅ |
Quick prototype where you want zero state machinery | | ✅
Long-running workflow that must resume after a crash | ✅ |
You want a vendor SDK with no LangChain ties at all | | ✅

Common Misconception

Myth: LangGraph is just LangChain with a new name, or a chatbot framework. Reality: LangGraph is a separate runtime focused on stateful, graph-shaped agent execution. LangChain (the library) gives you LLM and tool wrappers; LangGraph gives you the orchestration layer with checkpoints, interrupts, and multi-agent primitives. Teams routinely adopt LangGraph without using the rest of LangChain.

One Sentence to Remember

If your AI workflow has more than one step, needs to remember context, or might pause for a human, LangGraph gives you the graph runtime to express it without hand-rolling state machines, retries, and checkpoints from scratch.

FAQ

Q: Is LangGraph the same as LangChain? A: No. LangChain is a library of LLM and tool integrations. LangGraph is a separate runtime for stateful, graph-based agents. They share an organization and integrate cleanly, but either can be adopted on its own.

Q: What is the difference between LangGraph supervisor and swarm? A: Supervisor is hierarchical — one manager agent routes tasks to specialists. Swarm is peer-to-peer — any specialist can hand off to any other based on context. Both use the same handoff-tool primitive underneath.

Q: Do I need LangGraph for a single-agent app? A: Not always. If your agent is one prompt with tool calls and no persistence needs, a thinner SDK is fine. Reach for LangGraph when state, checkpoints, or human approvals are part of the requirement.

Expert Takes

A graph runtime is not magic — it is an explicit state machine wearing friendly names. Nodes are pure-ish functions over a shared state object, edges are routing rules, and the checkpointer is a serializer with a key. Calling that an “agent framework” is marketing; calling it a deterministic harness around a non-deterministic model is closer to what is actually happening, and far more useful for reasoning about whether the system is correct.

The interesting question with LangGraph is not which library to import — it is what gets written into the spec before any graph code exists. Define the nodes, the state schema, the handoff conditions, and the points where a human must approve. Once those are explicit, the LangGraph code becomes a near-mechanical translation. Skip that step and the graph turns into a maze nobody, including the AI itself, can safely edit later.

Agent runtimes are consolidating fast, and LangGraph has the distribution most rivals do not. Every team building a serious multi-agent product is evaluating it against vendor SDKs, and the pattern is clear: pick the runtime that lets you swap models, add humans in the loop, and ship to production without rewriting state plumbing every quarter. That is the bet here, and the bet is working.

A graph hides a lot. Each edge is a small policy decision: when does the agent escalate, when does it retry, when does it ask a human, when does it just act? Most teams encode those decisions in code reviews nobody reads carefully. If the agent can spend money, send messages, or change records, the question is not whether LangGraph is powerful enough — it is whether anyone outside engineering understands what the graph is allowed to do.