LangChain and LangGraph are both from the same company (LangChain Inc.) and are often mentioned together. But they're designed for fundamentally different use cases. Choosing the wrong one will frustrate you — using the right one will make you significantly more productive.
The One-Line Summary
- LangChain = toolkit of composable components for building LLM pipelines
- LangGraph = framework for building stateful, cyclic agent workflows
If your agent is a straight line from input to output, use LangChain. If your agent loops, branches, or maintains complex state, use LangGraph.
LangChain: The Swiss Army Knife
LangChain was built around the idea of chains — composable sequences of operations. You combine prompts, LLMs, parsers, retrievers, and tools into pipelines using a consistent interface.
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])

chain = prompt | ChatAnthropic(model="claude-3-5-sonnet-latest") | StrOutputParser()
result = chain.invoke({"question": "What is RAG?"})
LangChain excels at:
- RAG pipelines — load documents, embed them, retrieve relevant chunks, augment a prompt
- Simple tool-using agents — ReAct-style agents with a fixed set of tools
- Prompt management — templates, few-shot examples, output parsers
- Integrations — 400+ built-in integrations (vector stores, databases, APIs)
When LangChain works great: You're building a Q&A system over your docs, a summarizer, a simple research assistant, or any pipeline where the flow is largely linear.
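The pipe (`|`) composition in the example above works because every LangChain component exposes a common `invoke` interface. A toy sketch of that pattern in plain Python (this illustrates the idea only; it is not LangChain's actual implementation, and `fake_llm` is a stand-in, not a real model call):

```python
# Toy pipe-style composition: each step exposes .invoke(), and `|` chains them.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Compose: run self first, feed its output into `other`.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

# Stand-ins for a prompt template, an LLM call, and an output parser.
prompt = Runnable(lambda d: f"Q: {d['question']}")
fake_llm = Runnable(lambda p: p.upper())  # pretend model call
parser = Runnable(lambda s: s.strip())

chain = prompt | fake_llm | parser
print(chain.invoke({"question": "What is RAG?"}))  # prints: Q: WHAT IS RAG?
```

Because every step shares one interface, swapping a component (a different model, a stricter parser) never changes the surrounding pipeline code.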
LangGraph: For When Agents Get Complex
LangGraph models your agent as a graph — nodes are operations, edges are transitions, and a central state object flows through the graph and gets updated at each step.
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
class AgentState(TypedDict):
    messages: Annotated[list, operator.add]  # appended to, not overwritten
    tool_calls: list
    final_answer: str | None

# reason_node, tool_node, and should_continue are functions over AgentState
# (definitions omitted for brevity).
graph = StateGraph(AgentState)
graph.add_node("reason", reason_node)
graph.add_node("act", tool_node)
graph.add_conditional_edges("reason", should_continue, {
    "continue": "act",
    "end": END,
})
graph.add_edge("act", "reason")
graph.set_entry_point("reason")
agent = graph.compile()
LangGraph excels at:
- Multi-step reasoning — agents that loop until a task is complete
- Human-in-the-loop — pausing for approval before taking actions
- Multi-agent systems — orchestrating multiple specialized agents
- Complex branching — different paths based on tool results or confidence scores
- Persistence — built-in support for checkpointing and resuming agent runs
When LangGraph is worth the complexity: You're building a code review bot that iterates until tests pass, a customer service agent that escalates when it's unsure, or any workflow where the number of steps isn't fixed in advance.
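The control flow the graph above encodes — reason, maybe act, reason again, repeat until done — can be sketched in plain Python. This toy executor only illustrates the pattern; it is not how LangGraph works internally, and the "done after three steps" rule is an arbitrary stand-in for a real stopping condition:

```python
# Toy graph executor: nodes update a shared state dict; a router decides
# whether to loop back through the tool node or stop.
END = "__end__"

def reason(state):
    state["steps"] += 1
    # Stand-in decision: pretend the agent finishes after three reasoning steps.
    state["done"] = state["steps"] >= 3
    return state

def act(state):
    state["tool_calls"].append(f"tool_call_{state['steps']}")
    return state

def should_continue(state):
    return END if state["done"] else "act"

nodes = {"reason": reason, "act": act}
edges = {"act": "reason"}  # unconditional edge back to the reasoning node

def run(state, entry="reason"):
    current = entry
    while current != END:
        state = nodes[current](state)
        if current == "reason":
            current = should_continue(state)  # conditional edge
        else:
            current = edges[current]          # fixed edge
    return state

final = run({"steps": 0, "done": False, "tool_calls": []})
```

The key property: the number of iterations is decided at runtime by the state, not fixed at design time — which is exactly what a linear chain cannot express.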
Code Comparison: The Same Agent, Two Frameworks
LangChain (ReAct agent)
from langchain.agents import create_react_agent, AgentExecutor
from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return web_search(query)  # web_search: your own search helper (not shown)

# llm is a chat model and prompt a ReAct prompt template (definitions omitted)
agent = create_react_agent(llm, [search], prompt)
executor = AgentExecutor(agent=agent, tools=[search], verbose=True)
result = executor.invoke({"input": "What's the latest on LLM benchmarks?"})
LangGraph (same agent, explicit state)
from langgraph.prebuilt import create_react_agent
agent = create_react_agent(llm, tools=[search])
result = agent.invoke({"messages": [("human", "What's the latest on LLM benchmarks?")]})
For a simple ReAct agent, LangGraph is actually less code. The complexity difference shows up when you start adding interrupts, custom logic, and multi-agent coordination.
Decision Framework
Use LangChain when:
- Your pipeline has a fixed number of steps
- You're doing document retrieval (RAG)
- You need fast integrations with external services
- You're prototyping and want to move fast
- The agent doesn't need to loop or branch significantly
Use LangGraph when:
- The agent needs to loop until a condition is met
- You need to pause for human approval mid-run
- You're orchestrating multiple agents
- You need built-in checkpointing and replay
- Your workflow has complex conditional branching
Use both when: LangChain handles your document loading, retrieval, and prompt management; LangGraph handles the control flow. They're designed to work together.
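One way the split can look, sketched with plain-Python stand-ins (the `retrieve` and `answer` functions here are hypothetical placeholders for a LangChain retriever and a prompt-plus-LLM call, wired the way LangGraph nodes pass state between them):

```python
# Sketch of the "use both" split: each function plays the role of one graph
# node, taking the state dict in and returning an updated copy.
def retrieve(state):
    # Stand-in for a LangChain retriever: naive substring match over a corpus.
    docs = [d for d in state["corpus"] if state["query"].lower() in d.lower()]
    return {**state, "docs": docs}

def answer(state):
    # Stand-in for a prompt + LLM call over the retrieved context.
    context = " | ".join(state["docs"]) or "no context"
    return {**state, "answer": f"Answered '{state['query']}' using: {context}"}

state = {
    "query": "RAG",
    "corpus": ["RAG combines retrieval with generation.", "Agents loop."],
}
state = answer(retrieve(state))  # in LangGraph, these would be two graph nodes
```

In a real system the function bodies would call LangChain components, while LangGraph owns the state object and decides the order (and repetition) of calls.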
The Production Reality
In production, most serious agent systems end up on LangGraph. Here's why: linear pipelines break in the real world. Agents need to retry failed tool calls, handle partial results, and pause for edge cases that weren't anticipated at design time.
LangGraph's explicit state machine model also makes agents dramatically easier to debug. You can inspect the state at any node, add logging, and replay specific runs. With LangChain's implicit chain execution, debugging complex agents becomes a nightmare at scale.
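The checkpoint-and-replay idea can be illustrated in a few lines of plain Python. This is a toy version of the concept only, not LangGraph's checkpointer API, and the two lambda nodes are arbitrary stand-ins:

```python
import copy

# Toy checkpointing: snapshot the state after every node so a run can be
# inspected step by step or replayed from any point.
def run_with_checkpoints(nodes, state):
    checkpoints = []
    for name, fn in nodes:
        state = fn(state)
        checkpoints.append((name, copy.deepcopy(state)))
    return state, checkpoints

nodes = [
    ("reason", lambda s: {**s, "plan": "search"}),
    ("act", lambda s: {**s, "result": "found 3 docs"}),
]
final, checkpoints = run_with_checkpoints(nodes, {"input": "query"})
# checkpoints[0] holds the state right after "reason": exactly what you want
# to inspect when a later node misbehaves.
```

This is the property that makes explicit-state agents debuggable: every intermediate state is a plain value you can log, diff, or resume from.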
The learning path: Start with LangChain to understand the fundamentals — prompts, retrievers, tool use, output parsing. Once you've built a few working pipelines, move to LangGraph for any agent that needs real-world robustness. Both are covered in Phase 5 of MindloomHQ.