The most common mistake new AI engineers make is treating LangChain and LangGraph as competitors. They aren't. They're complements — one handles pipelines, one handles agents — and using the wrong one for a given task produces code that works awkwardly or not at all.
Here's the decision guide: when to use LangChain, when to use LangGraph, when to use both, and code that shows exactly why the distinction matters.
The One-Sentence Version
LangChain is for pipelines where you know the steps in advance.
LangGraph is for agents where the steps are determined at runtime.
If that sentence makes immediate sense, the rest of this is detail. If it doesn't, keep reading.
LangChain: What It's Actually For
LangChain is a library for composing LLM operations into chains. Its core abstraction — LCEL, the LangChain Expression Language — uses a pipe operator to connect components:
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
chain = (
    ChatPromptTemplate.from_template("Summarize: {document}")
    | llm
    | StrOutputParser()
)

result = chain.invoke({"document": "..."})
A prompt template pipes into an LLM which pipes into a parser. That's a chain. The critical property: the path is fixed before execution starts. You define the full pipeline upfront, then run it.
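The pipe operator's behavior can be modeled in a few lines of plain Python. This is a simplified sketch, not LCEL's actual implementation: each `|` builds a longer composition, and nothing executes until `invoke` is called, which is exactly what "the path is fixed before execution" means.

```python
class Step:
    """Minimal stand-in for an LCEL runnable: wraps a function
    and supports `|` composition."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Composing two steps yields a new step; the full pipeline
        # is assembled before anything runs.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


# A toy three-stage "chain": template -> fake LLM -> parser
prompt = Step(lambda d: f"Summarize: {d['document']}")
fake_llm = Step(lambda p: p.upper())
parser = Step(lambda s: s.strip())

chain = prompt | fake_llm | parser
result = chain.invoke({"document": "hello"})
print(result)  # SUMMARIZE: HELLO
```

The point of the sketch: `chain` is a fixed composition the moment the `|` expression finishes evaluating. No runtime decision can change which steps run.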
LangChain's real value isn't the chaining syntax — it's the integrations. Hundreds of connectors for vector stores (Pinecone, pgvector, Chroma, Weaviate), document loaders (PDF, web pages, GitHub, Notion, S3), output parsers, memory implementations, and API wrappers. If your pipeline needs to pull data from somewhere and feed it to an LLM, LangChain probably has the integration already written.
LangChain shines for:
- RAG pipelines — retrieve relevant documents, inject them into context, generate an answer. The path never changes.
- Document processing — load → chunk → embed → store, or load → classify → extract → summarize
- Simple chatbots — conversation history in memory, one LLM call per message
- Structured extraction — read a document, extract specific fields into a Pydantic model
# RAG pipeline — LangChain is correct here
from langchain_community.vectorstores import Chroma
from langchain_core.runnables import RunnablePassthrough
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

def format_docs(docs):
    # Join retrieved Document objects into a single context string
    return "\n\n".join(d.page_content for d in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | ChatPromptTemplate.from_template(
        "Answer based on this context:\n{context}\n\nQuestion: {question}"
    )
    | llm
    | StrOutputParser()
)
answer = rag_chain.invoke("What is our cancellation policy?")
This is clean, readable, and exactly right for the use case. The workflow is deterministic: receive question → retrieve chunks → generate answer. There's no branching, no looping, no decision-making about what to do next.
LangGraph: What It's Actually For
LangGraph models your workflow as a directed graph. Nodes are operations (LLM calls, tool executions, data transformations). Edges define what happens next — either unconditionally or conditionally based on the current state.
The defining feature: LangGraph supports cycles. Your workflow can loop. An agent can call a tool, observe the result, decide it needs more information, call a different tool, and loop again — as many times as necessary — until it determines the task is complete.
from langgraph.graph import StateGraph, END
from typing import TypedDict, List, Annotated
import operator
class AgentState(TypedDict):
    messages: Annotated[List[dict], operator.add]
    iterations: int

def agent_node(state: AgentState) -> dict:
    # llm_with_tools is assumed to be llm.bind_tools([...])
    response = llm_with_tools.invoke(state["messages"])
    return {
        "messages": [response],
        "iterations": state["iterations"] + 1,
    }
def should_continue(state: AgentState) -> str:
    last_msg = state["messages"][-1]
    # If we've looped too many times, force stop even if the LLM
    # requested more tool calls (this check must come first,
    # or the cap never triggers)
    if state["iterations"] >= 10:
        return END
    # If the LLM made tool calls, execute them
    if hasattr(last_msg, "tool_calls") and last_msg.tool_calls:
        return "tools"
    # Otherwise, done
    return END
workflow = StateGraph(AgentState)
workflow.add_node("agent", agent_node)
workflow.add_node("tools", tool_node)  # e.g. ToolNode(tools) from langgraph.prebuilt
workflow.set_entry_point("agent")
workflow.add_conditional_edges("agent", should_continue)
workflow.add_edge("tools", "agent") # Loop back after tool execution
agent = workflow.compile()
The tools → agent edge creates the loop. After tools execute, the result goes back to the agent node, which decides what to do next. An LCEL chain can't express this cleanly: chains are acyclic by construction, with no way to route back to an earlier step.
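Conceptually, the compiled graph runs a simple dispatch loop: execute the current node, merge its output into state, then follow an edge (fixed or conditional) until END. Here's a pure-Python model of that loop, with made-up toy nodes. It's illustrative only, not LangGraph's implementation (for one thing, it overwrites state keys instead of applying reducers).

```python
END = "__end__"

def run_graph(nodes, edges, state, entry):
    """nodes: name -> fn(state) returning a partial update.
    edges: name -> next node name, or fn(state) -> name."""
    current = entry
    while current != END:
        # Simplified merge: later values overwrite earlier ones
        state = {**state, **nodes[current](state)}
        edge = edges[current]
        current = edge(state) if callable(edge) else edge
    return state

# Toy agent that "needs a tool" twice before finishing
def agent(state):
    return {"iterations": state["iterations"] + 1}

def tools(state):
    return {"tool_results": state["tool_results"] + 1}

def should_continue(state):
    return "tools" if state["iterations"] < 3 else END

final = run_graph(
    nodes={"agent": agent, "tools": tools},
    edges={"agent": should_continue, "tools": "agent"},  # the loop
    state={"iterations": 0, "tool_results": 0},
    entry="agent",
)
print(final)  # {'iterations': 3, 'tool_results': 2}
```

Notice that the number of iterations isn't knowable from the graph definition alone; it emerges at runtime from the state. That's the property chains don't have.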
LangGraph shines for:
- Tool-using agents — the agent decides which tools to call based on the task
- Multi-step reasoning — tasks that require thinking across multiple iterations
- Stateful workflows — state that accumulates and evolves across steps
- Conditional branching — different paths depending on intermediate results
- Human-in-the-loop — pausing for human approval before continuing
- Multi-agent systems — supervisor routing work to specialized sub-agents
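The stateful-workflow point deserves one concrete detail. The `Annotated[List[dict], operator.add]` annotation in the state example above declares a reducer: when a node returns a partial update, annotated keys are merged with their reducer instead of overwritten. A simplified model of that merge (not LangGraph's actual code):

```python
import operator
from typing import Annotated, List, TypedDict, get_type_hints

class State(TypedDict):
    messages: Annotated[List[str], operator.add]  # reducer: concatenate
    iterations: int                               # no reducer: overwrite

def merge(state: dict, update: dict) -> dict:
    """Apply a node's partial update, honoring annotated reducers."""
    hints = get_type_hints(State, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        metadata = getattr(hints[key], "__metadata__", ())
        if metadata:  # a reducer function was annotated
            merged[key] = metadata[0](state[key], value)
        else:
            merged[key] = value
    return merged

state = {"messages": ["hi"], "iterations": 1}
state = merge(state, {"messages": ["tool result"], "iterations": 2})
print(state)  # {'messages': ['hi', 'tool result'], 'iterations': 2}
```

This is why `agent_node` returns `{"messages": [response]}` rather than the full history: the reducer appends it.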
The Decision in Practice
The practical test: does the workflow have cycles?
Draw your workflow on a whiteboard. If it's a straight line from input to output — possibly with branches, but no loops — that's a pipeline. LangChain.
If it has loops — the agent does something, evaluates, and might do it again — that's an agent. LangGraph.
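The whiteboard test can even be mechanized: model the workflow as an adjacency list and check for a cycle with a standard depth-first search. The node names below are made up for illustration.

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph via DFS with coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for neighbor in graph.get(node, []):
            if color.get(neighbor) == GRAY:  # back edge: cycle found
                return True
            if color.get(neighbor, BLACK) == WHITE and visit(neighbor):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

rag = {"retrieve": ["generate"], "generate": []}
agent = {"agent": ["tools", "end"], "tools": ["agent"], "end": []}
print(has_cycle(rag))    # False -> pipeline: LangChain
print(has_cycle(agent))  # True  -> agent: LangGraph
```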
A few concrete scenarios:
"I want to answer questions about my documentation." Pipeline. User asks → retrieve relevant docs → answer. LangChain.
"I want an agent that can search the web, read pages, and write a research report." Agent. It searches, decides what to read next based on what it found, reads pages, decides it needs more searches, loops until it has enough information. LangGraph.
"I want to classify customer support tickets and route them to the right team."
Pipeline. Load ticket → classify → route → maybe summarize. The path is determined by the classification, not by the system deciding to loop. LangChain (with conditional routing using RunnableBranch).
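The ticket case is worth dwelling on, because the branch looks like decision-making. It isn't: the routing is a fixed mapping from label to handler, known before any ticket arrives. That's the shape RunnableBranch expresses in LCEL; in plain Python it's a dict dispatch. The labels and handlers below are hypothetical, and `classify` stands in for an LLM classification call.

```python
def classify(ticket: str) -> str:
    # Stand-in for an LLM classification call
    if "refund" in ticket.lower():
        return "billing"
    if "crash" in ticket.lower():
        return "engineering"
    return "general"

# Fixed route table, defined before execution starts
handlers = {
    "billing": lambda t: f"billing queue: {t}",
    "engineering": lambda t: f"engineering queue: {t}",
    "general": lambda t: f"general queue: {t}",
}

def route(ticket: str) -> str:
    return handlers[classify(ticket)](ticket)

print(route("App crash on login"))  # engineering queue: App crash on login
```

Branching chooses among predefined paths; looping creates paths at runtime. Only the second needs LangGraph.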
"I want an agent that autonomously fixes bugs in a codebase." Agent. It reads the bug report, examines relevant files, forms a hypothesis, makes a change, runs tests, observes output, potentially makes additional changes based on test results. Multiple loops, conditional behavior. LangGraph.
"I want to process 10,000 PDFs and extract structured data from each one." Pipeline. For each PDF: load → extract → validate → store. Same fixed path for every document. LangChain.
When You Use Both
In production, the most common pattern is LangGraph for agent orchestration with LangChain integrations inside the nodes.
from langchain_community.vectorstores import Chroma
from langchain_core.prompts import ChatPromptTemplate
from langgraph.graph import StateGraph, END
# LangChain retriever — used inside a LangGraph node
retriever = Chroma(...).as_retriever()
def research_node(state: AgentState) -> dict:
    # Use LangChain retriever inside a LangGraph node
    query = state["current_query"]
    docs = retriever.invoke(query)
    context = "\n".join([d.page_content for d in docs])
    response = llm.invoke(
        f"Based on this context:\n{context}\n\nAnswer: {query}"
    )
    return {"messages": state["messages"] + [response], "context": context}
def web_search_node(state: AgentState) -> dict:
    # LangChain web search tool
    results = search_tool.invoke(state["current_query"])
    return {"search_results": results}
# LangGraph orchestrates which node runs and when
workflow = StateGraph(AgentState)
workflow.add_node("research", research_node)
workflow.add_node("web_search", web_search_node)
workflow.add_node("synthesize", synthesis_node)
# ... conditional routing between nodes
The agent (LangGraph) decides the flow. Each node may use LangChain components for retrieval, parsing, or integrations. This layering is the production norm.
What About Other Frameworks?
CrewAI, AutoGen, Microsoft's Semantic Kernel, Amazon Bedrock Agents — they exist, they're popular in certain contexts, and they're all higher-level abstractions over the same concepts.
The honest recommendation: learn LangGraph before any of them. LangGraph forces you to understand the primitives — state, graphs, nodes, edges, conditional routing. Once you understand those, every other framework is just a different API over the same ideas. If you start with CrewAI, you understand CrewAI. If you start with LangGraph, you understand agents.
The exception: if you're working at an organization that's already standardized on one of these, learn that one. Framework choice at scale is often a "what's already here" decision.
The Learning Order That Makes Sense
1. LangChain first. Build a RAG pipeline. Learn LCEL, retrievers, output parsers, and at least two vector store integrations. This gives you the vocabulary and the building blocks LangGraph builds on.
2. LangGraph second. Build a tool-using agent. Understand state, nodes, edges, and conditional routing. Make it fail — hit the iteration limit, observe a bad tool call — so you understand the failure modes.
3. Both together. Build something that uses LangGraph for orchestration and LangChain retrievers inside the nodes. This is the production pattern.
The entire trajectory from "I know Python" to "I can build a production agent" is roughly 8–12 weeks of focused work. Phase 5 of MindloomHQ's Agentic AI course covers exactly this — LangGraph agents with real tool use, state management, and multi-agent coordination. Explore the curriculum →