Spend enough time in AI engineering Slack channels and you will encounter two camps: people who insist on LangChain for everything, and people who insist that raw API calls are the only sane choice. Both camps are wrong about the other.
Here is the honest breakdown: what each option actually does, when frameworks earn their abstraction, and when they get in your way.
What LangChain Actually Is
LangChain is a framework for composing LLM-based pipelines. Its core value: it gives you building blocks — prompt templates, output parsers, retrieval abstractions, chain composition — so you do not have to rebuild common patterns from scratch.
The primary abstraction is the chain: a sequence of operations. You compose retrievers, LLM calls, output parsers, and tool invocations into a pipeline. With LCEL (LangChain Expression Language), this looks like Unix pipes: retriever | prompt | llm | parser.
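To make the pipe idea concrete, here is a plain-Python sketch of what LCEL-style `|` composition is doing under the hood. This is not LangChain's actual implementation; the `Runnable` class and the stub stages are illustrative, offline stand-ins for a real retriever, prompt template, model, and parser.

```python
# A minimal plain-Python sketch of LCEL-style `|` composition.
# NOT LangChain code; all names here are illustrative.

class Runnable:
    """Wraps a function so instances compose with the | operator."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # left | right builds a new Runnable that runs left, then right
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stub stages standing in for: retriever | prompt | llm | parser
retriever = Runnable(lambda q: {"question": q, "docs": ["LCEL composes runnables."]})
prompt    = Runnable(lambda d: f"Context: {d['docs'][0]}\nQuestion: {d['question']}")
llm       = Runnable(lambda p: f"ANSWER based on -> {p!r}")
parser    = Runnable(lambda text: text.removeprefix("ANSWER based on -> "))

chain = retriever | prompt | llm | parser
print(chain.invoke("What does LCEL do?"))
```

The point of the sketch: each stage is just a function from one value to the next, and `|` is function composition with a nicer spelling. Real LangChain runnables add batching, streaming, and async on top of this core idea.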
LangChain is genuinely useful for:
- RAG pipelines — retrieval + prompt construction + generation is a pattern you will implement many times. LangChain's document loaders, text splitters, vector store integrations, and retriever abstractions save meaningful setup time.
- Multi-model workflows — routing between models, chaining multiple LLM calls, standardized interfaces across providers.
- Rapid prototyping — when you want to go from idea to working prototype in hours, not days.
LangChain's weakness: when your pipeline is straightforward, the abstraction layers add mental overhead without adding value. Debugging becomes harder because failures happen inside framework internals. For a simple chat app or a single LLM call, raw API code is more readable and easier to maintain.
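For contrast, here is roughly what the raw alternative looks like for a single chat call: a hand-built payload sent to the OpenAI chat completions HTTP endpoint with only the standard library. The model name and system prompt are placeholder choices; the point is that every byte of the request is visible, with no hidden template text.

```python
import json
import os
import urllib.request

# A minimal sketch of the "raw API" path for one chat call against the
# OpenAI chat completions endpoint. No framework layers: the payload
# you read here is exactly what goes over the wire.

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(user_message, model="gpt-4o-mini"):
    # Everything the request contains is constructed explicitly.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": user_message},
        ],
    }

def chat(user_message):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(user_message)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]
```

For a simple chat app, this is the whole integration. There is nothing to step through but your own code when it breaks.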
What LangGraph Actually Is
LangGraph is a separate library from LangChain (same team, different tool) built specifically for stateful, multi-step agents. Where LangChain models computation as a linear chain, LangGraph models it as a directed graph with cycles.
The key primitive is the graph: nodes are processing steps (LLM calls, tool executions, condition checks), edges define flow, and state is a typed object that gets passed between nodes and can be read and written at each step.
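The nodes-edges-state idea can be sketched in plain Python. This is a conceptual sketch, not LangGraph's actual API: a typed state flows through named node functions, and a routing function plays the role of a conditional edge, including a cycle back to the same node.

```python
from typing import Callable, TypedDict

# Conceptual sketch of the graph primitive: typed state, node functions,
# and a conditional edge that can loop. NOT LangGraph's real API.

class State(TypedDict):
    query: str
    attempts: int
    answer: str

def search_node(state: State) -> State:
    # Each node reads the state and writes an updated version.
    state["attempts"] += 1
    state["answer"] = f"result for {state['query']} (try {state['attempts']})"
    return state

def route(state: State) -> str:
    # Conditional edge: loop back to the search node until we have tried twice.
    return "search" if state["attempts"] < 2 else "END"

nodes: dict[str, Callable[[State], State]] = {"search": search_node}

state: State = {"query": "agent loops", "attempts": 0, "answer": ""}
current = "search"
while current != "END":
    state = nodes[current](state)
    current = route(state)
```

Note that the cycle (search, judge, search again) is ordinary control flow here, which is exactly what a linear chain cannot express.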
This matters for agents because real agent workflows are not linear. An agent might:
- Research a topic using web search
- Decide the first results were insufficient and search again
- Call a different tool based on what it found
- Loop back if the answer is incomplete
- Pass control to a different sub-agent for a specialized task
LangGraph handles this naturally. LangChain chains cannot: they do not support conditional loops, they do not persist state across steps cleanly, and they break down when you need branching logic.
LangGraph is genuinely useful for:
- Stateful agents that need to remember context across multiple tool calls
- Multi-agent systems where a supervisor routes work to specialist agents
- Human-in-the-loop workflows where a human needs to approve or redirect at certain steps
- Long-running agent tasks where you need to checkpoint, resume, or recover from failures
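The checkpoint-and-resume idea in the last bullet reduces to something simple: serialize the state object at safe points so a new process can pick up where the old one stopped. LangGraph ships its own checkpointer implementations; the sketch below only shows the underlying concept with a plain dict and a JSON file.

```python
import json
import os
import tempfile

# Minimal sketch of checkpoint/resume for a long-running agent task:
# persist the state dict at a safe point, reload it later. Real LangGraph
# checkpointers do this per graph step; this shows only the core idea.

def save_checkpoint(state: dict, path: str) -> None:
    with open(path, "w") as f:
        json.dump(state, f)

def load_checkpoint(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "agent_checkpoint.json")
state = {"step": 3, "notes": ["searched twice"], "done": False}
save_checkpoint(state, path)      # a crash after this point loses nothing
resumed = load_checkpoint(path)   # a fresh process resumes from here
assert resumed == state
```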
When to Build from Scratch
Building from scratch means: calling LLM APIs directly with requests or the official SDK, implementing the agent loop yourself, managing state with a plain dict or dataclass.
Build from scratch when:
You are learning. This is the most important case. If you build an agent with LangGraph before understanding the observe-think-act loop at the code level, you will struggle to debug it, extend it, or understand why it fails. Raw Python first, frameworks second. Always.
Your use case is simple. A single LLM call, a basic RAG pipeline with one retrieval step, a classification endpoint. Adding a framework to these is over-engineering. The abstractions cost you readability and add a dependency for no benefit.
You need full control over performance. Frameworks add latency and token overhead (prompt templates often include boilerplate you do not need). In high-throughput production systems, raw API calls with manual prompt construction are often meaningfully faster.
You are debugging a production issue. When something breaks in production, raw API calls are easier to inspect and reproduce. Strip out the framework for debugging, fix the issue at the raw API level, then re-introduce the abstraction if still needed.
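The observe-think-act loop mentioned above fits in a few dozen lines of raw Python. In this sketch the "model" is a hard-coded stub so the example runs offline; in practice that function would be an LLM API call deciding which tool to invoke next. The calculator tool and the decision rule are illustrative.

```python
# A bare-bones observe-think-act loop in raw Python, the kind the text
# recommends building before touching frameworks. The model is a stub
# standing in for an LLM that picks the next action.

def fake_model(observation: str) -> dict:
    # Stub decision-maker: finish once the answer appears, else compute it.
    if "42" in observation:
        return {"action": "finish", "input": observation}
    return {"action": "calculator", "input": "6 * 7"}

TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool; never eval untrusted input
}

def run_agent(task: str, max_steps: int = 5) -> str:
    observation = task
    for _ in range(max_steps):
        decision = fake_model(observation)      # think: choose an action
        if decision["action"] == "finish":
            return observation
        tool = TOOLS[decision["action"]]        # act: dispatch to a tool
        observation = tool(decision["input"])   # observe: capture the result
    return observation

print(run_agent("What is 6 * 7?"))
```

Once you have written this loop yourself, framework concepts like tool nodes and routing functions stop being magic: they are this loop with better ergonomics.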
The Practical Decision Tree
Starting a new project?
├── Learning/first agent → build raw first
├── Simple chain (retrieval + generation) → LangChain
├── Stateful agent with loops/branches → LangGraph
└── High-throughput production endpoint → raw API
Have an existing LangChain project?
├── Adding cycles or conditional logic → migrate to LangGraph
├── Debugging a complex failure → temporarily strip to raw calls
└── Performance bottleneck → profile, consider dropping abstractions
The Migration Path: LangChain → LangGraph
If you have existing LangChain agents and your use cases are growing more complex, migrating to LangGraph is the natural path. The two libraries are designed to interoperate.
The key mental shift: instead of thinking "what does step 3 of my chain do," you think "what are the nodes in my graph, what state do they read and write, and what conditions determine which edge to follow."
In practice: take your existing chain, identify the decision points (places where behavior should differ based on the previous result), and model those as conditional edges. Your LLM calls become nodes. Tool executions become nodes. State becomes explicit.
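The migration step described above can be sketched in plain Python rather than real LangGraph code. Here a linear retrieve-then-generate chain has one hidden decision point, whether retrieval found enough context, and the sketch surfaces it as an explicit routing function. The `search` and `llm` stubs are offline stand-ins for real retrieval and a real model call.

```python
# Stub dependencies so the sketch runs offline. The first search returns
# too little context; the second returns enough.
def search(query):
    search.calls += 1
    return ["doc"] * search.calls
search.calls = 0

def llm(query, docs):
    return f"answer to {query!r} from {len(docs)} docs"

# Each step of the old chain becomes a node that reads and writes state.
def retrieve(state):
    state["docs"] = search(state["query"])
    return state

def generate(state):
    state["answer"] = llm(state["query"], state["docs"])
    return state

def route_after_retrieve(state):
    # The decision point, now an explicit conditional edge:
    # retry retrieval until there is enough context.
    return "generate" if len(state["docs"]) >= 2 else "retrieve"

nodes = {"retrieve": retrieve, "generate": generate}
state, current = {"query": "LCEL vs graphs"}, "retrieve"
while current != "END":
    state = nodes[current](state)
    current = route_after_retrieve(state) if current == "retrieve" else "END"
print(state["answer"])
```

The chain version could not express the retry: it would run retrieval once and move on regardless of what came back.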
How MindloomHQ Teaches This
At MindloomHQ, we deliberately sequence the curriculum this way:
- Phase 3 (Agents) — build agents in raw Python. No frameworks. Implement the ReAct loop yourself. Write your own tool dispatch. This is not hazing — it is how you build the mental model you will need to use frameworks effectively.
- Phase 5 (Frameworks) — introduce LangChain and LangGraph with full context. By this point, the abstractions map onto things you understand. You can read framework source code, debug inside it, and know when to reach for it vs. when raw code is better.
This ordering matters. Engineers who start with LangChain before understanding raw agents consistently hit walls they cannot explain. Engineers who learn raw first adopt frameworks faster and use them more effectively.
The Real Answer
Neither camp is right. The engineers who insist "only raw API calls" are excluding tools that genuinely save time on complex stateful workflows. The engineers who reach for LangChain for every use case are adding complexity where none is warranted.
Use the right tool for the problem. Raw API calls for simple pipelines and learning. LangChain when composition and retrieval abstractions save real time. LangGraph when your agent has loops, branches, persistent state, or multiple coordinating agents.
Phase 5 of the Agentic AI course at MindloomHQ covers LangChain, LangGraph, and the OpenAI Agents SDK in depth — after you have already built agents from scratch in Phase 3. By the time you hit Phase 5, the frameworks click immediately because you already know what they are abstracting.
Phase 0 and Phase 1 are free to start. No credit card required.