LangGraph Integration for Graph-Based AI Agents

We design and deploy artificial intelligence systems, from prototypes to production-ready solutions. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work not just in the lab, but in real business.
LangGraph Integration for Graph-Based AI Agents

LangGraph is a library built on top of LangChain for building agents and multi-agent systems as stateful directed graphs. Unlike linear LCEL chains, a graph can express cycles, conditional transitions, parallel execution, and human-in-the-loop pauses. This makes LangGraph the LangChain ecosystem's primary tool for production-grade agent systems.

Basic Graph Structure

from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage
from langchain_core.tools import tool
from typing import TypedDict, Annotated
import operator

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]  # node outputs are appended, not overwritten
    user_id: str
    iteration_count: int

@tool
def search(query: str) -> str:
    """Placeholder tool — swap in your real implementation."""
    return f"Results for: {query}"

tools = [search]
llm = ChatOpenAI(model="gpt-4o")

def agent_node(state: AgentState) -> AgentState:
    response = llm.bind_tools(tools).invoke(state["messages"])
    return {"messages": [response], "iteration_count": state["iteration_count"] + 1}

def should_continue(state: AgentState) -> str:
    last_msg = state["messages"][-1]
    if last_msg.tool_calls:
        return "tools"
    return END

# Build the graph
graph = StateGraph(AgentState)
graph.add_node("agent", agent_node)
graph.add_node("tools", ToolNode(tools))

graph.set_entry_point("agent")
graph.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
graph.add_edge("tools", "agent")  # Loop: after tools — back to agent

app = graph.compile(checkpointer=MemorySaver())
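The `Annotated[list, operator.add]` reducer is what lets each node return only its delta: LangGraph folds the update into existing state instead of overwriting it. A minimal stand-alone model of that merge (pure Python, no LangGraph required; `apply_update` is our illustrative name, not a library API):

```python
import operator
from typing import Annotated, TypedDict, get_type_hints

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]  # reducer: accumulate
    iteration_count: int                     # no reducer: last write wins

def apply_update(state: dict, update: dict) -> dict:
    """Simplified model of how LangGraph merges a node's return value."""
    hints = get_type_hints(AgentState, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        reducers = getattr(hints.get(key), "__metadata__", ())
        if reducers:
            merged[key] = reducers[0](state[key], value)  # e.g. operator.add
        else:
            merged[key] = value  # plain field: overwrite
    return merged

state = {"messages": ["user: hi"], "iteration_count": 0}
state = apply_update(state, {"messages": ["ai: hello"], "iteration_count": 1})
print(state)  # {'messages': ['user: hi', 'ai: hello'], 'iteration_count': 1}
```

This is why `agent_node` above can return `{"messages": [response], ...}` with a single-element list: the reducer concatenates it onto the history.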

Persistent State and Interrupts

LangGraph supports checkpoint-based state persistence between runs and pauses for human approval:

from langgraph.checkpoint.postgres import PostgresSaver
from psycopg import Connection
from psycopg.rows import dict_row

# Persistence in PostgreSQL (PostgresSaver expects autocommit and dict rows)
conn = Connection.connect(
    "postgresql://user:pass@localhost/langgraph_db",
    autocommit=True,
    row_factory=dict_row,
)
checkpointer = PostgresSaver(conn)
checkpointer.setup()  # create checkpoint tables on first run

# Interrupt: graph pauses before specified node
app = graph.compile(
    checkpointer=checkpointer,
    interrupt_before=["execute_payment"],  # Requires human confirmation
)

config = {"configurable": {"thread_id": "order_12345"}}

# Run until interrupt point
result = app.invoke({"messages": [HumanMessage("Pay invoice for 50000 rubles")]}, config)
# Graph stopped before execute_payment

# After human review — continue
app.invoke(None, config)  # None = continue from current state

Multi-agent: Supervisor Pattern

from langgraph.graph import StateGraph, END
from typing import Literal

class SupervisorState(TypedDict):
    messages: Annotated[list, operator.add]
    next_agent: str

AGENTS = ["researcher", "analyst", "writer"]

supervisor_prompt = f"""You are a supervisor of a multi-agent system.
Based on the request and current progress, select the next agent: {AGENTS}
Or return FINISH if the task is complete.
"""

def supervisor_node(state: SupervisorState):
    route_schema = {
        "title": "route",
        "type": "object",
        "properties": {"next": {"type": "string", "enum": AGENTS + ["FINISH"]}},
        "required": ["next"],
    }
    response = llm.with_structured_output(route_schema).invoke(
        [{"role": "system", "content": supervisor_prompt}] + state["messages"]
    )
    return {"next_agent": response["next"]}

def route_to_agent(state: SupervisorState) -> str:
    if state["next_agent"] == "FINISH":
        return END
    return state["next_agent"]

# Create agents
def make_agent_node(name: str, system_prompt: str):
    # get_tools_for: your own per-agent tool registry (not shown)
    agent_llm = ChatOpenAI(model="gpt-4o").bind_tools(get_tools_for(name))
    def node(state):
        result = agent_llm.invoke(
            [{"role": "system", "content": system_prompt}] + state["messages"]
        )
        return {"messages": [result]}
    return node

graph = StateGraph(SupervisorState)
graph.add_node("supervisor", supervisor_node)
graph.add_node("researcher", make_agent_node("researcher", "Research the topic and find facts"))
graph.add_node("analyst", make_agent_node("analyst", "Analyze data and draw conclusions"))
graph.add_node("writer", make_agent_node("writer", "Formulate the final answer"))

graph.set_entry_point("supervisor")
graph.add_conditional_edges("supervisor", route_to_agent)
for agent in AGENTS:
    graph.add_edge(agent, "supervisor")

multi_agent = graph.compile()
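Stripped of the LLM, the supervisor pattern is just a loop: the supervisor picks an agent, the agent runs, and control returns to the supervisor until it says FINISH. A scripted stand-in (a decision list instead of a model — purely illustrative) makes the routing logic testable:

```python
AGENTS = ["researcher", "analyst", "writer"]

def simulate_supervisor(decisions):
    """Replay a scripted decision list in place of the LLM supervisor."""
    visited = []
    for nxt in decisions:          # what supervisor_node would return
        if nxt == "FINISH":        # route_to_agent maps FINISH to END
            return visited
        if nxt not in AGENTS:
            raise ValueError(f"unknown agent: {nxt}")
        visited.append(nxt)        # agent node runs, edge loops back to supervisor
    return visited

print(simulate_supervisor(["researcher", "analyst", "writer", "FINISH"]))
# ['researcher', 'analyst', 'writer']
```

In production the decision list is produced one step at a time by `with_structured_output`, but the control flow is identical.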

Event Streaming

# Stream events from graph
async for event in app.astream_events(
    {"messages": [HumanMessage("Analyze Q1 sales")]},
    config={"configurable": {"thread_id": "analysis_001"}},
    version="v2",
):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="", flush=True)
    elif kind == "on_tool_start":
        print(f"\n[Tool invocation: {event['name']}]")
    elif kind == "on_tool_end":
        print("[Tool result received]")

Subgraphs: Nested Graphs

# Subgraph for document processing
doc_graph = StateGraph(DocumentState)
doc_graph.add_node("extract", extract_text)
doc_graph.add_node("classify", classify_document)
doc_graph.add_node("validate", validate_structure)
doc_graph.set_entry_point("extract")
doc_graph.add_edge("extract", "classify")
doc_graph.add_edge("classify", "validate")

doc_subgraph = doc_graph.compile()

# Include subgraph in parent
main_graph = StateGraph(MainState)
main_graph.add_node("process_document", doc_subgraph)  # Subgraph as node
main_graph.add_node("send_result", send_to_crm)
main_graph.set_entry_point("process_document")
main_graph.add_edge("process_document", "send_result")
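A compiled subgraph behaves like any other node: a callable from state to state. A toy "compile" (hypothetical, linear edges only, not the LangGraph implementation) shows why nesting composes so cleanly:

```python
def compile_graph(nodes, edges, entry):
    """Toy compiler: follow linear edges from entry until no successor."""
    def run(state: dict) -> dict:
        current = entry
        while current is not None:
            state = nodes[current](state)   # every node is state -> state
            current = edges.get(current)    # None means END
        return state
    return run

# Subgraph: extract -> classify
doc_subgraph = compile_graph(
    {"extract": lambda s: {**s, "text": "parsed"},
     "classify": lambda s: {**s, "kind": "NDA"}},
    {"extract": "classify"},
    "extract",
)

# Parent graph uses the compiled subgraph as an ordinary node
main = compile_graph(
    {"process_document": doc_subgraph,
     "send_result": lambda s: {**s, "sent": True}},
    {"process_document": "send_result"},
    "process_document",
)

print(main({"pdf": "contract.pdf"}))
# {'pdf': 'contract.pdf', 'text': 'parsed', 'kind': 'NDA', 'sent': True}
```

Because "compiled graph" and "node" share the same interface, subgraphs can nest to any depth.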

Practical Case Study: Contract Verification System

Task: automate the legal department's review of incoming contracts. 30–50 contracts arrive daily, each requiring 1–2 hours of a lawyer's time.

Graph:

  1. extract_node — parse PDF, extract structure
  2. classify_node — contract type (supply, services, lease, NDA)
  3. risk_check_node — parallel checks: financial terms, duration, liability
  4. legal_rules_node — verify against corporate forbidden terms list
  5. human_review — interrupt for contracts with risk_score > 7
  6. finalize_node — generate conclusion and recommendations

app = graph.compile(
    checkpointer=PostgresSaver(conn),
    interrupt_before=["human_review"],  # Pause only for risky contracts
)

Routing: low risk → automatic approval; high risk → pause for human review, with the agent's draft conclusion already prepared.

Results:

  • Standard contract review time: 90 min → 8 min
  • Automatic approval without lawyer: 61% of contracts
  • Missed non-standard terms: 0 (vs ~3% with fatigued manual review)
  • Legal department workload: -58%

LangGraph vs LangChain LCEL

Criterion           LCEL                 LangGraph
------------------  -------------------  -----------------------------
Structure           Linear chain         Arbitrary graph
Cycles              No                   Yes
State               Passed through pipe  TypedDict with merge strategy
Checkpointing       No                   PostgreSQL, Redis, SQLite
Human-in-the-loop   No                   interrupt_before/after
Use case            Simple pipelines     Agents, multi-agent systems

Timeline

  • Basic ReAct agent on LangGraph: 3–5 days
  • Multi-agent system with supervisor: 2–3 weeks
  • Human-in-the-loop workflow with persistence: 1–2 weeks
  • Production integration with PostgreSQL checkpoint: +3–5 days