LangGraph


LangGraph is a framework for building stateful, multi-step agent applications as graphs. It gives you fine-grained control over agent behavior (tool routing, state management, cycles, and human-in-the-loop patterns) while staying compatible with the LangChain ecosystem.

The `langchain-youdotcom` package provides `YouSearchTool` and `YouContentsTool`, which plug directly into LangGraph agents to give them real-time web access.


Getting Started

Install the packages

```shell
pip install -U langchain-youdotcom langgraph langchain langchain-openai
```

Set your API keys

```shell
export YDC_API_KEY=your_api_key
export OPENAI_API_KEY=your_openai_key
```

Get your You.com API key at you.com/platform/api-keys.
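Both keys must be visible to your Python process before any example will run. As a minimal stdlib sketch (the `missing_keys` helper below is ours, not part of any package), you can fail fast with a clear message instead of hitting an opaque HTTP 401 mid-run:

```python
import os


def missing_keys(env=None):
    """Return the names of required API keys that are absent or empty."""
    env = os.environ if env is None else env
    return [name for name in ("YDC_API_KEY", "OPENAI_API_KEY") if not env.get(name)]


missing = missing_keys()
if missing:
    print(f"Warning: missing environment variables: {', '.join(missing)}")
```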

The examples below use OpenAI as the LLM provider, but any LangChain-compatible chat model works — Anthropic, Google, Mistral, local models, etc.

ReAct Agent

The fastest way to get started is `create_agent`, a prebuilt LangGraph agent that handles tool calling and message routing automatically.

```python
from langchain_openai import ChatOpenAI
from langchain_youdotcom import YouSearchTool, YouContentsTool
from langchain.agents import create_agent

llm = ChatOpenAI(model="gpt-4o-mini")
tools = [YouSearchTool(), YouContentsTool()]

agent = create_agent(llm, tools)

response = agent.invoke(
    {"messages": [{"role": "user", "content": "What are the top AI stories this week?"}]}
)
print(response["messages"][-1].content)
```

Customizing Search Parameters

Pass an `api_wrapper` to control search behavior:

```python
from langchain_youdotcom import YouSearchAPIWrapper, YouSearchTool

tool = YouSearchTool(
    api_wrapper=YouSearchAPIWrapper(
        count=5,
        livecrawl="web",
        freshness="day",
        safesearch="moderate",
    )
)
```

Streaming

LangGraph supports token-level streaming out of the box. Use `astream_events` to stream agent responses as they're generated:

```python
import asyncio
from langchain_openai import ChatOpenAI
from langchain_youdotcom import YouSearchTool
from langchain.agents import create_agent

llm = ChatOpenAI(model="gpt-4o-mini")
agent = create_agent(llm, [YouSearchTool()])


async def main():
    async for event in agent.astream_events(
        {"messages": [{"role": "user", "content": "What is the current price of Bitcoin?"}]},
        version="v2",
    ):
        if event["event"] == "on_chat_model_stream":
            token = event["data"]["chunk"].content
            if token:
                print(token, end="", flush=True)
    print()


asyncio.run(main())
```

Custom Graph with a Search Node

For more control, build a custom `StateGraph`. This example creates a simple search-then-summarize pipeline where the agent searches the web, then synthesizes an answer from the results:

```python
from langchain_openai import ChatOpenAI
from langchain_youdotcom import YouSearchTool
from langgraph.graph import START, END, StateGraph, MessagesState

llm = ChatOpenAI(model="gpt-4o-mini")
search = YouSearchTool()


def search_web(state: MessagesState):
    """Search the web for the user's query."""
    user_message = state["messages"][-1].content
    results = search.invoke(user_message)
    return {
        "messages": [
            {
                "role": "system",
                "content": f"Search results:\n\n{results}",
            }
        ]
    }


def summarize(state: MessagesState):
    """Summarize the search results into a final answer."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}


graph = StateGraph(MessagesState)
graph.add_node("search", search_web)
graph.add_node("summarize", summarize)

graph.add_edge(START, "search")
graph.add_edge("search", "summarize")
graph.add_edge("summarize", END)

app = graph.compile()

response = app.invoke(
    {"messages": [{"role": "user", "content": "What happened in tech this week?"}]}
)
print(response["messages"][-1].content)
```

Search and Extract Pattern

Combine `YouSearchAPIWrapper` methods in a multi-step graph that searches the web, extracts full page content from the top results, and generates a comprehensive answer. The search step returns snippets and URLs (without livecrawl); the extract step then calls the Contents API to fetch full page content for the top results. This avoids fetching full content for every search result, since only the most relevant URLs get crawled:

```python
from langchain_openai import ChatOpenAI
from langchain_youdotcom import YouSearchAPIWrapper
from langgraph.graph import START, END, StateGraph, MessagesState

llm = ChatOpenAI(model="gpt-4o-mini")
wrapper = YouSearchAPIWrapper(count=5)


class SearchState(MessagesState):
    urls: list[str]


def search_web(state: SearchState):
    """Search the web and extract URLs from the results."""
    query = state["messages"][-1].content
    docs = wrapper.results(query)
    urls = [doc.metadata["url"] for doc in docs if "url" in doc.metadata]
    results_text = "\n\n".join(
        f"{doc.metadata.get('title', 'Untitled')}: {doc.page_content[:200]}"
        for doc in docs
    )
    return {
        "messages": [
            {"role": "system", "content": f"Search results:\n\n{results_text}"}
        ],
        "urls": urls,
    }


def extract_content(state: SearchState):
    """Extract full content from the URLs found in search."""
    urls = state.get("urls", [])[:3]
    if not urls:
        return {"messages": []}
    docs = wrapper.contents(urls)
    content = "\n\n".join(doc.page_content for doc in docs)
    return {
        "messages": [
            {"role": "system", "content": f"Extracted content:\n\n{content}"}
        ]
    }


def synthesize(state: SearchState):
    """Generate a final answer from all gathered context."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}


graph = StateGraph(SearchState)
graph.add_node("search", search_web)
graph.add_node("extract", extract_content)
graph.add_node("synthesize", synthesize)

graph.add_edge(START, "search")
graph.add_edge("search", "extract")
graph.add_edge("extract", "synthesize")
graph.add_edge("synthesize", END)

app = graph.compile()

response = app.invoke(
    {"messages": [{"role": "user", "content": "Explain the latest advances in quantum computing"}]}
)
print(response["messages"][-1].content)
```
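The extract step above slices the first three URLs as-is. If the Search API returns the same URL in more than one result, you may want to deduplicate before slicing so the Contents call doesn't waste a slot. A small stdlib sketch (the `top_urls` helper is hypothetical, not part of the package):

```python
def top_urls(urls: list[str], k: int = 3) -> list[str]:
    """Deduplicate URLs while preserving order, then keep the first k."""
    seen: set[str] = set()
    unique = []
    for url in urls:
        if url not in seen:
            seen.add(url)
            unique.append(url)
    return unique[:k]


print(top_urls(["https://a.com", "https://b.com", "https://a.com", "https://c.com"]))
# ['https://a.com', 'https://b.com', 'https://c.com']
```

Swapping `state.get("urls", [])[:3]` for `top_urls(state.get("urls", []))` in `extract_content` keeps the rest of the graph unchanged.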

Configuration Reference

Both tools accept a `YouSearchAPIWrapper` via the `api_wrapper` parameter. Here are the key options:

Search Options

| Parameter | Type | Description |
| --- | --- | --- |
| `count` | `int` | Max results per section (default: 10) |
| `freshness` | `str` | Filter by age: `"day"`, `"week"`, `"month"`, `"year"`, or a custom range `"YYYY-MM-DDtoYYYY-MM-DD"` |
| `country` | `str` | Two-letter country code (e.g., `"US"`, `"GB"`) |
| `safesearch` | `str` | `"off"`, `"moderate"`, or `"strict"` |
| `livecrawl` | `str` | Fetch live page content: `"web"`, `"news"`, or `"all"` |
| `livecrawl_formats` | `str` | Format for live content: `"html"` or `"markdown"` |
| `language` | `str` | BCP-47 language code (default: `"EN"`) |
| `offset` | `int` | Pagination offset in multiples of `count` (0–9) |
| `k` | `int` | Max documents to return (wrapper-level, applied after the API response) |
| `n_snippets_per_hit` | `int` | Snippets per search result (wrapper-level, applied after the API response) |

Contents Options

`YouContentsTool` accepts `urls: list[str]` at invocation. To control the output format or crawl timeout, call `api_wrapper.contents()` directly:

```python
from langchain_youdotcom import YouSearchAPIWrapper

wrapper = YouSearchAPIWrapper()
docs = wrapper.contents(
    ["https://example.com"],
    formats=["markdown", "metadata"],  # default
    crawl_timeout=30,  # seconds (1–60)
)
```

For full parameter details, see the Search API reference and Contents API reference.


Resources