Research API Overview


What is the Research API?

The Research API returns grounded, natural language answers to questions of varying complexity. It runs multiple searches, processes the results, cross-references sources, and synthesizes everything into a thorough, Markdown-formatted answer with inline citations.

Ask a hard question, get a researched answer with sources.


The Search API and the Research API serve different purposes and deliver different outputs:

| | Search API | Research API |
|---|---|---|
| Input | Query, several search parameters (count, language, livecrawl, etc.) | Query, research effort |
| You get | Raw search results (URLs, snippets, metadata) | A natural language answer with inline citations, plus search results |
| Processing | Returns results as-is for you to process | Reads, reasons over, and synthesizes results for you |
| Speed | Fast — single search round trip | Varies — multiple searches and reasoning steps |
| Control | Full control over how results are used | Control depth via research_effort |
| Best for | RAG pipelines, building your own search UI, data gathering | Answering questions of varying complexity using multiple sources |

Use the Search API when you want raw results to feed into your own pipeline. Use the Research API when you want a ready-to-use answer backed by sources.


How it works

Research operates as an agentic system that autonomously plans and executes a multi-step research strategy for your question.

Search, Contents, and Live News as retrieval primitives

Research uses You.com’s Search, Contents, and Live News APIs as its core tools. Rather than firing generic web queries, the system selects the right tool for each sub-question — search for discovery, contents for deep page reads, live news for time-sensitive information, and several other internal tools to aid in generating the best possible answer. This targeted tool selection reduces wasted calls and gives the reasoning model cleaner inputs at each step.

The system also evaluates retrieved sources for freshness, diversity, and relevance before incorporating them into the answer.

Context management at scale

Deep research generates far more information than any single LLM context window can hold. Research uses context-masking and compaction strategies that let it operate well beyond those limits — maintaining coherent reasoning across hundreds or thousands of turns without losing track of what it found, what it verified, and what remains unresolved.

At higher effort levels, a single query can run more than 1,000 reasoning turns and process up to 10 million tokens.

Budget-based planning

The system receives a compute budget determined by the research_effort tier you choose. It plans its approach around that budget, allocating more effort to verifying ambiguous or high-stakes claims and moving quickly through well-sourced facts. This is the mechanism that enables the range of latency, accuracy, and cost tradeoffs across tiers.


What you get

Every Research API response includes:

  • content: A Markdown-formatted answer with numbered inline citations (e.g., [[1, 2]]) that reference items in the sources array.
  • content_type: The format of the content field (currently text).
  • sources: The web pages the API read and cited in the answer — each with a URL, title, and relevant snippets.
```json
{
  "output": {
    "content": "## RISC-V vs ARM: Key Architectural Differences\n\nRISC-V and ARM are both reduced instruction set architectures, but they differ in licensing, extensibility, and ecosystem maturity [[1, 2]].\n\n### Licensing\nARM requires per-chip licensing fees, while RISC-V is open-source and royalty-free [[1, 3]]...",
    "content_type": "text",
    "sources": [
      {
        "url": "https://example.com/risc-v-vs-arm",
        "title": "RISC-V vs ARM: A Technical Comparison",
        "snippets": [
          "RISC-V's open ISA allows custom extensions without licensing negotiations, making it attractive for specialized hardware."
        ]
      },
      {
        "url": "https://example.com/processor-architectures",
        "title": "Modern Processor Architectures Explained",
        "snippets": [
          "ARM's mature ecosystem includes extensive tooling and vendor support built over three decades."
        ]
      }
    ]
  }
}
```

Key features

Research effort levels

The research_effort parameter controls how much compute the API allocates to your question. Higher effort means more searches, deeper source reading, and more cross-referencing — at the cost of longer response times.

| Level | Behavior | Typical latency | Price per 1,000 requests |
|---|---|---|---|
| lite | Quick answer with minimal searching | < 2s | $10 |
| standard | Balanced speed and depth (default) | ~10–30s | $50 |
| deep | More searches, deeper cross-referencing | < 120s | $100 |
| exhaustive | Maximum thoroughness | < 300s | $300 |

For the same query, the difference between tiers is substantial. Here’s an abridged comparison for the question “Which global cities improved air quality the most over the past 10 years, and what measurable actions contributed?”:

```json
{
  "output": {
    "content": "Global assessments show that the largest recent urban air-quality improvements are concentrated in East China, parts of the eastern United States, Europe, and Japan, with especially strong gains in Chinese megacities and cities with aggressive traffic-emissions controls such as London [[1, 2, 3]].\n\n1) Beijing (China) — PM2.5 fell from ~89–90 µg/m³ in 2013 to ~58 µg/m³ in 2017 (about 35–36% in five years), with evidence from both satellite and surface observations [[4, 5]].\nKey drivers included coal phase-down, industrial controls, stricter vehicle/fuel standards, and regional enforcement [[6, 7, 8]].\n\n2) Chinese city clusters (BTH / YRD / PRD) — China's population-weighted PM2.5 fell ~32% from 2013–2017, with the largest modeled decline in Beijing–Tianjin–Hebei (~38%); across 367 cities, observed PM2.5 fell ~44% from 2013–2019 [[9, 10]].\nThe main drivers were national clean-air action plans, coal controls, industrial restructuring, and transport emissions standards [[7, 9, 10]].\n\n3) London (UK) — London achieved major NO2 reductions linked to LEZ/ULEZ policies, with monitoring and modeling studies showing accelerated declines after ULEZ implementation and meaningful reductions versus no-ULEZ scenarios [[11, 12, 13, 14]].",
    "content_type": "text",
    "sources": [
      {
        "url": "https://pubmed.ncbi.nlm.nih.gov/36356738/",
        "title": "Trends in urban air pollution over the last two decades: A global perspective - PubMed",
        "snippets": [
          "At global scale, PM2.5 exposures declined slightly from 2000 to 2019 ... Improvements were observed in the Eastern US, Europe, Southeast China, and Japan..."
        ]
      }
    ]
  }
}
```

The exhaustive response identifies additional cities (Seoul, with specific UNEP data), includes more granular measurements (µg/m³ ranges, percentage reductions over specific date ranges), and cross-references more sources to verify claims.

Citation-backed answers

Every claim in the response links back to a specific source via inline citations. Your users (or your system) can verify any statement by following the numbered references to the sources array.
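
Because the inline markers follow a predictable `[[n, m]]` pattern, you can map claims back to entries in the sources array programmatically. The helper below is an illustrative sketch, not part of the API — the function name and regex are assumptions about the documented marker format:

```python
import re

def extract_citations(content: str) -> list[int]:
    """Collect the unique source numbers cited inline as [[1, 2]]-style markers."""
    cited = set()
    for group in re.findall(r"\[\[([\d,\s]+)\]\]", content):
        cited.update(int(n) for n in group.split(","))
    return sorted(cited)

answer = (
    "RISC-V and ARM differ in licensing and ecosystem maturity [[1, 2]]. "
    "ARM requires per-chip fees, while RISC-V is royalty-free [[1, 3]]."
)
print(extract_citations(answer))  # → [1, 2, 3]
```

Each returned number indexes (1-based) into the response's sources array, so a citation audit is a simple lookup.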

Markdown output

The content field is formatted in Markdown with headers, lists, and inline citations — ready to render in a UI or feed into downstream processing.


Quickstart

```python
import os
import requests

API_KEY = os.environ["YDC_API_KEY"]

response = requests.post(
    "https://api.you.com/v1/research",
    headers={
        "X-API-Key": API_KEY,
        "Content-Type": "application/json",
    },
    json={
        "input": "Top 5 EV-selling companies worldwide in 2025 so far",
        "research_effort": "standard",
    },
)

data = response.json()
print(data["output"]["content"])

print(f"\n--- {len(data['output']['sources'])} sources ---")
for i, source in enumerate(data["output"]["sources"], 1):
    print(f"[{i}] {source['title']}: {source['url']}")
```

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| input | string | Yes | The research question (max 40,000 characters) |
| research_effort | string | No | Depth of research: lite, standard (default), deep, exhaustive |

View full API reference
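
Since the parameter constraints are known up front, a client can validate them before spending a request. This sketch assumes only the documented limits above; `build_payload` and the constant names are illustrative, not part of any SDK:

```python
VALID_EFFORT = {"lite", "standard", "deep", "exhaustive"}
MAX_INPUT_CHARS = 40_000  # documented limit for the input field

def build_payload(question: str, research_effort: str = "standard") -> dict:
    """Check parameters client-side before calling the Research API."""
    if not question:
        raise ValueError("input is required")
    if len(question) > MAX_INPUT_CHARS:
        raise ValueError(f"input exceeds {MAX_INPUT_CHARS} characters")
    if research_effort not in VALID_EFFORT:
        raise ValueError(f"research_effort must be one of {sorted(VALID_EFFORT)}")
    return {"input": question, "research_effort": research_effort}

payload = build_payload("Top 5 EV-selling companies worldwide in 2025 so far")
```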


Common use cases

Complex question answering

When a question can’t be answered from a single source — comparative analyses, multi-factor evaluations, questions that span multiple domains — the Research API handles the synthesis for you.

```python
import os

import requests

API_KEY = os.environ["YDC_API_KEY"]

response = requests.post(
    "https://api.you.com/v1/research",
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    json={
        "input": "Compare the pricing models of the top 3 vector databases and their tradeoffs for a 10M-document collection",
        "research_effort": "deep",
    },
)

data = response.json()
print(data["output"]["content"])
```

Due diligence and market research

Quickly gather verified, cited information about companies, markets, or technologies. The citation-backed output gives you traceability that raw LLM generation can’t.

Internal tools and knowledge assistants

Build internal research tools where employees can ask complex questions and get sourced answers — product comparisons, regulatory summaries, technical deep dives — without manually reading dozens of pages.

Content creation pipelines

Use the Research API as the first step in a content pipeline: ask a research question, get a cited draft, then use it as source material for blog posts, reports, or briefings.
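
One way to sketch that first step: run a single Research call and turn the response into a self-contained Markdown draft with a reference list appended. The function name, the source-list format, and the injectable `post` argument (handy for testing without network access) are all assumptions of this sketch:

```python
import os
import requests

def research_draft(question: str, effort: str = "deep", post=requests.post) -> str:
    """Run one Research API call and return a Markdown draft with sources appended."""
    response = post(
        "https://api.you.com/v1/research",
        headers={
            "X-API-Key": os.environ.get("YDC_API_KEY", ""),
            "Content-Type": "application/json",
        },
        json={"input": question, "research_effort": effort},
    )
    output = response.json()["output"]
    source_lines = [
        f"[{i}]: {s['url']} \"{s['title']}\""
        for i, s in enumerate(output["sources"], 1)
    ]
    return output["content"] + "\n\n" + "\n".join(source_lines)
```

The returned string can go straight into an editor, a static-site pipeline, or a second LLM pass that reworks the draft into a blog post or briefing.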


Best practices

Match research effort to the question

Don’t use exhaustive for simple factual questions — lite or standard will be faster and cheaper. Save deep and exhaustive for questions where thoroughness and accuracy justify the longer response time.
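
If requests come from many callers, a routing heuristic can pick a default tier automatically. The rules below are deliberately naive assumptions for illustration — tune the cues and thresholds to your own traffic:

```python
def suggest_effort(question: str) -> str:
    """Naive heuristic: escalate effort when a question asks for comparison or synthesis."""
    q = question.lower()
    if any(cue in q for cue in ("compare", "tradeoff", "across", "evaluate")):
        return "deep"
    if len(q.split()) > 25:  # long, multi-part questions benefit from more depth
        return "standard"
    return "lite"
```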

Verify citations for high-stakes use cases

The inline citations make verification straightforward. For legal, financial, or medical contexts, build a step that follows citation URLs to confirm claims before surfacing them to end users.
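
A minimal first pass is a reachability check over the cited URLs — confirming that each claim's source actually holds the stated content still requires reading the page. The helper name and the injectable `fetch` argument (useful for testing offline) are assumptions of this sketch:

```python
import requests

def check_sources(sources, fetch=requests.head):
    """Return the URLs of cited sources that error out or respond with HTTP >= 400."""
    broken = []
    for source in sources:
        try:
            status = fetch(source["url"], timeout=10, allow_redirects=True).status_code
        except requests.RequestException:
            broken.append(source["url"])
            continue
        if status >= 400:
            broken.append(source["url"])
    return broken
```

Anything returned by `check_sources` should block the answer from reaching end users until a human (or a deeper automated pass) reviews it.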

Use structured inputs for better results

The input field supports up to 40,000 characters. For complex research tasks, include context, constraints, or specific angles you want covered. A well-scoped question produces a more focused answer.
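
One way to structure such an input is to spell out the context, constraints, and angles as labeled lines in a single string. The labels and the example details below are illustrative, not a required format:

```python
# Compose a well-scoped research question from labeled parts.
question = "\n".join([
    "Compare the top 3 vector databases for a 10M-document semantic search workload.",
    "Context: documents average 2 KB; query volume is roughly 50 QPS.",
    "Constraints: managed hosting only; include current pricing.",
    "Cover: indexing cost, query latency, and operational tradeoffs.",
])

payload = {"input": question, "research_effort": "deep"}
```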


Pricing

Pricing is fixed per tier — see the research effort levels table above for per-tier pricing and latency. For more details, visit https://you.com/pricing or contact api@you.com.


Next steps