What your budget actually buys.
Pay only for what you use. No seats, no subscriptions. Pick a model, set a budget, and Webhound works within it — showing every search, every page visited, every dollar spent. Same pricing for Reports, Datasets, Chains, and Ask.
Which model, which budget
Two models, one tradeoff. Pro is smarter and more accurate — the right pick for reports, where one mistake shows. Flash is cheaper and faster — the right pick for datasets, where you want more rows per dollar. Use the other combos only when you know why.
Pro report
What you want when the report has to be right. Pro is the smarter, more accurate model — it reasons through sources carefully and rarely makes things up. Use for memos, strategy docs, due diligence, anything someone will push back on.
- Smarter reasoning, higher accuracy
- Holds up to scrutiny claim-by-claim
- Best for memos, strategic analysis, deep research
Flash report
Cheaper and faster, but less accurate — Flash can get facts wrong or confuse similar sources. Good for casual scouting and draft-quality writeups, not for anything where accuracy is being judged.
- Roughly 4× cheaper per run than Pro
- Good for market scans, competitor lists, first drafts
- Expect occasional factual slips — verify before shipping
Flash dataset
What you want when you need volume. Every row gets verified regardless of model, but Flash is faster and cheaper per row — so you can cover far more ground for the same budget. Use for lists, directories, catalogs, anywhere throughput matters.
- ~2–3× more rows per dollar than Pro
- Verifier still checks every row
- Best for company lists, product catalogs, person rosters
Pro dataset
Fewer rows, but each one is smarter. Pro is better at hard fields — interpreting ambiguous data, reasoning across multiple pages, niche domains. Use when each row needs to be right, not just present.
- Smarter on difficult or ambiguous fields
- Better at multi-page reasoning and niche domains
- Best for financial data, legal records, research-grade lookups
Numbers are medians from real sessions at each price point, not estimates. Your run will vary with query breadth and source availability.
Turn on Deep read when precision matters more than breadth. It lets Webhound hold much more of each page in context per pass, so it can reason over more material at once and catch buried details that smaller-chunk reads miss. It multiplies both cost and runtime.
Minimums at a glance
Every session has a floor so the agent has enough budget to produce something useful. You can always top up a running session.
| Session type | Model | Mode | Minimum |
|---|---|---|---|
| Report | Flash | Standard | $2 |
| Report | Pro | Standard | $10 |
| Report | Flash | Deep read | $8 |
| Report | Pro | Deep read | $25 |
| Dataset | Flash | Standard | $1 |
| Dataset | Pro | Standard | $5 |
| Dataset | Flash | Deep read | $3 |
| Dataset | Pro | Deep read | $15 |
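The floor logic in the table above amounts to a lookup keyed on session type, model, and mode. A minimal sketch (the function name and key names are illustrative, not Webhound's actual API):

```python
# Illustrative sketch of the session-minimum table above.
# Keys and the helper name are hypothetical, not Webhound's API.
MINIMUMS = {
    ("report",  "flash", "standard"):   2,
    ("report",  "pro",   "standard"):  10,
    ("report",  "flash", "deep_read"):  8,
    ("report",  "pro",   "deep_read"): 25,
    ("dataset", "flash", "standard"):   1,
    ("dataset", "pro",   "standard"):   5,
    ("dataset", "flash", "deep_read"):  3,
    ("dataset", "pro",   "deep_read"): 15,
}

def minimum_budget(session_type: str, model: str, mode: str) -> int:
    """Return the dollar floor for a session configuration."""
    return MINIMUMS[(session_type, model, mode)]

print(minimum_budget("report", "pro", "deep_read"))  # → 25
```

Note the pattern in the table itself: Pro floors are roughly 5× Flash floors, and Deep read raises each floor further because every pass consumes more tokens.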
What each dollar breaks down into
Session cost is the sum of LLM token usage plus every scrape and search the agent makes. Here's what each component costs at the margin.
LLM tokens
Most of a session's cost is tokens. Every page the agent reads, every todo it plans, every cycle it verifies — it all passes through the LLM. Pro tokens cost ~4× more than Flash tokens, and that's where most of the price gap between the two models comes from.
Search & scrape
A single $10 report typically runs 50–200 of these operations depending on depth. Failed scrapes don't cost anything.
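Putting the two components together, a back-of-envelope estimate looks like the sketch below. The dollar rates are hypothetical placeholders — the section above gives only relative pricing (Pro tokens at ~4× Flash), not absolute per-token or per-operation rates:

```python
# Back-of-envelope session cost = token spend + search/scrape spend.
# All rates below are HYPOTHETICAL placeholders for illustration;
# only the ~4x Pro/Flash token ratio comes from the section above.
FLASH_RATE_PER_MTOK = 1.0   # hypothetical: $ per million Flash tokens
PRO_RATE_PER_MTOK = 4.0     # ~4x Flash, per the section above
COST_PER_OPERATION = 0.01   # hypothetical: $ per search or scrape

def estimate_cost(million_tokens: float, operations: int, model: str) -> float:
    """Sum token spend and operation spend; failed scrapes cost nothing."""
    token_rate = PRO_RATE_PER_MTOK if model == "pro" else FLASH_RATE_PER_MTOK
    return million_tokens * token_rate + operations * COST_PER_OPERATION

# A $10-class Pro report running 120 operations is dominated by tokens:
print(round(estimate_cost(2.0, 120, "pro"), 2))  # → 9.2
```

Even with placeholder rates, the shape holds: token usage dominates, and the search/scrape operations are a small fraction of the total.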
