You ran a fan out. You have a list of sub-queries. Now you're staring at it wondering if this is actually useful or just another novelty SEO tool.
It is useful. Here's how to turn the list into a content plan that actually gets you cited in ChatGPT, Claude, Perplexity, and Google AI Mode.
What is query fan out?
Odds are, you are here because there are a handful of commercial prompts you know would be valuable if you could get your company, product, or service to consistently show up in LLM results.
Query fan out is what happens behind the scenes when an LLM gets your prompt. It breaks your prompt into a handful of related sub-queries and fires them at search indexes like Google in parallel. Then it pulls the top results from each one, finds passages that agree with each other, and synthesizes a final answer. All in a few seconds.
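The loop is easier to see as code. This is a purely illustrative sketch: the sub-queries are hardcoded and `search` is a stub standing in for a real search API call, so every name here is an assumption, not how any particular LLM actually implements it.

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(prompt):
    # A real system derives these with a language model; they are
    # hardcoded here purely to show the shape of the fan out.
    return [
        f"{prompt} 2026",
        f"{prompt} alternatives",
        f"{prompt} comparison",
    ]

def search(sub_query):
    # Stub for a search API: returns placeholder top results.
    return [f"result for '{sub_query}' #{i}" for i in range(3)]

def answer(prompt):
    sub_queries = fan_out(prompt)
    # Fire every sub-query in parallel, like the patent describes.
    with ThreadPoolExecutor() as pool:
        result_sets = list(pool.map(search, sub_queries))
    # Synthesis step: a real LLM writes prose over passages that
    # agree with each other; here we just pool the evidence.
    evidence = [r for results in result_sets for r in results]
    return sub_queries, evidence

subs, evidence = answer("best accounting software for SaaS startups")
# 3 sub-queries, each with its own result set: 9 candidate passages
```

The point of the sketch: one prompt becomes several searches, and every result set is a separate chance to be cited.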
This is grounded in Google's published methodology (US patent US20240289407A1, "Search with Stateful Chat"), and you can see the same pattern at work in Perplexity, ChatGPT search, and Claude search. They are all running real searches under the hood and writing the answer on top.
The reason this matters: LLMs don't have a magical index of the web. They lean on search engines to retrieve evidence, then use language models to write the response. That means showing up in LLM answers is closer to classic SEO than most "GEO" content makes it sound. You don't need to chase a new pseudo-science. You need to know which sub-queries the LLM is actually running, and rank for those.
That is what this tool gives you: the related key terms that LLMs are quietly searching when someone asks about your space.
A quick example
Say you sell B2B accounting software and you want to rank when someone asks ChatGPT, "what's the best accounting software for SaaS startups?"
The LLM might fan that out into things like:
- "best accounting software for SaaS companies 2026"
- "QuickBooks alternatives for startups"
- "accounting software with revenue recognition for SaaS"
- "best accounting software for venture-backed companies"
- "Xero vs NetSuite for SaaS"
Five sub-queries, five sets of search results, five different sets of citations the LLM can pull from. If you only optimize for the surface prompt, you are fighting for one slot. If you optimize for the fan out, you have five shots at being cited.
Okay, I have my query fan out terms. Now what?
Step 1: Get them in a tracker
Drop your fan out terms into a keyword tracker. If you already pay for Semrush or Ahrefs, use those. If you don't want to fork out the money for a full SEO platform, SEOPress Keyword Rank Tracker is a free way to start. Google Search Console also works for any term where you already rank somewhere in the top 100.
Step 2: Diagnose where you stand
For each term, two quick checks:
- Are you ranking? Top 10, top 30, or nowhere?
- How competitive is the term? Look at the keyword difficulty score in your tracker, or eyeball the top 10 results. Are they all DR 80+ publications and category leaders, or is there a mix of mid-size sites in there?
Those two questions give you four scenarios. Each one has a different play.
Step 3: Pick the right play for each scenario
Scenario 1: You rank in the top 10, but not top 3
You are in the hunt. Don't rebuild the page. Tighten it. Add the exact sub-query phrasing as a section heading. Make sure the answer to that sub-query lives in a clean 150 to 300 word passage right under the heading. LLMs grab passages, not pages, so a scannable chunk that directly answers the sub-query gives you the best shot at being cited in the response.
Scenario 2: You rank somewhere on pages 2 to 5
You have a foundation, not focus. Either expand the existing page with a dedicated section that answers the sub-query head-on, or split off a new page if the sub-query is meaningfully different from what the parent page is about. Internal link from the parent to the new page using the sub-query phrasing as anchor text.
Scenario 3: You are not ranking, and the term is brutally competitive
Building a new page that out-ranks Forbes, G2, and three category leaders is a multi-quarter project, and it might never work. Skip the head-on fight. Instead, get mentioned in the pages that already rank.
Listicles, review sites, "best of" round-ups, directories, comparison pages. With a little elbow grease, you can either submit your site directly or reach out to the site owner and get added to existing top-ranked content. LLMs cite those listicles, and now your name shows up in the answer even though your own site never ranked. This is the fastest path to LLM visibility for crowded terms.
Scenario 4: You are not ranking, and the term is NOT very competitive
This is your highest-leverage scenario, and the one most people miss. Build the page. Use the sub-query as the H1 or a primary H2. Answer it directly in the first paragraph. Include a comparison table, a clear definition, and at least one concrete example with a real number in it.
Uncompetitive sub-queries are the ones LLMs love to surface because there is not much else to choose from. Win these first. They compound.
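The four scenarios compress into a tiny decision function. The thresholds below (position 10, position 50, a difficulty score of 60) are assumptions for illustration only; calibrate them against whatever scale your own tracker uses.

```python
def pick_play(rank_position, keyword_difficulty):
    """Map (where you rank, how hard the term is) to a play.

    rank_position: your current position, or None if you rank nowhere.
    keyword_difficulty: a 0-100 score from your tracker.
    The cutoffs are illustrative, not from any tool's documentation.
    """
    competitive = keyword_difficulty >= 60  # assumption: 60+ means crowded
    if rank_position is not None and rank_position <= 10:
        return "tighten the page"            # Scenario 1: in the hunt
    if rank_position is not None and rank_position <= 50:
        return "expand or split the page"    # Scenario 2: pages 2 to 5
    if competitive:
        return "get into ranking listicles"  # Scenario 3: skip the head-on fight
    return "build a new page"                # Scenario 4: highest leverage
```

Run every fan out term through a check like this and you have a prioritized to-do list instead of a raw keyword dump.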
A few habits that help in every scenario
No matter which scenario you're working on, a few things lift every page:
- Use the sub-query phrasing exactly. LLMs match on semantic similarity, and exact phrasing is the highest-confidence match.
- Write in passages, not walls of text. Each section should answer one sub-query in 150 to 300 words. That's the chunk size LLMs prefer to lift.
- Add data and specifics. "Sales rose" is forgettable. "Sales rose 34% to $4.2M in Q3" is the kind of sentence LLMs pull directly into answers.
- Spread your message off your own site too. LinkedIn posts, guest articles, podcast transcripts, and Reddit threads are all candidate passages. The more places your message lives, the more raffle tickets you have to be the cited source.
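The passage-size habit is easy to check mechanically. Here is a minimal sketch that counts the words under each heading of a markdown draft and flags sections outside the 150 to 300 word range; it assumes `#`-style markdown headings and is homegrown tooling, not part of any SEO platform.

```python
import re

def passage_sizes(markdown_text):
    """Word count per heading, plus whether each section lands
    in the 150-300 word passage range LLMs tend to lift."""
    sections = {}
    heading = "(intro)"
    for line in markdown_text.splitlines():
        m = re.match(r"#+\s+(.*)", line)
        if m:
            heading = m.group(1)
            sections.setdefault(heading, 0)
        else:
            sections[heading] = sections.get(heading, 0) + len(line.split())
    return {h: (n, 150 <= n <= 300) for h, n in sections.items()}
```

Paste a draft in, and any section that comes back flagged `False` is either too thin to answer its sub-query or too long to be lifted as a clean passage.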
FAQ
How is this different from regular keyword research?
Keyword research tells you what humans type into Google. Query fan out tells you what an LLM types into Google after it reads what a human asked it. Different layer, different leverage. You should do both.
Do all LLMs fan out the same way?
No. Each model has its own reasoning chain and its own training data, so the same prompt fans out differently in ChatGPT, Claude, Perplexity, and Gemini. The terms you see here are directional. The patterns hold across models even when the exact phrasing differs.
How often should I re-run a fan out?
Any time your prompt's context shifts. New competitor in the market, a product launch, or a new year. The tool injects today's date into the fan out, so a "best X" prompt run in 2026 will look different than the same prompt run in 2025.
Why are some of my sub-queries about a totally different industry than I expected?
That's a real product insight, not a bug. Lots of acronyms and short brand names have dominant meanings outside your industry. If your fan out comes back full of results from a completely different field, the LLM is telling you another meaning of that term wins on the open web. Trying to rank for the bare term without context is a bad bet. Use a longer phrase that includes your industry, or invest in a clear disambiguation page that defines the term in your context.
Run another fan out at the top of the page. Or if you'd rather have a team turn these terms into a content plan that actually ranks, reach out to New Chemistry.