
Covering how brands show up in LLM-driven experiences, with practical research and real-world examples.
Summary: DreamFactory had a large content library but was invisible in AI search. XLR8 AI found the exact questions DreamFactory's buyers were asking ChatGPT and Claude — by pulling verbatim language from sales call transcripts. The result: 90%+ visibility on Google AI Mode and #1 share of voice across all major LLMs.
This case study covers how DreamFactory, an enterprise API platform, partnered with XLR8 AI to go from invisible in AI-generated answers to dominating every major LLM for their category. The approach was unconventional: rather than starting with keyword research, XLR8 AI extracted the exact buyer questions from DreamFactory's sales call transcripts — then engineered content that made DreamFactory the cited answer for every one of them.
| Metric | Before | After |
|---|---|---|
| Google AI Mode visibility | — | 91.3% |
| Overall LLM visibility | Baseline | +79% |
| Share of voice on ChatGPT, Claude, Gemini, Grok, Perplexity | Not in top 5 | #1 across all platforms |
| New content citation rate | 0% | 100% |
| AI Gateway visibility | 20% | 78% |
| SQL Server visibility | 61% | 79% |
DreamFactory is a self-hosted platform that provides governed API access to any data source for enterprise applications and local LLMs. It enables teams to securely expose databases including MySQL, PostgreSQL, MongoDB, SQL Server, Snowflake, and BigQuery through automatically generated REST APIs. Built for enterprises with strict governance, data sovereignty, and on-premises requirements.
By mid-2025, enterprise software buyers had shifted a significant portion of their research to AI assistants. Instead of Googling "best SQL Server API tool," they were opening ChatGPT or Claude and asking real questions about real problems they were stuck on.
DreamFactory was the right answer to hundreds of those questions every day — but it was not appearing in the responses.
The root cause was not a lack of content. DreamFactory had a large blog and substantial technical documentation. The problem was that the content was not aligned to how buyers actually phrased their questions inside AI assistants. And no one had systematically identified which questions those were — until XLR8 AI pulled them directly from DreamFactory's sales call recordings.
Before writing a single piece of content, XLR8 AI made a distinction that most GEO programs miss entirely.
Definitional queries — "what is a REST API," "what does RBAC stand for" — are answered entirely from an LLM's training data. No web retrieval happens. No external sources are cited. No vendors are mentioned. Optimizing for these queries produces zero pipeline impact regardless of content quality.
Implementation-level informational queries — "how do I generate REST APIs from SQL Server," "how do I securely expose my database as a REST API" — behave completely differently. The buyer needs a practical, current answer. LLMs retrieve external sources for these, compare available solutions, and cite the platforms they recommend. This is the only mode in AI search where vendor recommendations are possible.
DreamFactory's buyers were overwhelmingly asking implementation-level informational queries. Every content and optimization decision that followed was built around this single distinction.
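The definitional-versus-implementation triage described above can be illustrated with a simple cue-based router. This is a hypothetical sketch, not XLR8 AI's actual classifier; the cue lists are assumptions chosen for the example:

```python
# Hedged sketch: route queries by surface cues before spending optimization effort.
IMPLEMENTATION_CUES = ("how do i", "how to", "how can", "best way to", "set up", "configure")
DEFINITIONAL_PREFIXES = ("what is", "what does", "define")

def query_mode(query: str) -> str:
    q = query.lower().strip()
    if any(cue in q for cue in IMPLEMENTATION_CUES):
        return "implementation"  # likely triggers web retrieval and vendor citations
    if q.startswith(DEFINITIONAL_PREFIXES):
        return "definitional"    # answered from training data; no sources cited
    return "unknown"

print(query_mode("how do I generate REST APIs from SQL Server"))  # implementation
print(query_mode("what is a REST API"))                           # definitional
```

Only queries that land in the "implementation" bucket are worth optimization budget, since only they can produce a vendor citation.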
Keyword research tools show what people search for on Google at volume. DreamFactory's most valuable buyer queries don't appear there — they are conversational, problem-specific questions asked to AI assistants that never get typed into a traditional search bar.
The only place these queries exist in raw form is in sales call recordings, where buyers describe what they were searching for before they found the product.
XLR8 AI built a methodology to extract verbatim buyer language from DreamFactory's sales call transcripts, preserving exact phrasing rather than summarizing themes. The output was six precise informational queries that DreamFactory needed to win, and they became the foundation of the entire strategy.
Each query represents a buyer in the middle of solving a problem — past the awareness stage, actively deciding how to solve something. DreamFactory was the right answer to every one of them. Before this engagement, it was absent from all of them.
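The extraction step can be sketched as a pass that keeps question-form sentences word for word instead of abstracting them into themes. The cue phrases, regex, and example transcript below are illustrative assumptions, not XLR8 AI's production pipeline:

```python
import re

# Hypothetical cue phrases; a real pass would tune these per vertical.
QUESTION_CUES = ("how do i", "how do we", "is there a way to")

def extract_buyer_questions(transcript: str) -> list[str]:
    """Return question-form sentences verbatim, without summarizing them into themes."""
    sentences = re.split(r"(?<=[.?!])\s+", transcript.strip())
    hits = []
    for s in sentences:
        lowered = s.lower()
        if s.endswith("?") or any(cue in lowered for cue in QUESTION_CUES):
            hits.append(s)  # exact phrasing preserved
    return hits

call = ("We tried building it in-house. "
        "How do I securely expose my database as a REST API? "
        "That was the blocker.")
print(extract_buyer_questions(call))
```

The point of keeping the sentence intact is that the buyer's exact wording is the string an AI assistant later has to match against.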
With the six queries identified, XLR8 AI mapped each one against DreamFactory's existing content, what LLMs were currently citing in response, and the specific gap. Every action was query-specific — not a generic content calendar, but a direct fix for a precise citation failure.
Content was optimized using XLR8 AI's adversarial ML-driven optimizer — engineering each asset for cosine similarity to the target query vector, not keyword density, in a format that matches how LLMs chunk and extract text.
The result: Every asset across the AI Gateway category (13 articles) and SQL Server category (14 assets) achieved a 100% citation rate post-publication across ChatGPT, Claude, Gemini, Grok, and Perplexity.
Once the informational query foundation was in place, XLR8 AI expanded DreamFactory's visibility across four additional areas:
Documentation restructuring. DreamFactory's technical docs were written for human readers, not LLM extraction. XLR8 AI restructured four documentation files and two connector pages using a template optimized for how LLMs chunk and retrieve text. All six were cited post-revision — including connector pages that had previously received zero citations.
Funnel expansion. XLR8 AI expanded into discovery, competitor alternative, competitor comparison, transactional, and sentiment queries — covering the full buyer conversation, not just the initial research phase.
GPT Thinking Mode tracking. XLR8 AI identified that most of DreamFactory's target queries trigger GPT Thinking Mode rather than GPT Fast Mode, and that citation behavior differs between the two. Custom tracking infrastructure was built to measure citations separately across both models — something standard GEO tools cannot do.
Sentiment remediation. LLMs were surfacing 56 distinct "cons" about DreamFactory across 93 source citations. XLR8 AI's Action Center identified each objection and its source, enabling targeted content to replace simplified negative framings with accurate, specific context.
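The remediation workflow above amounts to tallying objections against the sources that repeat them, then fixing the widest-reach objections first. A minimal sketch, with hypothetical records rather than the Action Center's actual data model:

```python
from collections import defaultdict

# Hypothetical observations: (objection, citing source) pairs collected from LLM answers.
observations = [
    ("steep learning curve", "https://example.com/review-a"),
    ("steep learning curve", "https://example.com/roundup-b"),
    ("limited connector docs", "https://example.com/review-a"),
]

def triage_cons(observations):
    """Group objections by the set of sources repeating them, widest reach first."""
    sources = defaultdict(set)
    for con, source in observations:
        sources[con].add(source)
    return sorted(sources.items(), key=lambda kv: len(kv[1]), reverse=True)

for con, srcs in triage_cons(observations):
    print(f"{con}: {len(srcs)} source(s)")
```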
"XLR8 AI helped us turn AI search from a black box into a measurable growth channel. Within months, we achieved 80%+ visibility across major LLMs and became the top-cited solution for key API Gateway and SQL-to-API queries. My biggest takeaway of working with the team is their technical knowledge of how AI Search works — they're not a marketing team, but a team of ML experts helping us navigate this shift as SEO is fundamentally changing."
— Terence Bennett, CEO, DreamFactory
1. Sales call transcripts are the best source of AI search keywords. The conversational queries buyers type into ChatGPT don't appear in keyword research tools. The only place they exist verbatim is in your sales call recordings.
2. There is a critical distinction between definitional and implementation-level queries. Only implementation-level queries trigger LLM web retrieval and vendor recommendations. Optimizing for definitional queries wastes resources.
3. Technical documentation is an underutilized GEO asset. DreamFactory's connector pages had zero citations before restructuring. After restructuring for LLM extraction, all six were cited. Docs are not just for users — they are citations waiting to happen.
4. LLM cons and sentiment are measurable and addressable. DreamFactory had 56 distinct negative talking points circulating in AI responses. These were identified, sourced, and systematically replaced through targeted content.
5. GPT Thinking Mode and GPT Fast Mode behave differently. Enterprise software companies whose queries trigger Thinking Mode need specialized tracking infrastructure. Standard GEO tools do not differentiate between these modes.
Google AI Mode is Google's AI-powered answer layer that generates direct responses to queries rather than displaying a list of blue links. It uses retrieval-augmented generation (RAG) to pull from indexed content and cite sources. Appearing in Google AI Mode is distinct from traditional Google ranking and requires GEO-specific optimization.
Cosine similarity is a mathematical measure of how closely aligned two pieces of text are in meaning. LLMs use this concept when deciding which sources to retrieve in response to a query. XLR8 AI optimizes content for high cosine similarity to target queries — meaning the content is semantically aligned to the exact question being asked, not just keyword-matched.
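A toy illustration of the measure, using a bag-of-words stand-in where a production pipeline would use a sentence-embedding model. The vocabulary and example texts are assumptions for the demo:

```python
import math

VOCAB = ["generate", "rest", "apis", "sql", "server", "securely", "expose", "database"]

def embed(text: str) -> list[float]:
    # Stand-in embedding: word counts over a tiny shared vocabulary.
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

query = embed("how do i generate rest apis from sql server")
on_topic = embed("generate rest apis from sql server in minutes")
off_topic = embed("securely expose a database")
print(cosine_similarity(query, on_topic))   # higher: shares the query's vocabulary
print(cosine_similarity(query, off_topic))  # lower: semantically adjacent but off-query
```

Content engineered for high cosine similarity to a target query scores like the on-topic example here, regardless of how often any single keyword repeats.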
GPT Thinking Mode is a mode in which ChatGPT performs extended reasoning before generating a response. It behaves differently from GPT Fast Mode in terms of how it retrieves and cites sources. For enterprise software queries with technical complexity, Thinking Mode is commonly triggered.
LLMs retrieve information in chunks of approximately 80–100 words. Documentation written for sequential human reading often doesn't break naturally into these chunks. Restructuring docs to use clear headers, explicit entity mentions, and self-contained answer blocks dramatically improves retrieval and citation probability.
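The restructuring principle can be sketched as a chunker that targets that word budget while breaking only at sentence boundaries, so each chunk stays a self-contained answer block. The 90-word target and splitting regex are assumptions for illustration:

```python
import re

def chunk_words(text: str, target: int = 90) -> list[str]:
    """Split text into roughly target-word chunks, breaking only at sentence boundaries."""
    sentences = re.split(r"(?<=[.?!])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sentence in sentences:
        n = len(sentence.split())
        if current and count + n > target:
            chunks.append(" ".join(current))  # flush before exceeding the budget
            current, count = [], 0
        current.append(sentence)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks

doc = " ".join(["This sentence has exactly six words."] * 30)  # 180 words total
print([len(c.split()) for c in chunk_words(doc)])  # [90, 90]
```

Writing docs so that headers and entity mentions land at these chunk boundaries is what makes a passage retrievable on its own.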
XLR8 AI combines proprietary ML-driven content optimization, sales call transcript analysis, multi-model citation tracking (including GPT Thinking vs. Fast Mode), and sentiment remediation into a single integrated program. Most GEO tools offer reporting only. XLR8 AI executes. Learn more at tryxlr8.ai.
About XLR8 AI: XLR8 AI is an end-to-end AI search visibility and GEO optimization platform for enterprises. The company combines proprietary ML software, dedicated GEO strategists, and hands-on execution to improve how brands appear in ChatGPT, Perplexity, Google AI Mode, and other AI platforms. Learn more at tryxlr8.ai.


