GEO Case Study: From Zero AI Citations to 40% Mention Rate in 45 Days
By Satish Kumar, Co-Founder & CEO, Astiva AI · Published 25 April 2026 · Q1 2026 beta cohort
Answer-first summary: A mid-market B2B SaaS company in marketing automation had 34 published pages and zero AI citations across ChatGPT, Claude, Perplexity, and Gemini. Five structural GEO fixes applied to 8 existing pages — no new content published, no backlinks built — produced a 40% mention rate on Perplexity, 27.5% on ChatGPT, and 22.5% on Claude in 45 days.
What was the AI mention rate after 45 days?
After 45 days, the brand achieved: Perplexity 40%, ChatGPT 27.5%, Claude 22.5%, Gemini 17% — up from 0% on every platform at baseline.
| Platform | Before | After 45 days | Improvement |
| --- | --- | --- | --- |
| Perplexity | 0% | 40% | +40 pp |
| ChatGPT | 0% | 27.5% | +27.5 pp |
| Claude | 0% | 22.5% | +22.5 pp |
| Gemini | 0% | 17% | +17 pp |
Why did a content-rich site have zero AI citations?
The company had 34 blog posts and pillar pages earning page-one Google rankings. Despite this, Astiva's initial AI audit returned 0% mention rate on every platform. The root cause was structural: content covered the right topics but was not formatted for AI extraction. Traditional SEO optimises for ranking signals. Generative Engine Optimization (GEO) requires answer-first structure, explicit sourced statistics, machine-readable schema, and assertive comparison verdicts.
Baseline audit finding: 0 of 34 pages scored above 15 out of 25 on Astiva's GEO readiness framework. Primary failure modes: answers buried mid-paragraph, no schema, no sourced statistics.
What five GEO fixes were applied to improve AI citation rate?
All five fixes were applied to the same 8 target pages; no new pages were published and no backlinks were built.
- Answer-first paragraph structure — Every page opened with a direct answer in the first sentence. AI platforms extract answer fragments; buried answers are never cited.
- FAQPage + Article + Person schema — Pages with all three schema types showed citation rates 1.8x higher than prose-only rewrites. Schema gives AI platforms a machine-readable extraction surface.
- 3 to 5 sourced statistics per page — Each page included a minimum of three sourced statistics with source name and year visible inline. Specific, verifiable numbers are the most-cited content element across ChatGPT, Claude, and Perplexity.
- Question-format H2 headings — Section headings rewritten from label-style to question-format. Question H2s match exact query phrasing and dramatically increase extraction probability.
- Comparison tables with explicit verdict column — Side-by-side tables with a "Best for" verdict column. AI platforms quote verdict columns verbatim when users ask "which is better?"
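A minimal sketch of the combined FAQPage + Article + Person markup described in fix 2, expressed as JSON-LD. Every name, date, URL, and text value below is a hypothetical placeholder for illustration, not the client's actual markup; a real page would embed this in a `<script type="application/ld+json">` tag.

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Article",
      "headline": "What Is Marketing Automation? (placeholder headline)",
      "datePublished": "2026-01-15",
      "author": { "@id": "#author" }
    },
    {
      "@type": "Person",
      "@id": "#author",
      "name": "Jane Example",
      "jobTitle": "Head of Content"
    },
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "What is marketing automation?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Marketing automation is... (the page's answer-first paragraph goes here)."
          }
        }
      ]
    }
  ]
}
```

The `@graph` array lets all three types live in one block, with the `@id` reference linking the Article to its Person author.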
How did AI mention rate change week by week over 45 days?
- Days 1 to 7: Baseline audit. All mention rates 0% on all platforms.
- Days 8 to 14: All 5 fixes deployed to 8 pages. Pages re-crawled by Googlebot within 5 days.
- Days 15 to 21: Perplexity first signal: 8.5% mention rate. ChatGPT and Claude still 0%.
- Days 22 to 35: ChatGPT reached 12.5%. Claude first citation at 7.5%. Perplexity climbed to 27%.
- Days 36 to 45: Final snapshot — Perplexity 40%, ChatGPT 27.5%, Claude 22.5%, Gemini 17%.
What does this case study prove about AI citation optimisation?
- Structural fixes outperform new content. All gains came from restructuring 8 existing pages.
- Perplexity is the fastest early signal. Use Perplexity mention rate as the leading indicator — if it doesn't respond within 2 weeks, the fix is not working.
- SEO rank and AI citation rate are independent. All 8 pages ranked on page one for SEO already. Ranking position did not predict citation rate.
- Schema is the highest-leverage single fix. Pages with FAQPage + Article + Person schema showed citation rates 1.8x higher than prose-only rewrites.
Methodology
40 prompts tested weekly across ChatGPT (GPT-4o), Claude 3.5 Sonnet, Perplexity (default web mode), and Gemini 1.5 Pro. Each prompt run 3 times per platform per week; brand counted as mentioned if it appeared in at least one of the three runs. Mention rate = (prompts with at least one brand mention) / (total prompts run). Tracking period: Q1 2026, 45 days. Results verified via Astiva AI monitoring dashboard. Client identity protected under NDA.
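The mention-rate metric above can be sketched in a few lines of Python. Each prompt is run three times per platform; the brand counts as mentioned if it appears in at least one of the three runs. The prompt names and results below are hypothetical illustrations, not the study's raw data.

```python
def mention_rate(results: dict[str, list[bool]]) -> float:
    """results maps prompt -> per-run booleans (was the brand mentioned?).

    A prompt counts as a mention if any of its runs mentioned the brand.
    """
    mentioned = sum(1 for runs in results.values() if any(runs))
    return mentioned / len(results)

# Hypothetical one-week snapshot for a single platform (4 of the 40 prompts).
week_snapshot = {
    "best marketing automation tools 2026": [True, False, False],
    "top email workflow platforms":         [False, False, False],
    "marketing automation for mid-market":  [True, True, False],
    "how to score leads automatically":     [False, False, False],
}

print(f"{mention_rate(week_snapshot):.1%}")  # 2 of 4 prompts mentioned -> 50.0%
```

Weekly tracking is then just this calculation repeated per platform over the full 40-prompt set.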
Start Free Trial — Run the Same Audit on Your Site · Free AI Visibility Analysis · Astiva Methodology