GEO Case Study: From Zero AI Citations to 40% Mention Rate in 45 Days

By Satish Kumar, Co-Founder & CEO, Astiva AI · Published 25 April 2026 · Q1 2026 beta cohort

Answer-first summary: A mid-market B2B SaaS company in marketing automation had 34 published pages and zero AI citations across ChatGPT, Claude, Perplexity, and Gemini. Five structural GEO fixes applied to 8 existing pages — no new content published, no backlinks built — produced a 40% mention rate on Perplexity, 27.5% on ChatGPT, and 22.5% on Claude in 45 days.

What was the AI mention rate after 45 days?

After 45 days, the brand achieved: Perplexity 40%, ChatGPT 27.5%, Claude 22.5%, Gemini 17% — up from 0% on every platform at baseline.

Platform     Before   After 45 days   Improvement
Perplexity   0%       40%             +40 pp
ChatGPT      0%       27.5%           +27.5 pp
Claude       0%       22.5%           +22.5 pp
Gemini       0%       17%             +17 pp

Why did a content-rich site have zero AI citations?

The company had 34 blog posts and pillar pages earning page-one Google rankings. Despite this, Astiva's initial AI audit returned 0% mention rate on every platform. The root cause was structural: content covered the right topics but was not formatted for AI extraction. Traditional SEO optimises for ranking signals. Generative Engine Optimization (GEO) requires answer-first structure, explicit sourced statistics, machine-readable schema, and assertive comparison verdicts.

Baseline audit finding: 0 of 34 pages scored above 15 out of 25 on Astiva's GEO readiness framework. Primary failure modes: answers buried mid-paragraph, no schema, no sourced statistics.

What five GEO fixes were applied to improve AI citation rate?

All five fixes were applied to 8 target pages only. No new pages published, no backlinks built.

  1. Answer-first paragraph structure — Every page opened with a direct answer in the first sentence. AI platforms extract answer fragments; buried answers are never cited.
  2. FAQPage + Article + Person schema — Pages with all three schema types showed citation rates 1.8x higher than prose-only rewrites. Schema gives AI platforms a machine-readable extraction surface.
  3. 3 to 5 sourced statistics per page — Each page included a minimum of three sourced statistics with source name and year visible inline. Specific, verifiable numbers are the most-cited content element across ChatGPT, Claude, and Perplexity.
  4. Question-format H2 headings — Section headings rewritten from label-style to question-format. Question H2s match exact query phrasing and dramatically increase extraction probability.
  5. Comparison tables with explicit verdict column — Side-by-side tables with a "Best for" verdict column. AI platforms quote verdict columns verbatim when users ask "which is better?"
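Fix 2 combines FAQPage, Article, and Person schema on a single page. A minimal sketch of that combined JSON-LD graph is below; every name, URL, and date is a placeholder, not the client's real data, and real pages would carry full property sets.

```python
import json

# Minimal JSON-LD sketch combining the three schema types from fix 2.
# All values are illustrative placeholders.
schema = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Article",
            "headline": "What is marketing automation?",
            "datePublished": "2026-01-15",
            "author": {"@id": "#author"},  # links Article to the Person node
        },
        {
            "@type": "Person",
            "@id": "#author",
            "name": "Jane Example",
            "jobTitle": "Head of Content",
        },
        {
            "@type": "FAQPage",
            "mainEntity": [
                {
                    "@type": "Question",
                    "name": "What is marketing automation?",
                    "acceptedAnswer": {
                        "@type": "Answer",
                        "text": "Marketing automation is software that runs campaigns without manual steps.",
                    },
                }
            ],
        },
    ],
}

# The serialized graph is embedded in the page head as
# <script type="application/ld+json">...</script>.
print(json.dumps(schema, indent=2))
```

Using a single `@graph` with an `@id` cross-reference keeps the Article, its author, and the FAQ entries in one machine-readable block rather than three disconnected scripts.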

How did AI mention rate change week by week over 45 days?

What does this case study prove about AI citation optimisation?

Methodology

40 prompts tested weekly across ChatGPT (GPT-4o), Claude 3.5 Sonnet, Perplexity (default web mode), and Gemini 1.5 Pro. Each prompt was run 3 times per platform per week; the brand counted as mentioned if it appeared in at least one of the three runs. Mention rate = (prompts with at least one brand mention) / (total prompts run). Tracking period: 45 days in Q1 2026. Results verified via the Astiva AI monitoring dashboard. Client identity protected under NDA.
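The mention-rate definition above can be sketched in a few lines. The prompts and per-run results below are illustrative stand-ins, not the study's actual data (the study used 40 prompts, not 4).

```python
# Sketch of the mention-rate calculation described in the methodology.
def mention_rate(runs_per_prompt: dict[str, list[bool]]) -> float:
    """runs_per_prompt maps each prompt to its per-run mention flags.

    A prompt counts as a mention if the brand appeared in at least
    one of its runs; the rate is mentioned prompts / total prompts.
    """
    mentioned = sum(1 for runs in runs_per_prompt.values() if any(runs))
    return mentioned / len(runs_per_prompt)

# Four illustrative prompts, three runs each on one platform in one week
week_results = {
    "best marketing automation tools": [True, False, True],
    "alternatives to a named vendor":  [False, False, False],
    "which platform is better":        [False, True, False],
    "top B2B SaaS email tools":        [True, True, True],
}
print(f"{mention_rate(week_results):.1%}")  # 3 of 4 prompts mentioned -> 75.0%
```

Counting any-of-three runs as a mention smooths over run-to-run variance in model output, which is why each prompt is sampled more than once per week.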

