AI Answer Engine Citation Behavior: An Empirical Analysis of the GEO-16 Framework

Research paper

Oct 24, 2025


Abstract

AI answer engines increasingly mediate access to domain knowledge by generating responses and citing web sources. We introduce GEO-16, a 16-pillar auditing framework that converts on-page quality signals into banded pillar scores and a normalized GEO score G that ranges from 0 to 1. Using 70 product-intent prompts, we collected 1,702 citations across three engines (Brave Summary, Google AI Overviews, and Perplexity) and audited 1,100 unique URLs.

In our corpus, the engines differed in the GEO quality of the pages they cited, and pillars related to Metadata and Freshness, Semantic HTML, and Structured Data showed the strongest associations with citation. Logistic models with domain clustered standard errors indicate that overall page quality is a strong predictor of citation, and simple operating points (for example, G at least 0.70 combined with at least 12 pillar hits) align with substantially higher citation rates in our data.
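The scoring and operating point described above can be sketched in a few lines. The exact banding and weighting rules of GEO-16 are not given in this excerpt, so this is an illustrative sketch under stated assumptions: each pillar score is taken to lie in [0, 1], G is their unweighted mean, and a pillar "hit" is assumed to mean clearing a 0.5 band.

```python
# Illustrative sketch of GEO-16 scoring; banding details are assumptions,
# not the paper's exact method.

def geo_score(pillar_scores):
    """Normalize 16 pillar scores into a GEO score G in [0, 1] (assumed: mean)."""
    assert len(pillar_scores) == 16, "GEO-16 expects exactly 16 pillars"
    return sum(pillar_scores) / len(pillar_scores)

def pillar_hits(pillar_scores, hit_threshold=0.5):
    """Count pillars clearing a per-pillar 'hit' band (threshold assumed)."""
    return sum(1 for s in pillar_scores if s >= hit_threshold)

def passes_operating_point(pillar_scores, g_min=0.70, min_hits=12):
    """Operating point from the abstract: G >= 0.70 and at least 12 pillar hits."""
    return (geo_score(pillar_scores) >= g_min
            and pillar_hits(pillar_scores) >= min_hits)
```

For example, a page scoring 1.0 on 12 pillars and 0.0 on the remaining 4 yields G = 0.75 with 12 hits, so it clears the operating point; a page scoring 0.6 on every pillar has 16 hits but G = 0.60 and does not.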

Get the full research

Explore our complete study, data, and findings behind this article.

Let us help you win in AI search.