Research
Evidence-based insights on optimizing content for AI answer engines. Our research explores citability, recall fidelity, and governance frameworks for the generative web.
New to AI answer engine optimization? Follow the structured path below, from recall fidelity to citability to governance, to build a comprehensive understanding.
Recall Fidelity in the Age of Generative Engines
The integration of large language models (LLMs) into search interfaces has transformed the dynamics of digital visibility. This paper introduces recall fidelity as a measurable construct and explores strategies for achieving consistent recall in AI-generated responses.
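To make "measurable construct" concrete, a minimal sketch of one way recall fidelity could be estimated empirically; the query_model callable is a hypothetical stand-in, and the substring checks are a simplification of the semantic matching a real pipeline would use:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeUnit:
    """A discrete, self-contained claim we want the model to recall."""
    claim: str          # the fact itself
    source_name: str    # the attribution we expect to see alongside it

def recall_fidelity(unit: KnowledgeUnit, prompts: list[str],
                    query_model) -> float:
    """Estimate recall fidelity as the fraction of responses that both
    reproduce the claim and attribute it to the expected source.

    `query_model` is a hypothetical callable (prompt -> response text).
    """
    hits = 0
    for prompt in prompts:
        response = query_model(prompt).lower()
        recalled = unit.claim.lower() in response      # knowledge unit present
        attributed = unit.source_name.lower() in response  # attribution preserved
        if recalled and attributed:
            hits += 1
    return hits / len(prompts) if prompts else 0.0
```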
Architecting Citability in the Generative Web
This paper introduces the concept of a citability substrate—a generalizable set of patterns that improve a content source's likelihood of being referenced by generative models.
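One concrete instance of the structured-metadata pattern is schema.org JSON-LD attached to each modular content unit. The sketch below is illustrative only; the field choices are assumptions, not markup prescribed by the paper:

```python
import json

def article_jsonld(headline: str, author: str, url: str,
                   claims: list[str]) -> str:
    """Emit schema.org JSON-LD for a modular content unit, so crawlers
    and retrieval pipelines can parse authorship and key claims without
    relying on free-text extraction."""
    doc = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "url": url,
        # One entry per self-contained claim keeps the content modular
        # and each unit individually citable.
        "abstract": " ".join(claims),
    }
    return json.dumps(doc, indent=2)
```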
Governance, Monitoring, and Generative Share-of-Voice (GSoV)
This paper introduces Generative Share-of-Voice (GSoV), the percentage of LLM answers that faithfully cite or align with a given source, and argues that continuous measurement, cryptographic provenance, and rights-aware metadata are essential for maintaining brand integrity.
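As a sketch of how GSoV might be computed over a sample of logged answers, assuming the per-answer citation and alignment labels come from an upstream classifier or human review:

```python
def generative_share_of_voice(answers: list[dict], source: str) -> float:
    """GSoV: fraction of sampled LLM answers that faithfully cite or
    align with `source`.

    Each answer dict is assumed to carry pre-computed labels, e.g.
    {"cited_sources": [...], "aligned_sources": [...]}.
    """
    if not answers:
        return 0.0
    hits = sum(
        1 for a in answers
        if source in a.get("cited_sources", [])
        or source in a.get("aligned_sources", [])
    )
    return hits / len(answers)

# Example: GSoV of 0.5 when one of two sampled answers cites the source.
sample = [
    {"cited_sources": ["example.com"], "aligned_sources": []},
    {"cited_sources": [], "aligned_sources": []},
]
print(generative_share_of_voice(sample, "example.com"))  # 0.5
```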
Key Concepts
- Generative Share-of-Voice (GSoV): The percentage of LLM answers that faithfully cite or align with a given source; a visibility metric for the age of AI-generated content.
- Recall Fidelity: The likelihood that a model retrieves and regenerates a specific knowledge unit with correct attribution and preserved context.
- Citability Substrate: A generalizable set of patterns, including modularity, structured metadata, and exposure frequency, that improve a content source's likelihood of being cited by LLMs.
- Attribution Drift: The omission, generalization, or misassignment of citations in LLM outputs, which threatens factual integrity and traceability.