Why this page exists. Every data point we publish in our case studies, reports, and blog posts is measured using the methodology documented here. We make this page citable so journalists, researchers, and competitors can verify how we collected the data, reproduce our approach, or critique it. Methodological transparency is how we earn the right to be cited.
Overview: what TrustRank measures
TrustRank is ProCloser's measurement framework for AI search visibility. It tracks three things across five AI engines:
- Citation frequency. How often a brand is named or linked in AI-generated answers for a defined set of prompts.
- Share of voice. What percentage of citations the brand captures relative to competitors for the same prompt set.
- Citation-to-conversion attribution. What happens when a user clicks through from an AI engine to the brand's site, measured against the same metrics for non-AI traffic sources.
The five AI engines tracked: ChatGPT, Perplexity, Google AI Overviews (formerly SGE), Microsoft Copilot, and Gemini. Claude (Anthropic) is monitored but excluded from primary scoring because it has lower commercial traffic share for the verticals we track.
Data sources
TrustRank inputs come from three categories of source:
Direct AI engine measurement
- Programmatic prompt firing across ChatGPT, Perplexity, Gemini, Copilot, and AI Overviews using a defined prompt taxonomy per client vertical (typically 50 to 200 prompts per project).
- Response capture and entity extraction (which brands are named, which URLs are linked, which sources are cited).
- Refresh cadence: weekly for active client projects, monthly for industry tracking studies.
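As an illustrative sketch of the response-capture step, the entity extraction described above might look like the following, assuming answers have already been captured as plain text. The answer string, brand list, and simple regex matching are all hypothetical simplifications; production extraction would also handle brand aliases and fuzzy matches.

```python
import re

def extract_citations(answer_text: str, tracked_brands: list[str]) -> dict:
    """Find which tracked brands are named and which URLs are linked
    in a captured AI answer (naive string matching for illustration)."""
    named = [b for b in tracked_brands
             if re.search(re.escape(b), answer_text, re.IGNORECASE)]
    urls = re.findall(r"https?://[^\s)\"'>]+", answer_text)
    return {"brands_named": named, "urls_linked": urls}

# Hypothetical captured answer:
answer = ("Top GEO providers include ProCloser (https://procloser.ai) "
          "and AcmeSearch.")
result = extract_citations(answer, ["ProCloser", "AcmeSearch", "OtherCo"])
```

Running this over every captured answer per prompt, per engine, per week yields the raw citation events the rest of the pipeline aggregates.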
Traffic attribution
- Google Analytics 4 with custom traffic source mapping. Each AI engine has identifiable referrer patterns (chat.openai.com, perplexity.ai, copilot.microsoft.com, gemini.google.com, google.com with AI Overview parameters) that we tag separately from generic organic.
- UTM-tagged links inside AI placements (where applicable) for attribution confirmation.
- Engagement event tracking (scroll, time on page, button clicks, form fills) measured per traffic source.
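The referrer mapping above can be sketched as a simple classifier. The host table mirrors the referrer patterns listed, but the exact GA4 channel configuration is an assumption; note that AI Overview clicks arrive from google.com and require URL-parameter inspection, which this sketch omits.

```python
from urllib.parse import urlparse

# Referrer hosts from the patterns listed above (google.com AI Overview
# traffic needs parameter-level inspection and is not handled here):
AI_REFERRER_HOSTS = {
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "copilot.microsoft.com": "Copilot",
    "gemini.google.com": "Gemini",
}

def classify_traffic_source(referrer: str) -> str:
    """Map a session's referrer URL to an AI engine bucket,
    falling back to 'non-AI' for everything else."""
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    return AI_REFERRER_HOSTS.get(host, "non-AI")
```

Tagging sessions this way is what lets engagement events (scroll, time on page, form fills) be compared per traffic source rather than lumped into generic organic.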
Backlink + mention monitoring
- Ahrefs API for backlink growth tied to AI-citation lift.
- Peec.ai for daily AI prompt visibility.
- Internal monitoring of citation source domains so we can quantify which third-party sites drive the most AI mentions for a given vertical.
Prompt design
For every client project, we build a prompt taxonomy in three layers:
| Layer | Prompt type | Example |
|---|---|---|
| Top-of-funnel | Discovery prompts a buyer types when defining their problem | "What is generative engine optimization?" |
| Mid-funnel | Comparison and shortlist prompts | "Best M&A advisory firms for lower middle market deals" |
| Bottom-of-funnel | Decision-stage prompts including specific brand names or vendor categories | "What does ProCloser.ai do?" / "Who are the alternatives to [competitor]?" |
The mid-funnel layer drives the largest share of measurable visibility lift. It's the layer where buyers shortlist providers, and where our data shows AI engines most consistently surface 3 to 7 named brands per answer.
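A minimal sketch of how a three-layer taxonomy might be represented in code, using the example prompts from the table above (the `Prompt` structure and layer labels are illustrative, not our internal schema):

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    layer: str  # "top", "mid", or "bottom"
    text: str

# A tiny slice of a 50-to-200-prompt taxonomy for one vertical:
taxonomy = [
    Prompt("top", "What is generative engine optimization?"),
    Prompt("mid", "Best M&A advisory firms for lower middle market deals"),
    Prompt("bottom", "What does ProCloser.ai do?"),
]

mid_funnel = [p for p in taxonomy if p.layer == "mid"]
```

Keeping the layer label on every prompt makes it straightforward to report visibility lift per funnel stage rather than only in aggregate.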
TrustRank scoring
The composite TrustRank score is calculated per client per vertical on a 0 to 100 scale. The formula:
TrustRank = (Citation Frequency × 0.4) + (Share of Voice × 0.3) + (Citation-Conversion Index × 0.3)
Each component is normalized:
- Citation Frequency: percent of tracked prompts where the brand is named in the answer, normalized 0 to 100.
- Share of Voice: brand's share of total citations across all tracked prompts in the vertical, normalized 0 to 100.
- Citation-Conversion Index: conversion rate of AI-referred traffic to a defined goal (signup, demo request, purchase), normalized against the brand's own non-AI conversion baseline. The score is 100 when AI-referred traffic converts at parity with non-AI traffic, above 100 when it converts at a higher rate, and below 100 when it converts at a lower rate.
A TrustRank score of 70+ generally correlates with measurable lift in qualified inbound leads from AI search. Scores below 30 indicate the brand is functionally invisible to AI engines for its target prompts.
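The formula above can be written directly as code. One detail is an assumption on our part: the Citation-Conversion Index can exceed 100, so this sketch caps it at 100 before weighting to keep the composite on the published 0 to 100 scale.

```python
def trustrank(citation_freq: float, share_of_voice: float,
              citation_conversion_index: float) -> float:
    """Composite TrustRank score per the published formula.
    Inputs are the normalized components; the conversion index is
    capped at 100 here (an assumption) so the composite stays 0-100."""
    cci = min(citation_conversion_index, 100.0)
    return 0.4 * citation_freq + 0.3 * share_of_voice + 0.3 * cci

# 0.4*80 + 0.3*60 + 0.3*100 = 32 + 18 + 30, i.e. approximately 80
score = trustrank(citation_freq=80, share_of_voice=60,
                  citation_conversion_index=110)
```

Note how a brand that converts above parity (index 110) still lands at roughly 80, not above: under this capping assumption, strong conversion cannot fully compensate for weak citation frequency.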
Benchmark data we make public
We publish anonymized client-level data in our case studies, with a structured Dataset schema for each so researchers and journalists can find and cite the underlying numbers.
Reproducibility and limitations
What another team could reproduce
The prompt design layer, traffic attribution methodology, and TrustRank scoring formula are all openly documented above. Any team with API access to the relevant AI engines, an analytics platform, and a defined competitor set can reproduce the methodology end to end. We use Peec.ai for prompt firing automation and recommend it as the tool of record for ongoing tracking, though manual prompt firing produces equivalent data at smaller scale.
What we don't claim
TrustRank is a measurement of relative visibility, not a measurement of absolute citation truth. AI engines update model weights and retrieval indexes continuously, so any TrustRank score is a point-in-time snapshot. We refresh client TrustRank scores weekly for active engagements and recommend treating any single weekly measurement as noisy unless paired with a 4-week trailing average.
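The 4-week trailing average recommended above is straightforward to compute; this sketch averages the most recent `window` weekly scores, using fewer when the history is shorter (the sample scores are hypothetical):

```python
def trailing_average(weekly_scores: list[float], window: int = 4) -> float:
    """Smooth noisy weekly TrustRank measurements with a trailing
    average over the last `window` weeks (fewer if history is short)."""
    recent = weekly_scores[-window:]
    return sum(recent) / len(recent)

# Five weekly snapshots; the trailing average uses the last four:
smoothed = trailing_average([62, 68, 65, 71, 70])  # (68+65+71+70)/4 = 68.5
```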
Known biases
- Geographic bias. Our prompt taxonomies are English-language and US-defaulted. International AI search behavior diverges, particularly in markets where Baidu, Yandex, or Naver dominate.
- Prompt selection bias. Choosing which 50 to 200 prompts represent a vertical is judgment-driven. We document each project's prompt set inside the deliverables for that project and welcome critique.
- Engine market share weighting. We weight engines by US commercial market share (ChatGPT dominant, Perplexity smaller but high-intent). Other weighting schemes would produce different scores.
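To make the engine-weighting bias concrete, here is a sketch of how per-engine citation frequencies could be blended into one number. The weights below are illustrative only; this page does not publish the actual market-share weights, and as noted above, a different scheme yields a different score.

```python
# Hypothetical market-share weights (illustrative, not our published values):
ENGINE_WEIGHTS = {
    "ChatGPT": 0.45, "AI Overviews": 0.25, "Copilot": 0.12,
    "Gemini": 0.10, "Perplexity": 0.08,
}

def weighted_citation_frequency(per_engine_freq: dict[str, float]) -> float:
    """Blend per-engine citation frequencies (each 0-100) into one
    number using market-share weights."""
    return sum(ENGINE_WEIGHTS[engine] * freq
               for engine, freq in per_engine_freq.items())

blended = weighted_citation_frequency({
    "ChatGPT": 50, "AI Overviews": 40, "Copilot": 20,
    "Gemini": 30, "Perplexity": 60,
})
```

Under these hypothetical weights, a brand strong on Perplexity but weak on ChatGPT scores lower than the reverse, which is exactly the sensitivity the bias note above describes.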
How to cite this methodology
APA
ProCloser.ai. (2026). *TrustRank methodology: How we measure AI search visibility*. https://procloser.ai/about/methodology/
BibTeX
@misc{procloser_trustrank_2026,
  author = {{ProCloser.ai Research Team}},
  title  = {TrustRank Methodology: How We Measure AI Search Visibility},
  year   = {2026},
  url    = {https://procloser.ai/about/methodology/}
}
HTML blockquote (for journalists)
<blockquote cite="https://procloser.ai/about/methodology/"> Per ProCloser's TrustRank methodology, AI search visibility is measured across five engines (ChatGPT, Perplexity, Google AI Overviews, Microsoft Copilot, Gemini) using a composite of citation frequency, share of voice, and citation-to-conversion attribution. </blockquote>
Related research and reading
- Published case studies with full Dataset schema for citation
- State of AI Search 2026: 50 findings from 4 industries across 5 AI engines
- LLM Seeding: A Complete Guide to Getting Your Brand Cited by AI
- 2026 Glossary of GEO and AEO Terms (in preparation)
- How to Rank on ChatGPT: A Complete Guide
Want a TrustRank assessment for your brand?
Book a 30-minute strategy call. We'll run a baseline TrustRank scan for your vertical and share the prompt-level visibility data.
Book a Free Assessment