Sample Scorecard

What the AI Search Recommendation Audit scorecard looks like

This sample uses two real buyer-intent prompt families from a recent audit pass: AI meeting assistants for sales teams and newsletter platforms for creators.

Back to the audit service

Scoring rubric

  • Top recommendation = 3
  • Top 3 = 2
  • Mentioned lower = 1
  • Omitted = 0
  • +1 if an owned brand URL is cited directly
  • -1 if competitor-owned content is cited above the target
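The rubric above can be sketched as a small scoring function. This is an illustrative sketch only; the function and field names are hypothetical and not part of the audit tooling.

```python
# Points awarded for each rank bucket, per the rubric above.
RANK_POINTS = {
    "top recommendation": 3,
    "top 3": 2,
    "mentioned lower": 1,
    "omitted": 0,
}

def score(rank_bucket: str, owned_citation: bool, competitors_above: int) -> int:
    """Score one engine/category row of the scorecard."""
    points = RANK_POINTS[rank_bucket]
    if owned_citation:
        points += 1   # +1: an owned brand URL is cited directly
    if competitors_above > 0:
        points -= 1   # -1: competitor-owned content is cited above the target
    return points
```

For example, the MeetGeek row for ChatGPT works out as: mentioned lower (1) + owned citation (+1) + five competitors cited above (-1) = 1.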

Sample results

Engine  | Category             | Target   | Rank bucket        | Owned citation | Competitors above target              | Score
ChatGPT | AI meeting assistant | MeetGeek | mentioned lower    | yes            | Gong, Fireflies, Otter, Fathom, Avoma | 1
Gemini  | AI meeting assistant | MeetGeek | top 3              | yes            | Gong, Fireflies                       | 2
Claude  | AI meeting assistant | MeetGeek | mentioned lower    | yes            | Gong, Fireflies, Avoma, Fathom        | 1
ChatGPT | Newsletter platform  | Beehiiv  | top recommendation | yes            | none                                  | 4
Gemini  | Newsletter platform  | Beehiiv  | top recommendation | yes            | none                                  | 4
Claude  | Newsletter platform  | Beehiiv  | top recommendation | yes            | none                                  | 4

Category total: MeetGeek

4 / 12

Weak visibility on the sales-team meeting-assistant prompt family. The pattern is not total absence: MeetGeek is mentioned but loses the frame to coaching, revenue-intelligence, and CRM-depth competitors.

Category total: Beehiiv

12 / 12

Strong visibility on creator-newsletter prompts. The winner pattern is clear positioning around growth, monetization, referrals, and creator-first workflow.

What buyers learn from a scorecard like this

  • whether the issue is omission, weak ranking, or weak framing
  • which competitors keep taking the shortlist
  • which buyer-intent pages or comparison angles are missing
  • which proof blocks need to exist before more content spend makes sense

Want a version built for your category?

We can run the same prompt family against your brand, score what the engines recommend, and hand back the first pages and proof blocks to ship.

Request the audit