SEO taught us to fight for keywords. Now AI is changing the game. People don’t just search. They ask.
And when they ask ChatGPT or Claude about your category, your name might not even come up. That’s a new kind of blind spot.
These 11 questions will help you evaluate how your brand shows up inside large language models (LLMs) like ChatGPT, Claude, Gemini and Perplexity.
1. What happens when someone asks: “What’s the best [tool/product/service]?”
This is where real discovery starts.
Ask ChatGPT or Claude a buyer-style question like “What’s the best CRM for startups?” or “Best Shopify analytics tools.”
Are you mentioned? Are your competitors?
If you’re not in the mix, it’s the AI equivalent of missing from page one of Google.
2. Does the model actually understand what you do?
Ask:
“What is [your company name]?”
“Can you summarize what [company] offers?”
Do the answers sound like your website? Or are they vague, outdated guesses?
If the model doesn’t get you, your audience won’t either.
3. Is your value proposition showing up in answers?
What makes you different? And is that showing up when someone asks for comparisons or alternatives?
Try asking:
“What’s the difference between [your company] and [competitor]?”
“What’s better for [pain point] — [your product] or [other brand]?”
Your unique edge should be front and center.
4. Are you showing up in high-intent, buyer-style queries?
People phrase things like:
“Best tools for launching a newsletter”
“Alternatives to Notion for project management”
“Which email platforms are easiest for beginners?”
These are early funnel moments. LLMs often give lists or top picks. You want to be in those.
5. Are old or irrelevant sources shaping how AI sees you?
LLMs pull from the public internet. That includes ancient press releases, scraped profiles, and blog posts you forgot existed.
If your old messaging is what’s ranking in AI answers, that’s what people believe.
6. Are your target keywords represented in AI summaries?
Let’s say you’ve worked hard to rank for “AI brand monitoring” or “SEO for ChatGPT.” Great, but does ChatGPT actually say those words when describing you?
Ask yourself:
- Are the terms you care about being used?
- Are they associated with your brand or a competitor’s?
7. Do you show up differently across models?
ChatGPT, Claude, Gemini, and Perplexity aren’t the same. They pull from different datasets, use different recency rules, and behave differently depending on how users phrase their questions.
You need visibility across all of them.
One model might favor you. Another might forget you exist.
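At its core, a cross-model check is just the same buyer-style prompt sent to each model, with a scan for your brand in each answer. A minimal sketch, assuming a stand-in `ask` function with made-up canned answers (a real version would call each provider’s API, and "Acme CRM" is a hypothetical brand):

```python
# Minimal cross-model visibility check.
# `ask` is a placeholder for real API calls to each provider;
# here it returns canned answers so the logic is runnable.

def ask(model: str, prompt: str) -> str:
    canned = {
        "chatgpt": "Top CRMs for startups: HubSpot, Pipedrive, Acme CRM.",
        "claude": "Popular picks include HubSpot and Pipedrive.",
        "gemini": "Consider HubSpot, Zoho, or Acme CRM.",
    }
    return canned[model]

def visibility(brand: str, prompt: str, models: list[str]) -> dict[str, bool]:
    """Return, per model, whether the brand is mentioned at all."""
    return {m: brand.lower() in ask(m, prompt).lower() for m in models}

report = visibility("Acme CRM", "What's the best CRM for startups?",
                    ["chatgpt", "claude", "gemini"])
print(report)
```

Here the report would show two models mentioning the brand and one omitting it, which is exactly the gap you want surfaced.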
8. Are you showing up in advice-based queries, not just tool lists?
People often start with problems, not products.
“How do I increase organic reach without spending on ads?”
“How can I monitor my brand online?”
If your tool helps solve that pain, but you’re not being suggested, that’s a missed opportunity.
9. Do you know when your visibility shifts?
It’s not enough to check once. LLMs update their knowledge, sometimes weekly. Competitors can leapfrog you fast.
You need to monitor how and when your brand moves up (or down) in AI-generated responses. Think of it like rank tracking but for models instead of search engines.
10. Is your brand ever misrepresented or confused with others?
LLMs can hallucinate. They might mix you up with a similarly named brand, misstate your pricing, or assign you features you don’t offer.
That’s a problem, especially if your sales team hears it on calls and you don’t know why.
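Catching name mix-ups can be partially automated with fuzzy string matching: flag mentions that are suspiciously close to your brand name without being it. A minimal sketch using Python’s standard-library `difflib` (the brand names and the 0.8 threshold are illustrative assumptions):

```python
from difflib import SequenceMatcher

def likely_confusion(mentioned: str, brand: str, threshold: float = 0.8) -> bool:
    """Flag names that nearly match the brand but aren't it (possible mix-up)."""
    if mentioned.lower() == brand.lower():
        return False  # exact match is a correct mention, not confusion
    similarity = SequenceMatcher(None, mentioned.lower(), brand.lower()).ratio()
    return similarity >= threshold

print(likely_confusion("Acme Insight", "Acme Insights"))   # True: near-miss name
print(likely_confusion("Acme Insights", "Acme Insights"))  # False: exact match
print(likely_confusion("Zenith CRM", "Acme Insights"))     # False: unrelated
```

Run against the names an LLM actually mentions, this gives you a shortlist of answers worth reading by hand before your sales team hears them on a call.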
11. Do you have a strategy to influence how LLMs see you?
If you’re not represented how you want, what’s the plan?
You can’t just keyword stuff a blog and hope AI catches up. You need to shape the sources LLMs pull from:
- Structured content
- Mentions on trusted sites
- Clear, updated descriptions
- Relevant, AI-friendly formats
And it all starts with knowing where you stand.
You can’t influence what you don’t track.
LLMs reach millions of people a day. They shape how buyers learn, compare and choose. Murmur helps you monitor how your brand shows up inside AI platforms and notifies you when things change.
Because checking the models manually is a pain. We automated it.