I thought this would be simple.
Run a few prompts in ChatGPT.
See if the brand shows up.
Done.
It isn’t.
The first thing that feels off
You run a query like:
best SEO tools for SaaS
You get a list.
Cool.
Run it again a few minutes later…
Different list.
Try the same thing in Gemini or Perplexity?
Different again.
At that point you realise:
👉 there isn’t really a “result” to track
This isn’t Google
With Google, you can say:
- I rank #3
- I moved to #5
- competitor overtook me
There’s something stable underneath.
Even if it fluctuates, it’s measurable.
With AI tools?
You’re not ranking.
You’re being… picked.
Or not.
And that decision isn’t consistent.
The uncomfortable part
You can look like you’re doing fine if you test the “right” query.
Then completely disappear if you change wording slightly.
Example:
- “best SEO tools” → you show up
- “SEO tools for startups” → gone
- “what should I use for SEO” → also gone
Same intent. Different outcome.
So what is your “visibility” actually?
It’s not position, it’s frequency
This is the mental shift that took me a while.
You don’t have a ranking.
You have something more like:
“how often do I show up across a set of queries”
Which is way messier.
Because now you’re dealing with:
- query variation
- different models
- timing
- randomness (or at least it feels like it)
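To make "frequency, not position" concrete, here's a minimal sketch of the metric. Everything in it is illustrative: `ExampleBrand` and the `responses` dict are made-up stand-ins for answers you'd collect yourself from each tool.

```python
# Illustrative sketch: visibility as frequency, not position.
# `responses` maps (query, tool) -> the answer text you collected.
# The brand name and sample answers below are invented.

BRAND = "ExampleBrand"

responses = {
    ("best SEO tools", "chatgpt"): "Try Ahrefs, Semrush, ExampleBrand...",
    ("best SEO tools", "gemini"): "Popular picks: Ahrefs, Moz...",
    ("SEO tools for startups", "chatgpt"): "Semrush, Ubersuggest...",
    ("SEO tools for startups", "gemini"): "ExampleBrand, Moz...",
}

# Count the answers that mention the brand at all (case-insensitive).
mentions = sum(BRAND.lower() in text.lower() for text in responses.values())
visibility = mentions / len(responses)

print(f"Mentioned in {mentions}/{len(responses)} answers ({visibility:.0%})")
```

One number ("mentioned in 50% of answers") instead of a position — that's the whole shift.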
Why manual checking falls apart quickly
You can do it manually.
But to get anything close to reality, you’d need:
- multiple queries (not just 1–2)
- multiple tools (ChatGPT, Gemini, Perplexity)
- repeat it regularly
Even with something small like:
- 10 queries
- across 3 tools

That's already 30 checks.
Now do that every week and try to compare results.
You won’t.
Or you’ll do it once and never again.
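The check matrix itself is trivial to enumerate in code — the hard part is the collection. As a sketch: `ask` below is a placeholder, not a real client; in practice you'd swap in an actual API call per tool.

```python
# Sketch of the query x tool check matrix.
# `ask` is a hypothetical placeholder -- replace with real API calls.
import itertools

QUERIES = ["best SEO tools", "SEO tools for startups", "what should I use for SEO"]
TOOLS = ["chatgpt", "gemini", "perplexity"]

def ask(tool: str, query: str) -> str:
    """Placeholder: swap in each tool's real API/SDK call here."""
    return f"(answer from {tool} for: {query})"

checks = list(itertools.product(QUERIES, TOOLS))
print(f"{len(QUERIES)} queries x {len(TOOLS)} tools = {len(checks)} checks")

# Collect one answer per (query, tool) pair.
results = {(q, t): ask(t, q) for q, t in checks}
```

Three queries across three tools is already nine calls; the real list of queries is much longer, which is why "every week, by hand" never survives contact with reality.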
The part nobody talks about: query space
Most people test a couple of obvious prompts.
Real users don’t.
They type things like:
- “what’s the best tool for SEO if I’m just starting”
- “alternatives to ahrefs”
- “tools for content optimisation”
Each of those hits a slightly different path.
And AI responds differently depending on how easy your brand is to “fit” into that answer.
So what are you even optimising for?
This is where it clicked for me:
You’re not optimising for ranking anymore.
You’re optimising for being easy to include.
Which depends on things like:
- how clearly your product is positioned
- whether your content is easy to extract from
- how often your brand shows up elsewhere
Not just “do you have a good page”.
The real problem: no feedback loop
This is the bit that actually makes it hard.
You:
- change something on your site
- maybe add content
- maybe build a few links
Then what?
You don’t know if it helped.
Because:
- results change
- queries vary
- nothing is tracked properly
So you’re basically guessing.
What actually helped (for me at least)
I stopped thinking in terms of:
“am I showing up?”
and started thinking:
“how often do I show up across a set of queries over time?”
That’s a completely different thing to measure.
And it’s not something you can realistically do in your head or in a spreadsheet.
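What "over time" looks like in practice is just dated snapshots of the same frequency number. A sketch, with invented numbers:

```python
# Sketch: visibility over time as dated snapshots.
# Each entry is (date, answers mentioning the brand, total checks).
# All figures below are invented for illustration.
from datetime import date

snapshots = [
    (date(2024, 6, 3), 4, 30),
    (date(2024, 6, 10), 6, 30),
    (date(2024, 6, 17), 9, 30),
]

for day, hits, total in snapshots:
    print(f"{day}: {hits}/{total} = {hits / total:.0%}")

# The question that matters: is the latest rate above the first?
improving = snapshots[-1][1] / snapshots[-1][2] > snapshots[0][1] / snapshots[0][2]
```

The point isn't the code — it's that this is a time series, and time series need tooling, not memory.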
That’s why tools like https://searchscore.io started making more sense to me.
Not because I wanted “another SEO tool”.
But because this problem isn’t really SEO in the traditional sense.
It’s closer to:
- monitoring
- pattern detection
- trend tracking
Bigger picture
This feels like an early-stage problem.
The same way SEO was messy before rank trackers existed.
Right now most people:
- don’t measure this
- assume they show up
- or test once and move on
Which means there’s a weird window where:
👉 if you do measure it properly, you’re ahead by default
Final thought
If you haven’t checked this properly yet, do it.
Not once.
Try a handful of queries across a couple of tools.
The result is usually the same:
👉 you’re either barely there… or not there at all
And that’s a much bigger problem than most people realise.