
Your buyers are asking ChatGPT, not Google. Make sure it tells them your name — not your competitor's.

Generative Engine Optimization is what gets your brand cited inside the answer when your buyers ask ChatGPT, Perplexity, Gemini, or Google AI Overviews. Your competition has not figured this out yet. You can be the brand that ships first.

BOOK YOUR 30-MIN AUDIT CALL

The moment your category broke

You typed your own brand name into ChatGPT last week. You asked it to recommend a vendor in your category. You watched it answer with three names — and yours was not one of them. That is the moment the rules changed for you.

Roughly 40% of high-intent queries that used to land on a Google search result now land on an AI answer. The buyer reads three sentences and a citation list, and they decide. They never see your homepage. They never see your case studies. They never see the £80k of content marketing you shipped in 2024. If your name is not in those three sentences, you are not in the consideration set.

The agencies you are paying for SEO are still optimising for blue-link rankings. That work matters less every month. The optimisation that decides whether AI cites you — schema, entity authority, citable original data, llms.txt, machine-readable depth — is a different discipline. It has a name. It is called Generative Engine Optimization. This is what I do.

What you actually get

1. The audit that tells you where you stand

Open ChatGPT and ask it the ten questions your buyers ask. You will see your brand cited zero times, two times, or twelve times. Then ask Perplexity, Gemini, and Bing Copilot the same questions. The shape of the answer is your starting line. The audit is a 30-page report covering: your AI citation share across the four major surfaces, your entity health (how the LLMs see your brand right now, including any disambiguation or hallucination problems), your schema gaps, your llms.txt and AI crawler readiness, and a prioritised list of the fixes that will move the needle in the next 90 days.

2. The schema rollout your dev team has been deferring

Schema.org structured data is the language LLMs use to understand your site. Your competitors have it; you mostly do not. Organization, Service, FAQPage, HowTo, Product, Article — each one has rules and each one has gotchas. The rollout is a tracked programme: I write the JSON-LD, your dev team merges the PRs, the AI surfaces re-crawl over 4–6 weeks, and you watch the citation share rise.
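As a flavour of what the rollout ships: a minimal FAQPage block in JSON-LD, the format Google and the AI crawlers read. The question text here is illustrative; the real programme generates one block per page type, validated before the PR goes up.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long until I see AI citations?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Schema and crawler-permission fixes typically show up in 4 to 6 weeks; entity work takes 8 to 12."
      }
    }
  ]
}
```

The block sits in a `<script type="application/ld+json">` tag in the page head. One gotcha as an example of why each type has rules: FAQPage answers must match the visible on-page text, or the markup gets ignored.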

3. The entity graph that makes your brand legible to LLMs

Your brand is an entity to ChatGPT. So is every product you sell, every founder you have, every service you offer. Right now those entities are fragmented across your site, your LinkedIn, your Crunchbase, your Wikipedia (if it exists), your Knowledge Panel (if it exists). The entity work pulls them into one coherent graph the LLMs can resolve cleanly. The result: when someone asks "who builds X for Y", your brand surfaces with the right context, not someone else's.
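The mechanical core of the entity work is giving your organisation a stable identifier and pointing every external profile at it. A sketch, with placeholder URLs standing in for your real profiles:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#organization",
  "name": "Example Agency Ltd",
  "url": "https://example.com",
  "sameAs": [
    "https://www.linkedin.com/company/example-agency",
    "https://www.crunchbase.com/organization/example-agency",
    "https://en.wikipedia.org/wiki/Example_Agency"
  ]
}
```

The `@id` gives every other schema block on your site (Service, Article, Product) one canonical node to reference, and `sameAs` tells the LLMs that the LinkedIn page, the Crunchbase listing, and the Wikipedia article are all the same entity — which is exactly the disambiguation they currently cannot do for you.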

4. The citable research nobody else in your category will do

The fastest path to AI citation is publishing the original data the LLMs want to cite. Survey your customers, analyse your industry, publish the numbers, mark them up with structured data. You become the source. Your competitors quote you. The LLMs follow the quotes. This is the playbook the Anthropic Economic Index uses; the same shape works for your category at a smaller scale.

5. The llms.txt and AI crawler permissions you have been ignoring

Anthropic, OpenAI, Perplexity, and Google all run crawlers distinct from Googlebot. A blanket disallow rule or an over-eager CDN bot filter is probably blocking them without you noticing — the same accidental blocking that cost some of the biggest brands their AI visibility in 2024. The fix is a few lines of robots.txt and a properly structured llms.txt. The work is small; the impact is real.
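The shape of the fix, sketched. The user-agent tokens below are the ones the vendors publish (GPTBot for OpenAI, ClaudeBot for Anthropic, PerplexityBot, Google-Extended for Gemini training); always verify the current tokens against each vendor's own documentation before shipping, as they do change.

```
# robots.txt — allow the major AI crawlers explicitly
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```

And llms.txt, per the proposed spec, is a plain markdown file at your site root: an H1 with your name, a one-line blockquote summary, then linked sections pointing crawlers at the pages that matter. Small file, disproportionate effect on how cleanly the LLMs read you.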

6. The mention unification that earns you trust

LLMs assess brand trust partly through citation consistency across the open web. Your name on Crunchbase says one thing; your G2 listing says another; your Wikipedia draft says a third. We standardise the canonical version of every brand fact and propagate it across the directories, review sites, and aggregators that feed the training data. Boring work; outsized lift.

Why this matters now, not in six months

You can feel the shift in your own behaviour. You google less than you did a year ago. You ask Claude. You ask Perplexity. You let Gemini summarise. So does your buyer. The brands that ship GEO this quarter are the brands that own their category in the AI-search era. The brands that ship in 2027 will be playing catch-up against incumbents who are already embedded in the LLM training data.

The window is the same shape as the early 2010s mobile-web window. The brands that took mobile seriously in 2011 won the next decade. The ones that waited until 2014 spent five years rebuilding what their competitors already had. GEO is that window again, just compressed.

Why me, specifically, for this

I have shipped real GEO work, on production sites, with measurable AI-citation movement — on Deluxe Astrology, on HostList, and on this site. I built the AI-citation tracking pipeline that powers the dashboard you are about to see (live SERP plus DataForSEO's AI Optimization endpoints, refreshed every three days). I run an SEO programme at Seahawk Media that has shipped 5,000+ sites; I know which schema actually moves the needle and which is theatre.

The agencies adding "AI SEO" to their service menu in 2026 are still selling the same content marketing they sold in 2018, with a Perplexity logo on the deck. The technical depth — entity graphs, schema completeness audits, llms.txt, AI-citation tracking — is missing from most of those decks. It is what your programme actually needs.

What this costs

The audit — £2,500, 4 weeks

You get the 30-page report, the AI-citation baseline across all four surfaces, the schema gap list, the entity-health diagnosis, and the 90-day priority roadmap. If you do not engage further, you keep the report and run the work in-house. Roughly half the audits I run convert to a programme; the other half ship the work themselves with the report as their brief, and that is a perfectly fine outcome too.

The programme — £6,000–£18,000 per quarter

The programme price depends on category breadth (one product line versus fifteen), language and region scope (UK only versus UK + US + EU), and whether the original-research stream is included. Every programme includes ongoing tracking, monthly reports against AI-citation share-of-voice, and a quarterly executive review. The minimum commitment is three quarters, because the AI surfaces re-crawl on a 4–8 week cycle and shorter engagements do not let the data settle.

See your baseline before you book

Want a 30-second read on where you actually sit before you book the call? The free AI Citation Checker derives five buyer queries for your category and reports whether your brand appears in the top 10 organic results that ChatGPT and Perplexity pull from, plus whether Google AI Overviews are running on those queries today. No login, no email, takes about 20 seconds.

When you're ready

Book a 30-minute call. Tell me your category and the three buyers you most want ChatGPT to recommend you to. By the end of the call you have a sense of where your AI-citation baseline is sitting, a price range, and a delivery window. No deck. No qualification screen. Real numbers.

Common questions

What is GEO actually doing under the hood?

Three things, in order. One — making your site structurally legible to LLMs through schema and entity work, so when ChatGPT crawls the web it understands what you do. Two — giving the LLMs reasons to cite you, through original data and citable research nobody else in your category is publishing. Three — fixing the brand-mention inconsistencies and crawler-permission gaps that cost you visibility silently.

Will this hurt my Google rankings?

No. The schema, entity, and content work that lifts AI citation is the same work Google is using to seed AI Overviews. Traditional rankings tend to lift alongside AI visibility, not in opposition. The only way to do GEO badly is to skip the technical depth and chase volume.

How long until I see citations?

Schema and crawler-permission fixes show up in 4–6 weeks. Entity graph improvements show up in 8–12 weeks. The original-research compounding effect — being cited because your data is the data others cite — takes 6–12 months. The audit baseline gives you the starting line; the quarterly reports show movement.

Do you do this for B2C as well?

Yes, with one caveat: B2C in low-consideration categories (consumer goods under £100, fast-fashion, food and drink) has less GEO leverage because the buyer is not asking ChatGPT for recommendations at the same rate. B2B, considered B2C, healthcare, finance, and professional services are where GEO matters most this quarter.

What if my category is dominated by one player ChatGPT always cites?

That is the most common version of the brief. The work is then about becoming the second name on the citation list, then the first alternative. Long-tail wins compound; the incumbent rarely defends every entity in the category, and the gaps are where the programme moves first.

Is the audit refundable if you find nothing?

I have never run an audit that found nothing. The schema gaps alone are usually 40+ items in a typical mid-market site. If you genuinely have a perfect baseline I will tell you so on the kickoff call before taking the engagement.