
How I use Claude Code for technical SEO audits in 2026

I run organic search across 91,000 pages on Deluxe Astrology, 28,000 programmatic-SEO pages on HostList.io, and the wider Seahawk Media client portfolio. The audit work that used to take me a full day per site now takes 90 minutes inside Claude Code, and the output is more rigorous than the manual version. This is the actual workflow I run, the MCP stack behind it, the prompt I use, and the parts of an SEO audit Claude Code still cannot do well.

If you are weighing whether AI-assisted audits are real, this post is the operator answer: yes, with discipline, and only if you treat Claude as the senior engineering collaborator rather than the senior strategist.

Why Claude Code beats traditional audit tooling for the boring 70%

A traditional technical SEO audit means Screaming Frog crawling the site, Search Console data pulled for the indexability and CWV view, Lighthouse run a few times across representative templates, schema markup checked manually against a validator, and finally a human assembling all of that into a prioritised remediation list with severity and traffic impact. The crawling and parsing are roughly 70 percent of the wall-clock time. The judgement layer is the remaining 30 percent.

Claude Code with the right MCP stack collapses the 70 percent. Crawl execution, parsing, deduplication, schema validation, and report drafting all happen in a single session against your repository or your hosted site. The 30 percent that requires senior judgement (which issues actually matter for this specific business, what to prioritise given runway and team capacity) stays human. The combined wall-clock time drops by roughly 6x at the same or higher quality.

That is the entire pitch. Everything below is the working detail of how to make it actually deliver.

The MCP stack I actually use

Other writeups on Claude Code for SEO use a single MCP (usually Puppeteer) and stop there. The honest stack for serious audit work is wider:

Filesystem MCP

Built into Claude Code by default. I use it to dump audit outputs into a working directory, then have Claude reason across them in the same session. The Filesystem MCP is also how I feed Claude existing reports, sitemaps, robots.txt, and any pre-existing technical documentation.

Puppeteer MCP

Browser automation. Crawls the site, takes screenshots of representative templates, extracts rendered HTML for JavaScript-heavy pages, captures Core Web Vitals via the Performance API. The Puppeteer-driven render is the closest thing to what Google actually sees, more accurate than static HTML parsing.
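
To make that concrete, here is a minimal sketch of the kind of per-URL capture this step performs: rendered HTML plus a performance reading. The function name and return shape are illustrative, not the MCP's actual interface.

    import puppeteer from "puppeteer";

    // Minimal per-URL capture: rendered DOM, response status, and an LCP reading.
    async function renderAndCapture(url: string) {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      const response = await page.goto(url, { waitUntil: "networkidle0" });

      const html = await page.content(); // rendered DOM, not the static source
      const lcpMs = await page.evaluate(
        () =>
          new Promise<number>((resolve) => {
            new PerformanceObserver((list) => {
              const entries = list.getEntries();
              resolve(entries[entries.length - 1].startTime);
            }).observe({ type: "largest-contentful-paint", buffered: true });
          })
      );

      await browser.close();
      return { url, status: response?.status(), html, lcpMs };
    }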

Postgres MCP (or SQLite for smaller jobs)

Critical for serious audits. Search Console data exported as CSV gets loaded into a Postgres table. Claude then runs SQL queries against it: which URLs are losing traffic month-over-month, which queries shifted, which pages dropped out of the index. The aggregation work that takes 20 minutes in Excel takes 30 seconds in SQL.
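
For illustration, the load step is roughly this. The table and column names are placeholders for the standard Search Console export fields, so adjust them to whatever your CSV actually contains.

    import { Client } from "pg";

    // Illustrative table shape for a Search Console performance export.
    const ddl = `
      CREATE TABLE IF NOT EXISTS gsc_performance (
        date        date,
        page        text,
        query       text,
        clicks      integer,
        impressions integer,
        position    numeric
      );`;

    async function setup() {
      const client = new Client({ connectionString: process.env.DATABASE_URL });
      await client.connect();
      await client.query(ddl);
      await client.end();
    }

    // Loading the CSV itself is a single COPY, e.g. from psql:
    //   \copy gsc_performance FROM 'gsc-export.csv' WITH (FORMAT csv, HEADER true)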

Brave Search or similar

Current SERP comparison. When auditing for ranking regressions, comparing what currently ranks for the target queries against what the audited site ships is half the diagnostic. A SERP-fetching MCP turns this into a single tool call rather than a manual tab-switching exercise.

Bash / shell

Claude Code runs Lighthouse CLI, curl tests for redirect chains, and any custom scripts I have for the project. Treating the shell as another tool rather than a separate environment matters; the audit happens in one continuous session.
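
The redirect-chain test is simple enough to show inline. A curl loop does it in the shell; the sketch below is a TypeScript equivalent (Node 18+ fetch) that records every hop instead of following them silently.

    // Follow redirects manually so each hop stays visible.
    async function redirectChain(startUrl: string, maxHops = 10): Promise<string[]> {
      const hops = [startUrl];
      let current = startUrl;
      for (let i = 0; i < maxHops; i++) {
        const res = await fetch(current, { redirect: "manual" });
        const location = res.headers.get("location");
        if (res.status < 300 || res.status >= 400 || !location) break;
        current = new URL(location, current).toString();
        hops.push(current);
      }
      return hops; // hops.length - 1 is the redirect count; anything above 2 gets flagged
    }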

Pre-audit setup: 90 seconds of context

Before any crawl runs, Claude needs to know what kind of site this is and what the audit is for. I open Claude Code in the project directory (or a fresh directory if auditing an external site) and prime the session with a short brief:

Three sentences on the business: what they sell, who their audience is, what their organic-traffic target is for the next 12 months. Two sentences on the site shape: stack, page count, indexability shape, content cadence. One sentence on the audit deliverable: what should be true at the end of the audit that is not true today.

That 90 seconds of context-loading is the difference between Claude generating a generic checklist audit and Claude producing an audit calibrated to the specific business. Skip it and you get the same generic output every other Claude Code SEO writeup demonstrates.

Phase 1: Technical crawl

Claude runs Puppeteer against the site, follows internal links to a configurable depth (I default to 3), and captures: rendered HTML per URL, response codes, redirect chains, canonical tags, hreflang annotations, schema markup blocks, indexability signals (robots meta, X-Robots-Tag, robots.txt), and Core Web Vitals.

Output gets dumped to ./audit-output/crawl/ as one JSON file per URL plus a summary.json across the corpus. For sites under 500 pages this completes in under 10 minutes. For larger sites I cap the crawl at 1,000 representative URLs and sample the rest.
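
The per-URL JSON is flat and boring by design, which is what makes the later phases easy. Something like this, with field names as placeholders rather than a fixed schema:

    // A plausible per-URL crawl record; every later phase reads from these files.
    interface CrawlRecord {
      url: string;
      status: number;
      redirectChain: string[];                      // every hop taken to reach the final URL
      canonical: string | null;                     // href of <link rel="canonical">, if any
      hreflang: { lang: string; href: string }[];   // rel="alternate" hreflang annotations
      robotsMeta: string | null;                    // content of <meta name="robots">
      xRobotsTag: string | null;                    // X-Robots-Tag response header
      schemaBlocks: unknown[];                      // parsed JSON-LD blocks
      metrics: { lcpMs?: number; cls?: number; ttfbMs?: number };
    }

    // summary.json aggregates across the corpus.
    interface CrawlSummary {
      crawledAt: string;
      urlCount: number;
      statusCounts: Record<string, number>;         // e.g. { "200": 940, "301": 45, "404": 15 }
      issues: { type: string; urls: string[] }[];
    }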

Common patterns the crawl surfaces immediately: redirect chains longer than 2 hops, canonical tags pointing to non-canonical URLs, hreflang reciprocity breaks, schema validation errors. Each one of these takes hours of manual work to find at scale; the crawl finds them in the first 10 minutes.
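
Two of those checks, sketched against the record shape above. These are simplified (no trailing-slash or protocol normalisation), but the logic is the whole trick:

    // Canonical tags pointing at non-canonical URLs: the target is missing,
    // not a 200, or canonicalises somewhere else itself.
    function findCanonicalMismatches(records: CrawlRecord[]): CrawlRecord[] {
      const byUrl = new Map(records.map((r) => [r.url, r]));
      return records.filter((r) => {
        if (!r.canonical || r.canonical === r.url) return false;
        const target = byUrl.get(r.canonical);
        return !target || target.status !== 200 ||
          (target.canonical !== null && target.canonical !== target.url);
      });
    }

    // Hreflang reciprocity: if A lists B as an alternate, B must list A back.
    function findHreflangBreaks(records: CrawlRecord[]): { from: string; to: string }[] {
      const byUrl = new Map(records.map((r) => [r.url, r]));
      const breaks: { from: string; to: string }[] = [];
      for (const r of records) {
        for (const alt of r.hreflang) {
          const target = byUrl.get(alt.href);
          if (!target?.hreflang.some((a) => a.href === r.url)) {
            breaks.push({ from: r.url, to: alt.href });
          }
        }
      }
      return breaks;
    }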

Phase 2: Search Console data analysis

I export Search Console data as CSV (queries, pages, dates, clicks, impressions, position) and load it via the Postgres MCP. Claude then runs a sequence of queries answering specific operator questions:

Which pages are losing the most traffic month-over-month, ranked by absolute click loss.

Which queries dropped out of the top 20 in the last 90 days.

Which page-query pairs have impressions but near-zero clicks (CTR opportunity).

Which pages are indexed but receive zero impressions in the last 30 days (probable thin-content or quality-gate candidates).

Which pages have multiple competing canonical signals across the index.

Each of these used to be a separate tab of pivot tables in Excel. They are now a sequence of SQL queries Claude generates, runs, and synthesises into a single section of the report.
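
One of those queries, roughly as Claude generates it, against the illustrative gsc_performance table from earlier. The current month is always partial, so this version compares the last complete month with the one before it:

    // Pages ranked by absolute month-over-month click loss.
    const momClickLoss = `
      WITH monthly AS (
        SELECT page, date_trunc('month', date) AS month, SUM(clicks) AS clicks
        FROM gsc_performance
        GROUP BY page, date_trunc('month', date)
      )
      SELECT cur.page, prev.clicks - cur.clicks AS clicks_lost
      FROM monthly cur
      JOIN monthly prev
        ON prev.page = cur.page
       AND prev.month = cur.month - interval '1 month'
      WHERE cur.month = date_trunc('month', current_date) - interval '1 month'
      ORDER BY clicks_lost DESC
      LIMIT 50;`;

    // const { rows } = await client.query(momClickLoss);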

Phase 3: Schema markup audit

Claude takes the schema blocks captured in Phase 1, validates them against schema.org expected types per page archetype (Organization, BreadcrumbList, Article, BlogPosting, Product, FAQPage, HowTo, LocalBusiness), and flags issues:

Missing required properties (sameAs on Organization, image on BlogPosting, aggregateRating with no actual rating data).

Type mismatches (LocalBusiness schema on pages that are not local businesses, Product schema on category pages).

Stale references (sameAs URLs pointing at deleted social profiles, image URLs returning 404).

Outdated entity relationships (about and mentions arrays missing the actual entities the page covers).

I have caught 30+ schema bugs this way on sites that thought their schema was fine. The validator finds the syntactic ones; Claude finds the semantic ones.
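
The mechanical layer of that check is just a lookup table plus a walk over the JSON-LD blocks from Phase 1. The required-property lists below are illustrative, not a complete schema.org or rich-results ruleset:

    // Per-type property expectations; extend per page archetype.
    const requiredProps: Record<string, string[]> = {
      Organization: ["name", "url", "logo", "sameAs"],
      BlogPosting:  ["headline", "image", "datePublished", "author"],
      Product:      ["name", "image", "offers"],
      FAQPage:      ["mainEntity"],
    };

    // Returns the properties a JSON-LD block is missing for its declared @type.
    function missingProps(block: Record<string, unknown>): string[] {
      const type = String(block["@type"] ?? "");
      return (requiredProps[type] ?? []).filter((prop) => block[prop] === undefined);
    }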

Phase 4: Core Web Vitals via Lighthouse

Lighthouse runs as a CLI command Claude invokes. I default to running it across 8-12 representative templates per site rather than one URL. The output JSONs land in ./audit-output/lighthouse/ and Claude synthesises them into a CWV section that shows median LCP, CLS, INP, and TTFB across the template set, plus the worst-performing single page per metric.

The synthesis is what manual audits get wrong. A single Lighthouse run is point-in-time noise; 12 runs across templates is the real performance picture. Claude does the aggregation in seconds.
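
The mechanics are unglamorous: invoke the CLI per template URL, then read the metrics straight out of the report JSON. This sketch pulls LCP, CLS, and TTFB using current Lighthouse audit keys; the median helper is the whole aggregation step:

    import { execSync } from "node:child_process";
    import { readFileSync } from "node:fs";

    // Run Lighthouse headlessly against one template URL, writing a JSON report.
    function runLighthouse(url: string, outPath: string) {
      execSync(
        `npx lighthouse "${url}" --output=json --output-path="${outPath}" ` +
        `--only-categories=performance --chrome-flags="--headless"`,
        { stdio: "inherit" }
      );
    }

    // Pull the headline metrics out of a single report.
    function extractMetrics(reportPath: string) {
      const audits = JSON.parse(readFileSync(reportPath, "utf8")).audits;
      return {
        lcpMs:  audits["largest-contentful-paint"].numericValue,
        cls:    audits["cumulative-layout-shift"].numericValue,
        ttfbMs: audits["server-response-time"].numericValue,
      };
    }

    // Median across the 8-12 template runs, per metric.
    const median = (xs: number[]) =>
      [...xs].sort((a, b) => a - b)[Math.floor(xs.length / 2)];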

Phase 5: Report synthesis

After the four phases, Claude has roughly 30 to 80 issues identified. The synthesis prompt asks Claude to:

Cluster issues by severity (red blocks publishability, amber affects rankings, green is polish).

Estimate traffic impact per cluster using Search Console data.

Order remediation by traffic impact divided by engineering cost.

Output the top 10 issues in a one-page-per-issue format: severity, traffic-affected URL count, plain-English explanation, concrete fix, estimated 90-day metric improvement.

That is the deliverable. 10 issues a senior team can fix in 4 to 8 weeks, prioritised by impact, ready to hand to engineering. Manual audits often produce 200-row spreadsheets that nobody fixes; the 10-issue prioritised list gets fixed.
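
The ordering rule itself fits in a few lines. The fields and the cost scale are placeholders; the point is that the prioritisation is explicit rather than vibes:

    interface Issue {
      title: string;
      severity: "red" | "amber" | "green";
      affectedUrls: number;
      estClicksAtRisk: number;   // from the Search Console analysis in Phase 2
      engineeringCost: number;   // relative effort in whatever unit your team uses
    }

    // Traffic impact divided by engineering cost, highest first, top 10 only.
    const prioritise = (issues: Issue[]): Issue[] =>
      [...issues]
        .sort((a, b) => b.estClicksAtRisk / b.engineeringCost -
                        a.estClicksAtRisk / a.engineeringCost)
        .slice(0, 10);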

The audit prompt I actually use

This is the single prompt I run after the 90-second context brief. I have iterated on it for roughly 12 months across dozens of audits:

"You are a senior technical SEO auditor working on the site described in the brief above. Run a 5-phase audit in this order: technical crawl via Puppeteer, Search Console analysis via the Postgres MCP, schema validation, Core Web Vitals via Lighthouse CLI, and prioritised report synthesis. Save outputs in ./audit-output/. After the synthesis phase, present the top 10 issues in a one-issue-per-page format. For each issue: severity (red / amber / green), traffic-affected URL count, plain-English explanation, concrete fix, estimated 90-day metric improvement. Wait for my approval before generating fix patches. Use British spelling. Avoid em-dashes."

No John Wick framing, no "you are a paid expert" theatre. The instruction is direct, the structure is explicit, and the output format matches what gets read by clients and engineering teams.

What Claude Code cannot do here

An honest list of where the AI-assisted audit hits its ceiling:

Strategic prioritisation across business context. Claude can rank issues by traffic impact; it cannot tell you that a 5,000-traffic-loss issue on a marketing page matters more than a 50,000-traffic-loss issue on a deprecated product page that the company is phasing out anyway.

Competitive positioning. Claude can compare your SERPs to competitors mechanically; it cannot tell you that the competitor is winning because of a brand campaign you do not see in the data.

Editorial judgement on content quality. Claude can flag thin content statistically. Whether the thin content should be deleted, expanded, or kept as a low-value-but-niche-relevant page is a human call.

Stakeholder communication. The audit lands well or poorly based on how it is framed for the client team. That framing is human work.

Treat Claude Code as the senior engineering collaborator on the audit, not as the senior strategist. The 30 percent of the audit that is judgement stays yours.

The honest cost

Claude API costs for a 1,000-page audit run roughly 8 to 25 USD on Sonnet-tier pricing. The Puppeteer crawl, Lighthouse runs, and Postgres queries are free; the cost is entirely in the model tokens. A traditional manual audit at agency rates lands at 5,000 to 25,000 USD. The cost gap is real and it is the reason this workflow is going to become standard within 18 months.

The labour shifted, not disappeared. The audit still requires a senior SEO to run the prompts, interpret the output, and own the strategic prioritisation. What disappeared is the 6 hours of crawling, parsing, and aggregating that no senior person should ever have spent time on.

Bottom line

Claude Code with a real MCP stack collapses 70 percent of an SEO audit to a 90-minute session. The remaining 30 percent (judgement, prioritisation, communication) stays a human job and pays better than ever because the cost of the boring 70 percent has fallen to near zero.

The agencies that adapt this workflow will outprice the ones that do not within 12 months. The senior SEOs who learn it become more valuable, not less, because their judgement is the binding constraint on output volume.

At Seahawk Media we run technical SEO audits using exactly this workflow on client engagements starting from 2,500 USD. The first conversation is free and the audit deliverable is the same 10-issue prioritised list described above, regardless of which agency tier you pick.

How to run a technical SEO audit in 2026 (the methodology pillar)

SEO for operators in 2026 (the head-term guide)

Best SERP tracking tools in 2026
