
The MCP servers I run in production at Seahawk: an honest stack

Most writeups about MCP servers in 2026 are demos, not production stacks. The honest version: the MCP ecosystem has roughly 200 servers, half of them are toys, and the production-grade stack you actually run for daily agency work is smaller and more boring than the demo videos suggest. What follows is the stack we run at Seahawk Media across client engagements, the trade-offs we have learned, and what we tried that did not earn its place.

I run an agency that ships custom WordPress, Next.js, and Astro work plus SEO and content. The MCP layer is load-bearing across all of that now. If you are weighing which MCP servers to actually install on your daily-driver Claude Code, this is the working answer.

The eight MCP servers we run daily

Filesystem (built-in)

The most-used MCP we have. Reads, writes, searches across the project tree. Every Claude Code session uses it. Nothing exotic; just essential.
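There is almost nothing to configure. For anyone who has not wired it up yet, this is roughly what the project-level config looks like, using the reference filesystem server; the file name (`.mcp.json`) and fields can vary by client version, and the path is a placeholder:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/project"
      ]
    }
  }
}
```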

Puppeteer

Browser automation for QA, content scraping, screenshot capture, post-deploy verification. Replaced two paid SaaS tools we used to subscribe to. The only friction is install: we keep a node-puppeteer image around because Chromium dependency hell still exists.
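The image itself is nothing special. A sketch of the pattern, assuming a Debian-based Node image; the package list and the Puppeteer env-var names (`PUPPETEER_SKIP_DOWNLOAD`, `PUPPETEER_EXECUTABLE_PATH`) vary by Puppeteer version, so check yours:

```dockerfile
FROM node:20-slim

# Install the distro's Chromium so Puppeteer skips its own download
# and the shared-library dependencies come along with the package.
RUN apt-get update && apt-get install -y --no-install-recommends \
      chromium fonts-liberation \
    && rm -rf /var/lib/apt/lists/*

# Point Puppeteer at the system binary instead of a bundled one.
ENV PUPPETEER_SKIP_DOWNLOAD=true \
    PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium
```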

Postgres

Direct SQL against client databases. Replaces the SQL editor tab open in another window. Particularly useful for content audits (which posts have empty featured images, which have orphan revisions, which have tags that match nothing). Querying through Claude is faster than writing the SQL yourself for ad-hoc questions.
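For concrete flavour, the "empty featured images" audit against a stock WordPress schema looks roughly like this (assuming the default `wp_` table prefix; yours may differ):

```sql
-- Published posts with no featured image (no _thumbnail_id meta row)
SELECT p.ID, p.post_title
FROM wp_posts p
LEFT JOIN wp_postmeta m
  ON m.post_id = p.ID AND m.meta_key = '_thumbnail_id'
WHERE p.post_type = 'post'
  AND p.post_status = 'publish'
  AND m.meta_id IS NULL
ORDER BY p.post_date DESC;
```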

Brave Search

SERP fetching for SEO audits. We use it every day for competitive research, ranking diagnosis, and AI Overview citation tracking. The Brave API is more reliable than scraping Google directly; the cost is a few dollars per month at our usage.
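Setup is one API key in the server's environment. A sketch using the community reference Brave server; the package name and field names are per that reference implementation, so verify against the version you install, and the key is a placeholder:

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "<your-key>" }
    }
  }
}
```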

GitHub

Reads issues, pull requests, comment threads, and repository metadata. Used for triaging incoming work, drafting PR descriptions, and synthesising recent commit activity into client status reports.

Sentry

Reads error events from production sites we monitor. Claude can correlate Sentry events to recent code changes and surface the likely culprit faster than I can manually click through the error trail. That has saved us hours on every incident response.

Notion

Reads our internal documentation and client briefs. Particularly useful for cross-referencing client standards during code generation. Replaces the manual "let me check what we agreed in the discovery doc" step.

Linear

Reads issues, status, and assignees. Used for daily standup prep, weekly client reports, and auto-drafting issue updates from commit activity.

What we tried and dropped

Five MCP servers we installed, used for a month, and removed:

Slack MCP. The signal-to-noise ratio of pulling Slack history into Claude was poor: messages are too unstructured to make useful context.

Google Drive MCP. The auth flow was fragile and the document-parsing was unreliable. We export to markdown manually instead.

Generic web-fetch MCPs. Most are slower or less reliable than Puppeteer when we need to actually browse a site. Removed all of them.

Multiple AI-image-generation MCPs. We have a working FAL pipeline outside Claude already; the MCP wrappers added latency without saving real time.

Generic database adapters. The dedicated Postgres MCP outperformed every generic multi-database MCP we tried.

The MCP stack hygiene rules

Three rules we have settled on after a year of trial and error:

Less is more

Eight MCPs is the sweet spot for us. We ran fifteen for a month; the context window pressure made Claude slower and less accurate. Every installed MCP injects its tool definitions into the context on each request, and that cost compounds across servers. Audit quarterly; remove anything not used in the last 30 days.
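The quarterly audit starts with knowing what is actually installed. A minimal sketch, assuming a project-level `.mcp.json`; last-used data does not live in the config, so this only surfaces the inventory, and the cap of eight is just our own rule of thumb:

```python
import json
from pathlib import Path

MAX_SERVERS = 8  # the cap we have settled on

def audit_mcp_config(path: str = ".mcp.json") -> list[str]:
    """List configured MCP server names; nudge if over the cap."""
    config = json.loads(Path(path).read_text())
    servers = sorted(config.get("mcpServers", {}))
    if len(servers) > MAX_SERVERS:
        print(f"{len(servers)} servers configured -- consider pruning to {MAX_SERVERS}")
    return servers
```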

Built-in beats third-party where it overlaps

Anthropic-shipped MCPs (Filesystem, the official integrations) are more reliable than community equivalents in our experience. We default to built-in and only reach for third-party when the built-in does not cover the use case.

Auth is the hidden cost

Every MCP server has its own auth story. Some use OAuth, some need API keys, some need service-account JSON files. The setup overhead is real; the maintenance cost when tokens rotate is realer. Pick MCPs whose auth pattern matches your existing infrastructure rather than ones that require new credential sprawl.

What we are watching for 2026

Three MCP categories I expect to mature:

Agency-CRM MCPs (HubSpot, Salesforce, Pipedrive). The ones currently shipping are alpha-quality; serious versions probably arrive H2 2026.

Project-specific MCPs (a Seahawk-internal MCP exposing our client database, our pricing logic, our hosting inventory). We are building this in-house rather than waiting for a generic version.

Browser-control MCPs at scale. Puppeteer is good for one-page interactions; running headless browsers across hundreds of pages efficiently still requires custom orchestration. The MCP layer here is improving but not solved.

Bottom line

Eight MCP servers run daily at Seahawk: Filesystem, Puppeteer, Postgres, Brave, GitHub, Sentry, Notion, Linear. That is the stack that ships. The other 200 MCP servers in the ecosystem are largely demos or duplicates of these.

If you are starting fresh, install Filesystem (already there), then Puppeteer, then add Postgres or Brave depending on whether your work is more code-heavy or more research-heavy. Add the others as specific needs arise. The MCP stack should grow because the work demanded it, never because the demo video was good.
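One way to wire up that starter set is through the Claude Code CLI. The `claude mcp add` subcommand and the `--` separator are assumptions about your CLI version, and the connection string is a placeholder, so check `claude mcp --help` first:

```shell
# Add the Puppeteer reference server to the current project
claude mcp add puppeteer -- npx -y @modelcontextprotocol/server-puppeteer

# Add Postgres, pointing at a client database
claude mcp add postgres -- npx -y @modelcontextprotocol/server-postgres "postgresql://user:pass@host/db"

# Confirm what is configured
claude mcp list
```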
