Yesterday at 10pm I had a vague idea about turning a Directus deployment into a sales asset. This morning at 7am I had a working admin on Railway, three live dashboards, a public demo blog with five posts, a 2,400-word sales pillar that pitches the whole shape as a service, and three production bugs I caught and fixed at midnight. The total infrastructure cost: $5 per month. The total time: roughly nine hours of focused work spread across one night and one early morning.
This is the case study. What got built, what broke, what I learned, and the actual numbers behind the build. If you are evaluating whether to commission a similar project for your business, the data points below are the most honest reference I can offer.
What I set out to do
Three goals, in priority order.
First, deploy Directus connected to my existing Supabase Postgres database. Goal: a generic CRUD admin pointing at the same data my custom Astro admin already manages. Reason: not to replace the custom admin (which has Monday-style inline editing the team likes), but to add the operations layer the custom admin does not have. Bulk edits, saved views, role-based permissions, schema browser, Insights dashboards.
Second, set up the dashboards I have been wanting for six months. Three boards: content pipeline (blog publishing velocity, topic queue depth), SEO health (rank positions, AI Overview presence rate, intent distribution), tool usage (AEO tool searches, brand-name tool searches, advisor usage). Concrete numbers I can glance at on Mondays to know whether the engine is running.
Third, turn the whole build into a sales asset. The Directus admin I deploy for myself is identical to the one I would deploy for a client. So the deployment itself becomes the demo. A prospective client clicks a button on my sales page, lands in a real Directus admin populated with real data, types into the rich-text editor, sees how saved views work, then comes back to book a call. Total demo-loop friction: roughly fifteen seconds from page-land to live editor.
Hour 0 to 1: deploy
Railway Hobby plan, official Directus template, one-click deploy. The template ships with Directus, Redis, a bundled PostGIS database, and an S3-compatible storage bucket. Total deploy time, including initial configuration: 35 minutes from blank Railway account to logged-in admin.
The first surprise: the Directus template wires its database connection via reference variables pointing at the bundled PostGIS service, not via individual host/port/user/password fields. To point Directus at my external Supabase Postgres, I had to find the connection-string variable (DB_CONNECTION_STRING) and paste in my Supabase Session pooler URL with credentials.
The second surprise: my Supabase database password contained a "#", which must be URL-encoded as %23 in a Postgres connection string. Without the encoding, the URL parser truncates the string at the "#" because a hash marks a URL fragment. Directus logged ECONNREFUSED ::1:5432 because it fell back to localhost when the connection string came through malformed. Half an hour of confusion, then I rotated the password to alphanumeric-only and the connection landed.
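For reference, here is the shape of the fix I skipped, sketched in TypeScript: build the connection string with the password encoded rather than rotating it. The project ref, region, and pooler host are placeholders, not my real values.

```ts
// Encode the password so reserved characters like "#" survive URL parsing.
const password = encodeURIComponent(process.env.SUPABASE_DB_PASSWORD!); // "#" -> "%23"

// Placeholder project ref and region; the Session pooler host comes from the
// Supabase dashboard's connection settings.
const connectionString =
  `postgresql://postgres.PROJECT_REF:${password}` +
  `@aws-0-eu-west-2.pooler.supabase.com:5432/postgres`;

// This value goes into DB_CONNECTION_STRING on the Directus service in Railway.
```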
Once Directus connected, all 12 tables from my Supabase database auto-imported as Directus Collections. Zero schema migration, zero data movement, zero downtime. The custom Astro admin and Directus admin both see the same data, both write to it, both immediately reflect each other's changes.
Hour 1 to 3: configure, configure, configure
The bulk of the work was not deployment. It was field-level configuration to make Directus behave like a polished admin instead of a raw database browser. Every column needs interface metadata, display formatting, sort order, width, group, and visibility decisions. Default Directus shows everything; the polished version shows only what the team needs.
Three observations from this phase.
Doing this by hand in the Directus admin UI is slow: roughly two minutes per field, ninety-plus fields across twelve collections, total click-time near three hours. I delegated to a browser-driving AI agent (Claude in Chrome), which switched to the Directus REST API for bulk configuration. The agent posted PATCH /fields/{collection}/{field} calls in sequence, with the full meta object as the payload. Three minutes per collection instead of forty. The whole config phase compressed from three hours to forty minutes.
The REST API approach also gave me reproducibility. Every PATCH is a curl-equivalent that I could re-run on a fresh Directus deployment to restore the same configuration. The configuration is effectively code, not a fragile pile of UI clicks.
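A sketch of one such call, assuming Node 18+ with global fetch and a static admin token; the collection, field, and meta values here are illustrative rather than the exact payloads the agent sent.

```ts
const DIRECTUS_URL = process.env.DIRECTUS_URL!;  // the Railway service URL
const ADMIN_TOKEN = process.env.DIRECTUS_TOKEN!; // a static admin access token

// PATCH one field's metadata; re-runnable against a fresh deployment.
await fetch(`${DIRECTUS_URL}/fields/demo_posts/title`, {
  method: "PATCH",
  headers: {
    Authorization: `Bearer ${ADMIN_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    meta: {
      interface: "input", // which editing widget the admin renders
      width: "full",      // layout width in the item form
      sort: 1,            // position in the collection's form
      note: "Headline shown on the public demo blog",
    },
  }),
});
```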
The last quarter of this phase was building the three Insights dashboards. Each panel is a JSON config sent to POST /panels. The dashboards landed in roughly twenty minutes once the field metadata was in place. Three boards, six panels, real data rendering. The numbers I had been guessing at for six months were finally on screen.
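Each panel followed the same pattern. A sketch of the AI Overview metric panel (the generated column it reads is explained under Bug 2 below), reusing the env variables from the field-config sketch; the dashboard id is a placeholder, and the position and size values are illustrative grid units.

```ts
const DASHBOARD_ID = "REPLACE_WITH_DASHBOARD_UUID"; // returned when the dashboard was created

await fetch(`${DIRECTUS_URL}/panels`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${ADMIN_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    dashboard: DASHBOARD_ID,
    name: "AI Overview presence %",
    type: "metric",                    // single-number panel
    position_x: 1,
    position_y: 1,
    width: 8,
    height: 6,
    options: {
      collection: "seo_serp_runs",
      field: "ai_overview_present_int",
      function: "avg",                 // average of a 0/100 column = percent
    },
  }),
});
```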
Hour 3 to 7: the three bugs that took the most time
No build is real until it breaks in production. Three bugs surfaced after I shipped to main and refreshed the live site.
Bug 1: Spline scene blocked by CSP, with three stacked causes
I had also shipped a 3D-first web pillar with a Spline NEXBOT robot hero. After deploy, the robot was invisible on production but worked locally. Three stacked issues in sequence.
One: the script tag using define:vars was inlined by Astro, which meant Vite did not bundle the dynamic import of @splinetool/viewer, which meant the bare module specifier was unresolvable in the browser. Two: after fixing that, the spline-viewer element was inside a parent with the hidden attribute at upgrade time, so the Lit custom element initialised with 0x0 dimensions and never recovered. Three: after fixing that, the Spline viewer fetches its WASM runtime from unpkg.com at scene-load time, which my CSP did not whitelist. Each fix surfaced the next bug. Total debug time: ninety minutes.
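The first two fixes, condensed into the client script as it ended up; the container id is illustrative, and the third fix lives in the CSP response header rather than in this file.

```ts
// A plain <script> in the Astro component, with define:vars removed, so Vite
// bundles it and resolves the bare specifier at build time.

// Un-hide the container *before* the custom element upgrades; a Lit element
// that upgrades inside a hidden parent measures 0x0 and never recovers.
document.querySelector<HTMLElement>("#spline-hero")?.removeAttribute("hidden");

// Registers the <spline-viewer> custom element. The viewer still fetches its
// WASM from unpkg.com at scene load, so the CSP has to allow that host.
await import("@splinetool/viewer");
```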
Production-only debugging is its own discipline. The dev server resolves bare imports differently than the production build, and the dev environment runs with a more permissive default CSP, so the first and third failures were impossible to surface without an actual production deploy. My takeaway: ship to a real environment early, even when local works fine.
Bug 2: AI Overview boolean and Postgres avg()
One Insights panel wanted to show "what percent of tracked SERP runs surface an AI Overview." The seo_serp_runs table has a boolean column ai_overview_present. Naive instinct: avg() that column, multiply by 100, render as percent.
Postgres throws on avg(boolean); the function does not exist for that type. Several workaround attempts failed: Directus does not expose a "percentage" aggregate; count-of-true with a hardcoded denominator drifts as more rows land; casting at query time is not exposed through the Insights panel options.
The fix that worked: add a generated column ai_overview_present_int that computes as case when ai_overview_present then 100 else 0 end. avg() over an integer column works trivially and yields the percent directly. One-line SQL migration, zero application code change.
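The migration, reconstructed from the description above; Postgres requires generated columns to be STORED.

```sql
-- Mirror the boolean as 0/100 so avg() yields the percent directly.
alter table seo_serp_runs
  add column ai_overview_present_int integer
  generated always as (case when ai_overview_present then 100 else 0 end) stored;
```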
Bug 3: image hosts in CSP
I seeded the demo blog with five Unsplash photo URLs. The browser silently blocked them because images.unsplash.com was not whitelisted in img-src: solid black card images on the public demo blog. I added Unsplash to the CSP, the images rendered, and then I realised I had just violated my own non-negotiable rule about generating all images via FAL and serving them from Supabase Storage.
Correct fix: a script that pulls every demo_posts row, generates a per-category editorial photograph via FAL flux-pro/v1.1-ultra, re-encodes it with sharp to WebP at quality 82 and max-width 1600, uploads it to a public Supabase Storage bucket, and updates featured_image to the storage CDN URL. A twenty-minute build; the five FAL generations cost under a pound. I removed Unsplash from the CSP. Images now live in my own infrastructure: no external dependencies, no CSP-update-per-host treadmill going forward.
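The shape of that script, assuming @fal-ai/client (which reads FAL_KEY from the environment), sharp, and @supabase/supabase-js; the bucket name, prompt wording, and column names follow the description above but are illustrative rather than exact.

```ts
import { fal } from "@fal-ai/client";
import sharp from "sharp";
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

const { data: posts } = await supabase.from("demo_posts").select("id, category");

for (const post of posts ?? []) {
  // Generate one editorial photograph per post category.
  const result = await fal.subscribe("fal-ai/flux-pro/v1.1-ultra", {
    input: { prompt: `Editorial photograph for a ${post.category} article` },
  });
  const raw = await fetch(result.data.images[0].url).then((r) => r.arrayBuffer());

  // Re-encode to WebP at quality 82, max width 1600.
  const webp = await sharp(Buffer.from(raw))
    .resize({ width: 1600, withoutEnlargement: true })
    .webp({ quality: 82 })
    .toBuffer();

  // Upload to a public bucket and point the post at the CDN URL.
  const path = `demo/${post.id}.webp`;
  await supabase.storage
    .from("blog-images")
    .upload(path, webp, { contentType: "image/webp", upsert: true });
  const { data: pub } = supabase.storage.from("blog-images").getPublicUrl(path);
  await supabase.from("demo_posts").update({ featured_image: pub.publicUrl }).eq("id", post.id);
}
```

With the images on my own storage host, img-src in the CSP only needs 'self' plus the Supabase storage domain.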
Hour 7 to 9: the sales pillar
With Directus working and the demo blog rendering correctly, I wrote the sales pillar at /solutions/headless-cms-and-admin-tools/. Four service tiers (CMS, internal ops admin, directories, bespoke), priced in USD with GBP in brackets, six FAQs, a comparison table against WordPress, HubSpot, Notion, and Airtable, and a "brief I will not take" section to disqualify the wrong-shape clients early.
The pillar links directly to the live demo. Prospects clicking "OPEN THE EDITOR" land in the actual Directus admin with displayed credentials they paste in. Seconds later they are editing a real post in a real editor. The demo loop is fifteen seconds from page-land to live editing.
Total time on the pillar page: ninety minutes of writing plus thirty minutes of schema markup, internal links, and visual polish. The page itself took less time than two of the three bugs.
Numbers
Build cost breakdown, since this is the case-study post and that is the question prospects will ask.
Railway Hobby plan for Directus + Redis + bucket: $5 per month, billed monthly. First 30 days free.
Supabase Postgres: existing infrastructure, no new cost. I am already on the Pro plan at roughly $25 per month for the database.
FAL API for five hero images: under one pound total, roughly $1 USD.
DataForSEO for backfilling 42 keywords with search volume + intent + difficulty: $0.04 total.
My time, end-to-end including the bugs and writing: nine hours of focused work. At my consulting day rate of around $1,500 per day, that is roughly $1,700 in opportunity cost.
Total cash outlay to ship the working sales asset: roughly $5 per month in marginal infrastructure plus about $1 in one-off API fees. Total true cost including time: roughly $1,700.
Reference comparison: an agency quoting a "headless CMS admin tool build" engagement would typically charge $8,000 to $15,000 USD for the equivalent scope. I am charging $8,000 to $19,000 USD on my own service page. The gap between cost-to-build and price-to-sell is the entire economic engine of agency work. The case study is the proof that the cost-to-build number is real.
What this proves to a prospective client
Three things, all verifiable in the next ten minutes.
First, the build is real. Click the OPEN THE EDITOR button on the sales page. Type into the rich-text editor. Schedule a post. Look at the dashboards. The thing exists. It is not a mockup, it is not a Figma file, it is not a screenshot. It is the same software I would deploy for your business.
Second, the speed is real. Nine hours from "vague idea" to "production-deployed sales asset with live demo." A client engagement would not move at this speed, because client engagements include discovery, design review, stakeholder approvals, and security audits. But the underlying build velocity is what determines whether a six-week timeline is honest or optimistic. Nine hours of solo build effort corresponds to roughly three weeks of standard agency engagement time once the overhead is folded in. That math is what produces the six-to-eight-week tier-1 timeline.
Third, the failure modes are real. Three production bugs got caught and fixed in real time. CSP issues, dynamic-import bundling, generated-column arithmetic. These are the same bugs that surface in client engagements. The fact that I caught and resolved them on a live site is the actual demo of competence. A polished case study with no bugs would be the suspicious one.
What I would do differently
Three small reflections, in order of usefulness.
I would have started with the FAL image generation script before the Unsplash workaround. The CSP-update-then-revert dance cost ten minutes of unnecessary churn. The non-negotiable rule about FAL images exists for exactly this reason and I should have listened to it first.
I would have used the REST API approach for Directus configuration from the start instead of trying the UI click-through. The agent-driven REST-call sequence was three times faster and produced reproducible configuration. The lesson generalises beyond Directus: any admin that exposes a comprehensive REST API should be configured through that API rather than through its UI.
I would have written the case-study post sooner. This post is being written about twelve hours after the build finished. By the time I am writing this, some of the failure modes are already fading from memory. The honest case study is the one written within the same day as the build, while the bugs are still fresh enough to describe accurately.
Where to go next
If you are evaluating whether your business should commission a similar build, the demo loop is the fastest path to a useful answer. Open /solutions/headless-cms-and-admin-tools/, click the demo button, log in with the displayed credentials, spend fifteen minutes clicking around. By the end you will have a clear sense of whether this shape of tool fits your team or not.
If yes, book the thirty-minute call linked on that page. Tell me your existing stack, your team size, your data shape. By the end of the call you have a tier pick, a price range, and a delivery window. Most engagements I take run six to twelve weeks at $8,000 to $50,000 USD depending on scope. For the half I do not take, I tell you why on the call.
If you want the full sales pitch, the pillar page is at /solutions/headless-cms-and-admin-tools/. If you want to skip to the live demo, the credentials are on that page. If you want the next case study, the next blog post will probably be about either an Asian-corridor manufacturer build or the same pattern applied to a different vertical. Tell me which is more useful.
