People assume you need Kubernetes, a dedicated DevOps team, and a six-figure cloud bill to handle a million database rows and 10K daily writes. You don’t. Here’s our entire production stack — one VM, one database, and a lot of Cloudflare free tier.
The Stack
| Component | Service | Notes |
|---|---|---|
| Backend | GCP VM (e2-standard-2, 2 vCPU, 8GB) | Runs everything |
| Database | GCP Cloud SQL PostgreSQL + read replica | Primary + replica |
| Frontend | Cloudflare Pages | Free |
| CDN + DNS | Cloudflare | Free |
| Cache | Redis (on VM) | $0 |
| Queue | Bull + Redis (on VM) | $0 |
| Email | AWS SES | < $1 at our volume |
| Secrets | GCP Secret Manager | ~$0.10 |
| AI Services | Cloudflare Workers AI + OpenAI | Variable |
| Monitoring | PM2 + custom health checks | $0 |
Why This Works
Single VM, Multiple Processes
PM2 runs everything on one VM:
- Express.js API server
- 7 Bull queue workers (3 import, 2 bulk, 2 rate-limited)
- Scheduled jobs (cron-based)
- Redis server
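A process layout like this can be described in a single PM2 ecosystem file. The sketch below is illustrative only — the process names, script paths, memory limit, and cron schedule are assumptions, not our literal config:

```javascript
// ecosystem.config.js — hypothetical PM2 layout for one API process,
// 7 queue workers, and a scheduler, all on the same VM.
const config = {
  apps: [
    { name: "api", script: "dist/server.js", max_memory_restart: "1500M" },
    { name: "import-worker", script: "dist/workers/import.js", instances: 3 },
    { name: "bulk-worker", script: "dist/workers/bulk.js", instances: 2 },
    { name: "ratelimited-worker", script: "dist/workers/ratelimited.js", instances: 2 },
    // cron_restart relaunches the process on a schedule (assumed hourly here).
    { name: "scheduler", script: "dist/jobs/scheduler.js", cron_restart: "0 * * * *" },
  ],
};

module.exports = config;
```

`pm2 start ecosystem.config.js` then brings the whole stack up, and `pm2 restart all` is the entire deploy story.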
At 8GB RAM, this handles 1M+ active listings and 10K daily writes comfortably. Memory usage peaks at ~3GB during heavy import cycles.
Cloudflare Pages = Free Hosting + CDN
Our Hugo static site (landing pages, auth pages) and Cloudflare Pages Functions (SSR proxy) cost exactly $0. CF Pages includes:
- Global CDN
- Automatic HTTPS
- Edge functions (we use them to proxy SSR pages from the backend)
- Unlimited bandwidth
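As a sketch of the SSR proxy idea (the backend host and example paths here are invented), a Pages Function only needs to rewrite the incoming URL onto the API origin and return the fetched response:

```javascript
// Hypothetical URL rewrite for an SSR proxy running as a Pages Function.
const ORIGIN = "https://api.example.com"; // assumed backend origin

function rewriteToOrigin(requestUrl) {
  const url = new URL(requestUrl);
  // Keep path and query string, swap only the host.
  return ORIGIN + url.pathname + url.search;
}

// In an actual Pages Function this would be wired up roughly as:
//   export async function onRequest({ request }) {
//     return fetch(rewriteToOrigin(request.url), request);
//   }
```

Because the function runs at the edge, the response can also be cached there before it ever reaches the VM.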
Cloud SQL with Read Replica
The primary handles writes (imports, match calculations). The read replica handles all user-facing queries and crawler traffic, so a heavy import cycle doesn’t slow down the website.
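One simple way to wire that split up — sketched here with placeholder connection strings — is to route statements by type before they reach a driver pool:

```javascript
// Hypothetical read/write splitter: SELECTs go to the replica,
// everything else stays on the primary. Hosts are placeholders.
const PRIMARY = "postgres://primary.internal:5432/app";
const REPLICA = "postgres://replica.internal:5432/app";

function pickDatabase(sql) {
  // Leading SELECT is treated as read traffic; writes and DDL hit the
  // primary. (CTEs like `WITH ... SELECT` would need more care.)
  return /^\s*select\b/i.test(sql) ? REPLICA : PRIMARY;
}
```

The import workers point at the primary unconditionally, so replica lag never affects write correctness.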
AWS SES for Email
Verified domain, DKIM configured, production access. At our volume (< 1,000 emails/month), it’s under $1.
Where We DON’T Cheap Out
Database. Cloud SQL with a read replica. Losing 1M job listings or having slow user queries would be catastrophic.
Secrets. GCP Secret Manager for API keys and credentials. Not in `.env` files, not in code.
Backups. Automated daily database backups with point-in-time recovery.
The Key Optimizations
1. Block Unnecessary Resources During Imports
Some ATS platforms render heavy career pages. We intercept requests and block images, stylesheets, fonts, and media — only fetching the HTML and data we need. This cut bandwidth consumption by ~30%.
2. Stale-While-Revalidate Caching
Category pages serve cached data instantly and refresh in the background. Users never see a slow load, and the database handles far fewer queries.
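A stripped-down sketch of the pattern (the TTL and the in-memory `Map` are illustrative; a real version would likely sit in front of Redis):

```javascript
// Minimal stale-while-revalidate cache: always answer from cache when
// possible, and refresh expired entries in the background.
const TTL_MS = 60_000; // assumed freshness window
const cache = new Map(); // key -> { value, storedAt }

function swrGet(key, fetchFresh, now = Date.now()) {
  const entry = cache.get(key);
  if (entry) {
    if (now - entry.storedAt > TTL_MS) {
      // Stale: serve the old value now, refresh asynchronously.
      Promise.resolve(fetchFresh()).then((value) =>
        cache.set(key, { value, storedAt: now })
      );
    }
    return entry.value;
  }
  // Cache miss: only the very first request pays the full cost.
  const value = fetchFresh();
  cache.set(key, { value, storedAt: now });
  return value;
}
```

Only a cold cache ever blocks on the database; every later request, fresh or stale, returns immediately.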
3. AI Cost Reduction
We moved from expensive models to smaller ones where quality was sufficient, batched embedding calls, and added in-memory caching for repeated title lookups. Per-user AI cost: ~$0.03/month.
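Batching is the easy win of the three. A sketch of the idea — the batch size and the `embedBatch` client call are assumptions, since providers cap how many inputs one request may carry:

```javascript
// Split items into fixed-size groups so one API call embeds many titles.
function batch(items, size) {
  const groups = [];
  for (let i = 0; i < items.length; i += size) {
    groups.push(items.slice(i, i + size));
  }
  return groups;
}

// embedBatch is an assumed client function that embeds an array of
// strings in a single request.
async function embedAll(titles, embedBatch, size = 96) {
  const vectors = [];
  for (const group of batch(titles, size)) {
    vectors.push(...(await embedBatch(group)));
  }
  return vectors;
}
```

With a cache of previously embedded titles in front of `embedAll`, repeated job titles cost nothing at all.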
Scaling When Needed
When we eventually need to scale:
- Vertical first. Double the VM RAM for a few dollars more. Still cheaper than any Kubernetes setup.
- Separate workers. Move queue workers to their own VM when CPU becomes the bottleneck.
- Cache more aggressively at the CDN. Our SSR pages already cache at the edge; we can push TTLs higher.
The point isn’t that our setup is the cheapest possible — it’s that a single developer can run a serious data platform without a DevOps team, without Kubernetes, and without VC-funded cloud bills.
See the result at MisuJob — 1M+ tech jobs, AI matching, all running on infrastructure that a solo dev can manage.
What’s the simplest production setup you’ve run? We’d love to compare notes.

