
How terraform-stack works

A complete tour of the architecture: data flow, subsystems, technology choices, performance, and where the project is heading next.

TL;DR

Four small Terraform modules — Vercel, Supabase, Cloudflare, optional DigitalOcean — composed in a root main.tf. The Supabase and Cloudflare modules apply first; the Vercel module reads their outputs to template its env vars. One terraform apply produces a full, branded, TLS-terminated, database-backed Next.js stack.

Core data flow

From the moment terraform apply starts to the moment the last resource is created.

  terraform apply
       │
       ├─────────────┬──────────────┬─────────────────┐
       ▼             ▼              ▼                  ▼
  Supabase     Cloudflare      Vercel              DigitalOcean
  module       module          module              (optional)
       │             │              │                  │
       ▼             ▼              │                  ▼
  Project +    DNS A +CNAME    Project linked     Droplet or
  generated    R2 bucket       to GitHub repo     DOKS cluster
  DB password  KV namespace        │                  │
       │             │              │                  │
       └─── outputs ─┴──── env_vars ┘                  │
                          │                            │
                          ▼                            ▼
              Vercel project deploys      k8s-ops-toolkit chart
              automatically on git push   installs onto DOKS

Each subsystem, deep-dived

Every component in the data flow above, opened up and explained.

modules/vercel

Provisions a Vercel project linked to a GitHub repo, a project domain, and environment variables from a map(string) input. Inputs are project_name, domain, github_repo, and env_vars. Outputs are project_id and project_name.

Env vars are applied to all three Vercel targets — production, preview, development — so every PR preview deploys with the same configuration as production. The framework field is hard-coded to Next.js because that is what this stack ships.
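In provider terms, that shape can be sketched roughly as below. Resource and attribute names follow the vercel/vercel provider's documented schema, but treat this as an illustration of the module's structure, not its verbatim source:

```hcl
resource "vercel_project" "this" {
  name      = var.project_name
  framework = "nextjs" # hard-coded: this stack ships Next.js

  git_repository = {
    type = "github"
    repo = var.github_repo # "owner/name"
  }
}

# One resource per entry in the env_vars map, applied to all three
# targets so PR previews run with production's configuration.
resource "vercel_project_environment_variable" "this" {
  for_each   = var.env_vars
  project_id = vercel_project.this.id
  key        = each.key
  value      = each.value
  target     = ["production", "preview", "development"]
}
```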

modules/supabase

Provisions a Supabase project in your chosen region under your organisation, plus a 32-character random database password. Inputs: project_name, org_id, region. Outputs: project_id, api_url, anon_key, service_role_key, database_password.

The anon and service-role keys are marked sensitive in the module — make sure your remote state backend is encrypted. The database password is also sensitive; never commit a state file that contains these.
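A minimal sketch of the project-plus-password pairing, assuming the supabase/supabase provider's `supabase_project` resource and the hashicorp/random provider:

```hcl
# Password is generated at apply time; it lives in state, not in source.
resource "random_password" "db" {
  length  = 32
  special = false
}

resource "supabase_project" "this" {
  organization_id   = var.org_id
  name              = var.project_name
  region            = var.region
  database_password = random_password.db.result
}

# Marked sensitive so Terraform redacts it from plan and apply output.
output "database_password" {
  value     = random_password.db.result
  sensitive = true
}
```

Sensitive markers keep the value out of CLI output, but it still exists in plaintext inside the state file, which is why the backend must be encrypted.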

modules/cloudflare

Provisions DNS records pointing at Vercel (apex A and www CNAME), an R2 bucket named after the domain, and a Workers KV namespace. The R2 bucket name has dots replaced with hyphens because R2 does not allow dots in bucket names. Inputs: domain. Outputs: zone_id, r2_bucket, kv_namespace.

The Cloudflare zone for the domain must already exist; the module does not create it. Adding a domain to Cloudflare is a one-time UI action and the module assumes it has been done.
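The module body looks roughly like this, using cloudflare provider v4 resource names; `var.account_id` (needed for R2 and KV) is an assumption here, supplied via provider config or a variable not listed among the module's documented inputs:

```hcl
# Zone must already exist; the module only looks it up.
data "cloudflare_zone" "this" {
  name = var.domain
}

resource "cloudflare_record" "apex" {
  zone_id = data.cloudflare_zone.this.id
  name    = "@"
  type    = "A"
  value   = "76.76.21.21" # Vercel's documented apex A record
}

resource "cloudflare_record" "www" {
  zone_id = data.cloudflare_zone.this.id
  name    = "www"
  type    = "CNAME"
  value   = "cname.vercel-dns.com"
}

# R2 forbids dots in bucket names, so "example.com" -> "example-com".
resource "cloudflare_r2_bucket" "this" {
  account_id = var.account_id
  name       = replace(var.domain, ".", "-")
}

resource "cloudflare_workers_kv_namespace" "this" {
  account_id = var.account_id
  title      = replace(var.domain, ".", "-")
}
```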

modules/digitalocean (optional)

Off by default. Provisions a single droplet or small DOKS cluster for workloads that do not fit on Vercel — long-running jobs, non-HTTP services, anything that needs persistent state on disk. Inputs (when enabled): droplet_size, droplet_region, ssh_key_id. Outputs: droplet_ip, droplet_id.

Pairs with the k8s-ops-toolkit: this module provisions the cluster, that toolkit deploys the platform stack onto it. Same opinionated stack, full coverage from infrastructure to observability.
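The droplet path of the module can be sketched as follows; the image name is an assumption, and the off-by-default gating is assumed to live on the module block in the root (for example via count), not inside the module itself:

```hcl
resource "digitalocean_droplet" "this" {
  name     = "${var.project_name}-worker"
  size     = var.droplet_size   # e.g. "s-1vcpu-1gb"
  region   = var.droplet_region # e.g. "nyc3"
  image    = "ubuntu-24-04-x64" # illustrative default
  ssh_keys = [var.ssh_key_id]
}

output "droplet_ip" {
  value = digitalocean_droplet.this.ipv4_address
}

output "droplet_id" {
  value = digitalocean_droplet.this.id
}
```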

main.tf wiring

The root main.tf is a few dozen lines. It instantiates each module, passes the user-facing variables through, and wires module outputs into the Vercel module's env vars. Terraform's dependency graph orders the apply automatically: because the Vercel module references Supabase and Cloudflare outputs, those two apply first and Vercel applies last.
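The wiring reduces to roughly this; the env var names and root variable names (var.supabase_org_id and so on) are illustrative assumptions:

```hcl
module "supabase" {
  source       = "./modules/supabase"
  project_name = var.project_name
  org_id       = var.supabase_org_id
  region       = var.supabase_region
}

module "cloudflare" {
  source = "./modules/cloudflare"
  domain = var.domain
}

module "vercel" {
  source       = "./modules/vercel"
  project_name = var.project_name
  domain       = var.domain
  github_repo  = var.github_repo

  # These references are the dependency graph: Terraform applies
  # Supabase and Cloudflare before it can template these values.
  env_vars = {
    NEXT_PUBLIC_SUPABASE_URL      = module.supabase.api_url
    NEXT_PUBLIC_SUPABASE_ANON_KEY = module.supabase.anon_key
    CLOUDFLARE_KV_NAMESPACE       = module.cloudflare.kv_namespace
  }
}
```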

Adding a new provider follows the same pattern: write a module under modules/your-provider/, instantiate it in main.tf, and optionally pipe its outputs into the Vercel env vars.

Why this stack

The road not taken matters as much as the road taken. Here is what was picked and why, alongside what was rejected and why.

Picked

Terraform 1.9+

Mature ecosystem, broad provider support, deterministic plans. The default for IaC.

Not this

Pulumi — fine choice, code-first ergonomics. We picked Terraform because the providers we target all have stable Terraform modules first.

Picked

vercel/vercel provider

Official, kept current with Vercel API changes.

Not this

A community provider — provider lag is operationally painful.

Picked

supabase/supabase provider

Official from Supabase. Project lifecycle is the only thing we manage; the schema is owned by the app.

Not this

Manual project setup via the dashboard — undermines reproducibility.

Picked

cloudflare/cloudflare provider

Comprehensive coverage of DNS, R2, KV, and Workers. The DNS-as-code story is mature.

Not this

AWS Route 53 — would couple this stack to AWS for no reason; Cloudflare is already the CDN.

Picked

digitalocean/digitalocean provider

Cheapest credible cloud for compute, simple pricing model, fast UI for sanity checks.

Not this

EC2 — fine, but more expensive, with more configuration surface for the same shape of workload.

Picked

random_password

Generates the Supabase DB password without checking it into the repo. Persists in state, not in source.

Not this

Manually setting a password — irreproducible, leak-prone.

Picked

S3-compatible state backend

Works with AWS S3 directly and with Cloudflare R2 via the S3 protocol — keeps state in the same provider as your infrastructure if you want.

Not this

Terraform Cloud — fine option; we deliberately do not require it because solo engineers often skip it.
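Pointing the S3 backend at R2 uses Terraform's custom-endpoint support plus a handful of skip flags, since R2 speaks the S3 protocol but not every S3 API. A sketch, with the bucket name and account ID as placeholders:

```hcl
terraform {
  backend "s3" {
    bucket = "terraform-stack-state" # hypothetical bucket name
    key    = "terraform-stack.tfstate"
    region = "auto"

    # Redirect the S3 protocol to Cloudflare R2.
    endpoints = {
      s3 = "https://ACCOUNT_ID.r2.cloudflarestorage.com"
    }

    # R2 is not AWS, so skip the AWS-specific checks.
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_s3_checksum            = true
  }
}
```

Credentials come from R2 API tokens exposed as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, so no AWS account is involved.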

Performance & observability

Performance for IaC is mostly about apply duration, not runtime cost. A clean apply of all four modules — Supabase project create, Cloudflare DNS records, R2 bucket, KV namespace, Vercel project, Vercel env vars, Vercel domain — completes in roughly 90 seconds against the public APIs.

The Supabase project takes the longest (typically 30–60 seconds for project provisioning). Cloudflare and Vercel calls return in single-digit seconds. DigitalOcean droplet creation is similar to Cloudflare. A subsequent terraform apply with no changes runs in under 10 seconds.

State file size is tiny — under 50KB even for a fully populated stack — because Terraform stores resource state, not the full provider response. Backend reads and writes are not a meaningful cost.

Plan-time validation catches the common errors before any provider call: missing region, malformed domain, mistyped github_repo. The pattern of the providers is similar enough that terraform plan output reads cleanly across all four.
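One way such checks can be expressed is Terraform's native variable validation, which fails at plan time before any provider call; the exact rules in the repo may differ from this sketch:

```hcl
variable "github_repo" {
  type        = string
  description = "GitHub repository in owner/name form"

  validation {
    condition     = can(regex("^[^/]+/[^/]+$", var.github_repo))
    error_message = "github_repo must be in the form owner/name."
  }
}

variable "domain" {
  type        = string
  description = "Apex domain, e.g. example.com"

  validation {
    condition     = can(regex("^[a-z0-9-]+(\\.[a-z0-9-]+)+$", var.domain))
    error_message = "domain must be a bare domain like example.com."
  }
}
```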

Where it is heading

  • Resend module for transactional email + DKIM setup against the Cloudflare zone.
  • Stripe products and prices as IaC. Subscription tiers should be reproducible across environments.
  • GitHub Actions module that provisions deploy keys, repo secrets, and an OIDC role for CI to assume cloud creds without static secrets.
  • A bootstrap stack that creates the Cloudflare R2 bucket and IAM keys for the main stack's remote state — solves the chicken-and-egg of state-bucket-bootstrapping.
  • Pulumi parity. The same modules expressed as Pulumi resources for teams that prefer it.

Read the full whitepaper for the formal technical write-up.