How a buying team finally found every sample.
A walk-through of the architecture, data flow, and design decisions behind the Sample Lifecycle Tracker.
If you only read one paragraph.
Every sample is a row in Postgres with a state, a buyer, a supplier, and a long tail of events. Twelve roles see different slices of that data, enforced by row-level security in the database itself. State transitions go through a trigger that writes an audit row and rejects illegal moves. Updates push live to every connected client over Supabase realtime. Each morning, a cron job emails each role a list of their overdue items. Each night, two materialised views refresh, powering the directors’ analytics dashboards. The whole thing is one Next.js app on Vercel.
From buyer’s click to factory liaison’s screen.
One database write fans out into four effects: the row updates, the trigger writes the audit row, every connected client whose RLS allows it sees the change, and the dashboard counter increments without a refresh. None of this requires application code; it is structurally guaranteed by Postgres plus Supabase realtime.
Each piece, on its own.
Auth and onboarding
Supabase Auth with magic-link email. New staff are added by an admin, who picks a role from a dropdown. A welcome email goes out via Resend with a single sign-in link. First-time login lands on an onboarding screen tailored to the role — a buyer sees a tour of sample creation, a factory liaison sees the at-factory queue with a one-line explainer.
The role is stored in user_roles, joined into RLS policies via auth.uid(). There is exactly one current role per user. Role changes are an explicit DDL-style admin action, audit-logged.
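A policy in that style might be sketched as follows; the samples column names and the privileged role names are assumptions, since the post does not spell out the twelve roles:

```sql
-- Sketch of one per-table policy joining user_roles via auth.uid().
-- Column and role names here are illustrative, not the real policy set.
alter table samples enable row level security;

create policy buyers_see_own_samples on samples
  for select
  using (
    buyer_id = auth.uid()
    or exists (
      select 1
      from user_roles ur
      where ur.user_id = auth.uid()
        and ur.role in ('director', 'admin')  -- hypothetical privileged roles
    )
  );
```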
The sample state machine
Eight states: requested, ordered, received, with-design, with-sales, at-factory, returned, cut-for-production. Plus an absorbing archived state for written-off samples. Allowed transitions live in a small lookup table; the trigger function enforces them.
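A minimal sketch of that lookup table and trigger, assuming table and column names (sample_transitions, sample_audit, samples.state) that the post does not name:

```sql
-- Allowed transitions live as rows; the trigger consults them and
-- writes the audit record in the same transaction as the update.
create table sample_transitions (
  from_state text not null,
  to_state   text not null,
  primary key (from_state, to_state)
);

create or replace function enforce_sample_transition()
returns trigger
language plpgsql
as $$
begin
  if new.state is not distinct from old.state then
    return new;  -- no state change, nothing to check
  end if;
  if not exists (
    select 1 from sample_transitions
    where from_state = old.state and to_state = new.state
  ) then
    raise exception 'illegal transition: % -> %', old.state, new.state;
  end if;
  insert into sample_audit (sample_id, from_state, to_state, changed_by)
  values (new.id, old.state, new.state, auth.uid());
  return new;
end;
$$;

create trigger sample_state_guard
  before update of state on samples
  for each row
  execute function enforce_sample_transition();
```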
Comments and attachments
Each sample has its own page with a comment thread (sample_comments, RLS-scoped) and a photo gallery (Supabase Storage). Comments are realtime: anyone on the detail view sees teammates' new comments appear live. Photos upload via signed URLs; the bucket is private, and reads go through short-lived signed URLs that respect role boundaries.
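Supabase realtime only streams tables that have been added to its logical-replication publication; assuming the default publication name, the wiring for the comment thread is one statement (RLS still filters which rows each client receives):

```sql
-- Opt the comments table into the realtime stream.
alter publication supabase_realtime add table sample_comments;
```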
Overdue alerts
A Vercel Cron at 07:00 UTC hits /api/cron/overdue. The handler runs one query per role, each with its own role-specific overdue rules.
The result feeds a React Email template, sent via Resend to the role’s mailbox. Each role gets only what they can act on. No CC, no broadcast.
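Those per-role rules could be expressed as plain data that the handler iterates over. The sketch below is hypothetical: it shows three of the twelve roles, and the state names map to the machine above but the thresholds and field names are made up.

```typescript
// Hypothetical per-role overdue rules: each role is responsible for
// samples sitting in certain states past a certain dwell time.
type Role = 'buyer' | 'factory_liaison' | 'designer';

interface SampleRow {
  id: string;
  state: string;
  daysInState: number;
}

const OVERDUE_RULES: Record<Role, { states: string[]; maxDays: number }> = {
  buyer: { states: ['requested', 'ordered'], maxDays: 7 },
  factory_liaison: { states: ['at-factory'], maxDays: 14 },
  designer: { states: ['with-design'], maxDays: 5 },
};

// Filter one role's slice down to what that role can act on today.
function overdueFor(role: Role, rows: SampleRow[]): SampleRow[] {
  const rule = OVERDUE_RULES[role];
  return rows.filter(
    (r) => rule.states.includes(r.state) && r.daysInState > rule.maxDays,
  );
}
```

Keeping the rules as data rather than twelve hand-written queries makes "each role gets only what they can act on" a property of one function.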
Analytics
Two materialised views back the directors’ dashboards. Refreshed via pg_cron at 02:00 UTC nightly with REFRESH MATERIALIZED VIEW CONCURRENTLY, so reads are never blocked.
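One of the two views might be sketched like this; the column names are assumptions, and note that refreshing with CONCURRENTLY requires a unique index on the view:

```sql
-- Illustrative shape of mv_spend_by_supplier (real columns unknown).
create materialized view mv_spend_by_supplier as
select supplier_id,
       date_trunc('month', ordered_at) as month,
       sum(cost)  as spend,
       count(*)   as samples
from samples
group by supplier_id, month;

-- CONCURRENTLY refuses to run without a unique index on the view.
create unique index on mv_spend_by_supplier (supplier_id, month);

-- Nightly refresh at 02:00 UTC via pg_cron.
select cron.schedule(
  'refresh-spend-by-supplier',
  '0 2 * * *',
  $$refresh materialized view concurrently mv_spend_by_supplier$$
);
```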
Why this, not that.
Next.js 14 App Router
Server components for read-heavy dashboards keep the JavaScript bundle small. Server Actions handle every state transition with full type safety from form to database.
A SPA + REST API would have doubled the codebase and added an entire class of stale-data bugs. RSC + server actions remove the need for client-side data-fetching libraries entirely.
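In that spirit, the state machine can be mirrored as a TypeScript union so a server action is type-checked end to end. The source of truth stays in the Postgres lookup table; the specific edges below (a mostly linear chain, with archive reachable from anywhere) are an assumption for illustration.

```typescript
// The eight working states plus the absorbing archive, as a union type.
type SampleState =
  | 'requested' | 'ordered' | 'received' | 'with-design'
  | 'with-sales' | 'at-factory' | 'returned' | 'cut-for-production'
  | 'archived';

// Hypothetical edge set mirroring the Postgres lookup table.
// Record<SampleState, ...> forces every state to declare its exits.
const NEXT: Record<SampleState, SampleState[]> = {
  requested: ['ordered', 'archived'],
  ordered: ['received', 'archived'],
  received: ['with-design', 'archived'],
  'with-design': ['with-sales', 'archived'],
  'with-sales': ['at-factory', 'archived'],
  'at-factory': ['returned', 'archived'],
  returned: ['cut-for-production', 'archived'],
  'cut-for-production': ['archived'],
  archived: [],  // absorbing: no way out
};

function canMove(from: SampleState, to: SampleState): boolean {
  return NEXT[from].includes(to);
}
```

Even with this client-side mirror, the database trigger remains the enforcement point; the TypeScript copy only gives earlier, cheaper failures.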
Supabase (Postgres 15 + Auth + Realtime + Storage)
One vendor, one bill, one auth boundary. RLS policies live next to the data they protect. Realtime is a Postgres logical-replication consumer, not a separate event bus.
Hand-rolled Postgres + NextAuth + S3 + a websocket server is four moving parts to maintain. For a team of one shipping in six weeks, the trade is not close.
Postgres Row-Level Security
Twelve roles, explicit per-table policies. The application cannot leak data even with a buggy route handler. Policies are DDL, version-controlled, code-reviewed, auditable.
Application-layer auth scattered across server actions is a bug-magnet. One forgotten check is a privacy incident.
Materialised views for analytics
Two views (mv_spend_by_supplier, mv_spend_by_category) refresh nightly via pg_cron. Dashboards query the views; queries are sub-200ms and isolated from the operational hot path.
Live aggregation against the events table on every dashboard load put the database under noticeable strain. Materialised views refreshed with CONCURRENTLY were the cheapest and best fix.
Vercel Cron + Resend for overdue emails
Cron at 07:00 UTC; one route handler builds the per-role digest from a SQL query and ships it via Resend. Idempotent, observable, free at this scale.
A separate scheduler service for one daily job would be over-engineered. Vercel Cron is exactly the right granularity.
Recharts for the analytics dashboards
Composable React primitives, SVG output, no canvas surprises. Renders identically on iPad and desktop.
Using D3 directly is more powerful but expensive to maintain. Chart.js renders to canvas, which is worse for accessibility.
What it talks to.
Resend
Transactional email. Welcome emails, overdue digests, weekly director summaries. React Email templates compiled at deploy time.
Client ERP (nightly)
Cost data flows in via a nightly CSV upload to a private S3 bucket; a cron handler upserts into samples on the cost columns only.
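The cost-only upsert could take this shape, assuming a staging table (erp_cost_staging) and cost column names the post does not specify:

```sql
-- Hypothetical sketch: touch only the cost columns, never the
-- operational state, so a bad ERP export cannot corrupt workflow data.
insert into samples (id, unit_cost, currency)
select id, unit_cost, currency
from erp_cost_staging
on conflict (id) do update
  set unit_cost = excluded.unit_cost,
      currency  = excluded.currency;
```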
Sentry
Error reporting only. Source maps uploaded from the Vercel build. PII scrubbed in beforeSend.
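A beforeSend scrubber is just a function over the outgoing event; a sketch with assumed field choices (in the app it would be passed as Sentry.init({ beforeSend: scrub })):

```typescript
// Loose stand-in for Sentry's event shape; the real type comes
// from @sentry/nextjs.
interface SentryLikeEvent {
  user?: { id?: string; email?: string; ip_address?: string };
  request?: { headers?: Record<string, string> };
  [key: string]: unknown;
}

function scrub(event: SentryLikeEvent): SentryLikeEvent {
  if (event.user) {
    // Keep the opaque id for correlation, drop direct identifiers.
    delete event.user.email;
    delete event.user.ip_address;
  }
  if (event.request?.headers) {
    delete event.request.headers['cookie'];
    delete event.request.headers['authorization'];
  }
  return event;
}
```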
What we watch.
We watch three signals: realtime delivery latency (Supabase metrics), p95 query duration (Postgres pg_stat_statements), and 5xx rate (Vercel + Sentry). Alarms fire to email if p95 doubles week-over-week or if the 5xx rate exceeds 0.5 percent of requests for ten consecutive minutes. We have not yet had a real alarm.
Where it goes next.
Supplier scorecards
Composite metrics (turnaround, reject rate, write-off rate) per supplier, plotted over time. Already prototyped on the existing materialised views.
Photo similarity search
pgvector + a small embedding model to flag near-duplicate samples across buyers in the same season. Low-priority but high-value when it hits.
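With pgvector that is one column and one operator; a sketch assuming a 512-dimension embedding and a hypothetical sample_photos table:

```sql
-- Hypothetical schema: store an embedding per photo, then rank
-- other photos by cosine distance (<=>) to flag near-duplicates.
create extension if not exists vector;
alter table sample_photos add column embedding vector(512);

select p.sample_id,
       p.embedding <=> q.embedding as distance
from sample_photos p
join sample_photos q on q.id = 'some-photo-id'
where p.id <> q.id
order by distance
limit 5;
```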
Mobile camera capture
PWA install prompt + getUserMedia for one-tap photo upload from the warehouse floor. The existing storage layer already handles it.
ERP write-back
Currently the ERP feeds in cost data; eventually approved-for-production rows could push back the other way. Requires the client’s ERP vendor to expose an API.
Want the engineering record?
The whitepaper covers architecture, RLS policy design, the implementation timeline, and the lessons learned in detail.