How it works · Sample Lifecycle Tracker

How a buying team finally found every sample.

A walk-through of the architecture, data flow, and design decisions behind the Sample Lifecycle Tracker.

The 60-second version

If you only read one paragraph.

Every sample is a row in Postgres with a state, a buyer, a supplier, and a long tail of events. Twelve roles see different slices of that data, enforced by row-level security in the database itself. State transitions go through a trigger that writes an audit row and rejects illegal moves. Updates push live to every connected client over Supabase realtime. Each morning, a cron job emails each role a list of their overdue items. Each night, two materialised views refresh, powering the directors’ analytics dashboards. The whole thing is one Next.js app on Vercel.

Core data flow

From buyer’s click to factory liaison’s screen.

┌──────────────────────────────────────────────────────────────┐
│ Buyer marks sample 4421 as 'with-sales'                      │
│   ↓ Server Action (typed, server-side, in the same repo)     │
│ Postgres UPDATE samples SET state='with-sales' WHERE id=4421 │
│   ↓ enforce_sample_transition() trigger:                     │
│     • checks allowed_transitions table                       │
│     • inserts row into sample_events (audit log)             │
│     • RAISE EXCEPTION on illegal moves                       │
│   ↓ logical replication                                      │
│ Supabase Realtime broadcasts to all subscribers              │
│   ↓ WebSocket                                                │
│ Sales lead's dashboard updates within ~120ms                 │
│ Factory liaison's filtered view does NOT update              │
│   (RLS rejects the row for that role)                        │
└──────────────────────────────────────────────────────────────┘

One database write triggers four side-effects: the row updates, the audit log records the transition, every connected client whose RLS allows it sees the change, and the dashboard counter increments without a refresh. None of this requires application code; it is structurally guaranteed by Postgres + Supabase realtime.

Subsystems

Each piece, on its own.

Auth and onboarding

Supabase Auth with magic-link email. New staff are added by an admin, who picks a role from a dropdown. A welcome email goes out via Resend with a single sign-in link. First-time login lands on an onboarding screen tailored to the role — a buyer sees a tour of sample creation, a factory liaison sees the at-factory queue with a one-line explainer.

The role is stored in user_roles and joined into RLS policies via auth.uid(). There is exactly one current role per user. Role changes are an explicit, audit-logged admin action.
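A minimal sketch of what one such policy could look like. The user_roles join via auth.uid() is from the text; the policy name, the 'buyer' role value, and the samples.buyer_id column are illustrative assumptions, not the production schema:

```sql
-- Hypothetical RLS policy: buyers read only their own samples.
-- user_roles and auth.uid() are as described in the text;
-- buyer_id and the policy name are assumed for illustration.
ALTER TABLE samples ENABLE ROW LEVEL SECURITY;

CREATE POLICY buyer_reads_own_samples ON samples
  FOR SELECT
  USING (
    EXISTS (
      SELECT 1 FROM user_roles ur
      WHERE ur.user_id = auth.uid()
        AND ur.role = 'buyer'
    )
    AND samples.buyer_id = auth.uid()
  );
```

One policy per role per table keeps each rule small enough to review in isolation.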

The sample state machine

Eight states: requested, ordered, received, with-design, with-sales, at-factory, returned, cut-for-production. Plus an absorbing archived state for written-off samples. Allowed transitions live in a small lookup table; the trigger function enforces them.

-- The transition table (excerpt)
INSERT INTO allowed_transitions (from_state, to_state) VALUES
  ('requested',   'ordered'),
  ('ordered',     'received'),
  ('received',    'with-design'),
  ('with-design', 'with-sales'),
  ('with-sales',  'at-factory'),
  ('at-factory',  'returned'),
  ('returned',    'cut-for-production'),
  ('returned',    'archived'),
  ('with-sales',  'archived');
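The enforcing trigger can be sketched along these lines. The lookup against allowed_transitions, the sample_events audit insert, and the RAISE EXCEPTION behaviour are from the text; the function body and the sample_events columns are a hedged reconstruction, not the production code:

```sql
-- Sketch of enforce_sample_transition(); the audit-table columns
-- (sample_id, from_state, to_state, changed_at) are assumptions.
CREATE OR REPLACE FUNCTION enforce_sample_transition()
RETURNS trigger AS $$
BEGIN
  -- Reject any move not whitelisted in the lookup table.
  IF NOT EXISTS (
    SELECT 1 FROM allowed_transitions
    WHERE from_state = OLD.state AND to_state = NEW.state
  ) THEN
    RAISE EXCEPTION 'illegal transition % -> %', OLD.state, NEW.state;
  END IF;

  -- Append one audit row per successful transition.
  INSERT INTO sample_events (sample_id, from_state, to_state, changed_at)
  VALUES (NEW.id, OLD.state, NEW.state, now());

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sample_transition
  BEFORE UPDATE OF state ON samples
  FOR EACH ROW EXECUTE FUNCTION enforce_sample_transition();
```

Running BEFORE UPDATE means an illegal move aborts the write itself, so the row and the audit log can never disagree.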

Comments and attachments

Each sample has its own page with a comment thread (sample_comments, RLS-scoped) and a photo gallery (Supabase Storage). Comments are realtime — any team member on the detail view sees new comments from others appear live. Photos upload via signed URLs; the bucket is private, and reads go through short-lived signed URLs that respect role boundaries.
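A sketch of how the private bucket's read boundary might be expressed. Supabase Storage access is governed by policies on the storage.objects table; the bucket name 'sample-photos' and the membership check are illustrative assumptions:

```sql
-- Hypothetical Storage policy: only authenticated staff with a role
-- may read objects in the (assumed) 'sample-photos' bucket.
CREATE POLICY staff_read_sample_photos ON storage.objects
  FOR SELECT
  USING (
    bucket_id = 'sample-photos'
    AND EXISTS (
      SELECT 1 FROM user_roles ur
      WHERE ur.user_id = auth.uid()
    )
  );
```

Signed URLs still expire on their own; the policy is the backstop if a URL leaks past its audience.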

Overdue alerts

A Vercel Cron at 07:00 UTC hits /api/cron/overdue. The handler runs one query per role with role-specific overdue rules:

SELECT s.id,
       s.title,
       s.state,
       EXTRACT(DAY FROM now() - s.state_entered_at) AS days
FROM samples s
WHERE s.state = 'at-factory'
  AND s.state_entered_at < now() - interval '14 days'
ORDER BY days DESC;

The result feeds a React Email template, sent via Resend to the role’s mailbox. Each role gets only what they can act on. No CC, no broadcast.

Analytics

Two materialised views back the directors’ dashboards. Refreshed via pg_cron at 02:00 UTC nightly with REFRESH MATERIALIZED VIEW CONCURRENTLY, so reads are never blocked.

CREATE MATERIALIZED VIEW mv_spend_by_supplier AS
SELECT supplier_id,
       date_trunc('month', received_at) AS month,
       COUNT(*) AS sample_count,
       SUM(unit_cost * quantity) AS total_spend,
       SUM(CASE WHEN state = 'archived'
                THEN unit_cost * quantity ELSE 0 END) AS write_off
FROM samples
GROUP BY supplier_id, date_trunc('month', received_at);

CREATE UNIQUE INDEX ON mv_spend_by_supplier (supplier_id, month);

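The nightly refresh can be scheduled with pg_cron along these lines. The 02:00 UTC time, the CONCURRENTLY refresh, and both view names appear in the text; the job name is an illustrative assumption:

```sql
-- Illustrative pg_cron job for the nightly 02:00 UTC refresh.
SELECT cron.schedule(
  'refresh-analytics-views',   -- assumed job name
  '0 2 * * *',
  $$
    REFRESH MATERIALIZED VIEW CONCURRENTLY mv_spend_by_supplier;
    REFRESH MATERIALIZED VIEW CONCURRENTLY mv_spend_by_category;
  $$
);
```

The unique index above is what makes CONCURRENTLY possible — Postgres requires one to diff the old and new contents without taking an exclusive lock.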
Stack and reasoning

Why this, not that.

Next.js 14 App Router

Why we picked it

Server components for read-heavy dashboards keep the JavaScript bundle small. Server Actions handle every state transition with full type safety from form to database.

What it replaced

A SPA + REST API would have doubled the codebase and added a stale-data class of bugs. RSC + server actions removes the need for client-side data fetching libraries entirely.

Supabase (Postgres 15 + Auth + Realtime + Storage)

Why we picked it

One vendor, one bill, one auth boundary. RLS policies live next to the data they protect. Realtime is a Postgres logical-replication consumer, not a separate event bus.

What it replaced

Hand-rolled Postgres + NextAuth + S3 + a websocket server is four moving parts to maintain. For a team of one shipping in six weeks, the trade is not close.

Postgres Row-Level Security

Why we picked it

Twelve roles, explicit per-table policies. The application cannot leak data even with a buggy route handler. Policies are DDL, version-controlled, code-reviewed, auditable.

What it replaced

Application-layer auth scattered across server actions is a bug-magnet. One forgotten check is a privacy incident.

Materialised views for analytics

Why we picked it

Two views (mv_spend_by_supplier, mv_spend_by_category) refresh nightly via pg_cron. Dashboards query the views; queries are sub-200ms and isolated from the operational hot path.

What it replaced

Live aggregation against the events table on every dashboard load put real strain on the database. CONCURRENTLY-refreshed materialised views were the cheapest effective fix.

Vercel Cron + Resend for overdue emails

Why we picked it

Cron at 07:00 UTC; one route handler builds the per-role digest from a SQL query and ships it via Resend. Idempotent, observable, free at this scale.

What it replaced

A separate scheduler service for one daily job would be over-engineered. Vercel Cron is exactly the right granularity.

Recharts for the analytics dashboards

Why we picked it

Composable React primitives, SVG output, no canvas surprises. Renders identically on iPad and desktop.

What it replaced

D3 directly is more powerful but expensive to maintain. Chart.js renders to canvas, which is worse for accessibility.

Integrations

What it talks to.

Resend

Transactional email. Welcome emails, overdue digests, weekly director summaries. React Email templates compiled at deploy time.

Client ERP (nightly)

Cost data flows in via a nightly CSV upload to a private S3 bucket; a cron handler upserts into samples on the cost columns only.
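The cost-column upsert could be sketched as below. unit_cost and quantity are columns the analytics view already references; erp_costs_staging is an assumed staging table loaded from the nightly CSV, not a name from the source:

```sql
-- Hedged sketch: merge staged ERP costs into samples, touching
-- only the cost columns. erp_costs_staging is an assumed table.
INSERT INTO samples (id, unit_cost, quantity)
SELECT sample_id, unit_cost, quantity
FROM erp_costs_staging
ON CONFLICT (id) DO UPDATE
  SET unit_cost = EXCLUDED.unit_cost,
      quantity  = EXCLUDED.quantity;
```

Keeping the write scoped to two columns means a malformed ERP file can never clobber a sample's state or audit history.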

Sentry

Error reporting only. Source maps uploaded from the Vercel build. PII scrubbed in beforeSend.

Performance and observability

What we watch.

~120ms
Realtime delivery latency, p50
~180ms
Dashboard query p95 (mat views)
~700ms
Sample detail page TTFB

We watch three signals: realtime delivery latency (Supabase metrics), p95 query duration (Postgres pg_stat_statements), and 5xx rate (Vercel + Sentry). Alarms fire to email if p95 doubles week-over-week or if the 5xx rate exceeds 0.5 percent of requests for ten consecutive minutes. We have not yet had a real alarm.

Future direction

Where it goes next.

Supplier scorecards

Composite metrics (turnaround, reject rate, write-off rate) per supplier, plotted over time. Already prototyped on the existing materialised views.

Photo similarity search

pgvector + a small embedding model to flag near-duplicate samples across buyers in the same season. Low-priority but high-value when it hits.

Mobile camera capture

PWA install prompt + getUserMedia for one-tap photo upload from the warehouse floor. The existing storage layer already handles it.

ERP write-back

Currently the ERP feeds in cost data; eventually approved-for-production rows could push back the other way. Requires the client’s ERP vendor to expose an API.

Want the engineering record?

The whitepaper covers architecture, RLS policy design, the implementation timeline, and the lessons learned in detail.