Whitepaper · Sample Lifecycle Tracker

Tracking every sample across a buying team.

Engineering record for an enterprise sample lifecycle platform delivered to an anonymised UK fashion retail client. State-machine data model, twelve-role RLS, realtime sync, and analytics on materialised Postgres views.

Next.js 14 · Supabase Postgres · RLS · Realtime · Vercel Cron · Recharts

12 user roles
8 canonical states
0 samples lost since launch
~6 weeks build to launch

Executive summary

An anonymised UK fashion retail buyer had no shared view of where samples were at any given moment. We replaced three disjoint systems (a buyers’ spreadsheet, a WhatsApp logistics group, a paper shelf-tag system) with a single Next.js application backed by Postgres 15 on Supabase, an eight-state lifecycle machine, row-level security policies spanning twelve roles, realtime channel sync, and a nightly-refreshed analytics layer on materialised views. Six-figure seasonal sample spend went from after-the-fact unknown to live, drillable, and actionable.

01 Background

The client is a UK-based fashion retailer with multiple buying teams running parallel buys across menswear, womenswear, and accessories. Each season produces several hundred physical samples: garments, swatches, hardware mock-ups, and finished prototypes. Samples are commissioned, photographed, costed, sent to factories for spec sign-off, returned, and either approved for production or written off.

The pre-existing toolchain was three disconnected systems. A shared Excel workbook on OneDrive held the master sample list, but only some buyers updated it. A WhatsApp group held logistics conversations (“courier picked up sample 4421 today”) but had no structure beyond chronology. Physical shelves with handwritten tags held the actual samples, with a paper notebook tracking incoming and outgoing movement.

Three sources of truth meant zero sources of truth. The ops director described the situation as “running a casino without a cashier’s book”.

02 The problem in detail

We quantified the pain across four axes during the discovery week.

Symptom                                      Pre-platform measurement
Samples lost or unfindable per season        ~14 (mid four figures of spend each)
Time to answer “where is sample X?”          5 to 45 minutes
Overdue factory returns discovered late      ~2 per week, average 11 days late
End-of-season analytics turnaround           4 to 6 working days
Near-duplicate orders by different buyers    3 to 5 per season

The financial exposure was non-trivial. A typical season ran into the high five figures of total sample cost. A 3 to 5 percent loss rate translated into thousands of pounds of avoidable write-off, plus the opportunity cost of sample-driven decisions made on incomplete data.

03 Goals and non-goals

In scope

  • Single source of truth for every sample across its full lifecycle
  • Per-role dashboards with realtime updates across the team
  • Mandatory state transitions with audit history
  • Automated overdue alerts scoped per role
  • Live spend analytics by buyer, supplier, category, and season
  • Photograph and document attachment per sample
  • Comment threads pinned to each sample, durable across years

Explicitly out of scope

  • Production order tracking. The platform stops at “cut for production”. The client’s ERP handles production and inventory.
  • Supplier portal. Suppliers do not log in. They communicate via existing email and phone channels; the team logs the outcome.
  • Mobile app. A responsive web app on iPad and phone was sufficient. Native apps were a 5x cost multiplier for a marginal UX gain.
  • Replacing the ERP. Costing pulls from the ERP via a nightly export. We did not try to replace it.
  • Customer-facing anything. Internal only. Authenticated users only.

04 Architecture

One Next.js 14 application, App Router, deployed on Vercel. One Supabase project (Postgres 15, Auth, Storage, Realtime). One Resend account for transactional email. No separate API server, no message queue, no microservices. The whole thing fits in a single repo with a single deploy target.

System diagram

┌──────────────────────────────────────────────────────────────┐
│  iPad / desktop / phone clients                              │
│    ↓ Supabase auth cookie                                    │
│  Vercel Edge → Next.js (App Router, RSC + Server Actions)    │
│    ↓ supabase-js (RLS enforced, role-aware queries)          │
│  Postgres 15 on Supabase                                     │
│    ├── samples, sample_events, sample_comments               │
│    ├── users, user_roles (12 roles)                          │
│    ├── mv_spend_by_supplier  (refreshed nightly)             │
│    └── mv_spend_by_category  (refreshed nightly)             │
│    ↓ realtime channels                                       │
│  Connected clients receive INSERT/UPDATE events live         │
│                                                              │
│  Vercel Cron (07:00 UTC daily)                               │
│    → /api/cron/overdue → Resend per-role digest emails       │
│  Vercel Cron (02:00 UTC nightly)                             │
│    → REFRESH MATERIALIZED VIEW CONCURRENTLY ...              │
│                                                              │
│  Supabase Storage                                            │
│    └── sample photos + supplier docs (RLS-aware signed URLs) │
└──────────────────────────────────────────────────────────────┘

Component breakdown

  • App layer: Next.js App Router. Server components for read paths (per-role dashboards, sample detail), server actions for every state transition. Zero REST endpoints; all mutations are typed server actions invoked from client components.
  • Data layer: Postgres 15 with native foreign keys, check constraints, and a partial unique index ensuring a sample can only be in at-factory with one factory at a time.
  • Auth: Supabase Auth, magic-link email by default. SSO was not required.
  • Authorisation: Postgres RLS policies keyed off auth.uid() joined with user_roles. The application never decides who can read what.
  • Realtime: Supabase realtime channels per sample_id for the detail view, and a global channel for dashboard counters.
  • Analytics: Two materialised views refreshed nightly via pg_cron. Live dashboards query the views, not the base tables, keeping report queries off the operational hot path.

Sample state machine

requested ─→ ordered ─→ received ─→ with-design ─┐
                                                  ├─→ at-factory ─→ returned ─→ cut-for-production
                                  with-sales ─────┘                       └────→ archived

Allowed transitions are enforced by a trigger function that validates each state change against an allowed_transitions lookup table:
  CREATE FUNCTION enforce_sample_transition() RETURNS trigger AS $$
  BEGIN
    IF NOT EXISTS (
      SELECT 1 FROM allowed_transitions
      WHERE from_state = OLD.state AND to_state = NEW.state
    ) THEN
      RAISE EXCEPTION 'Invalid transition: % -> %', OLD.state, NEW.state;
    END IF;
    INSERT INTO sample_events (sample_id, from_state, to_state, actor_id)
    VALUES (NEW.id, OLD.state, NEW.state, auth.uid());
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;
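
The lookup table and trigger wiring are not shown in the excerpt above. A minimal sketch of both, with the trigger name assumed, might be:

-- Legal state transitions live in a plain lookup table.
CREATE TABLE allowed_transitions (
  from_state text NOT NULL,
  to_state   text NOT NULL,
  PRIMARY KEY (from_state, to_state)
);

-- Run the check before any state change is written.
CREATE TRIGGER trg_enforce_sample_transition
  BEFORE UPDATE OF state ON samples
  FOR EACH ROW
  WHEN (OLD.state IS DISTINCT FROM NEW.state)
  EXECUTE FUNCTION enforce_sample_transition();

Because the function runs BEFORE UPDATE, the audit row written to sample_events commits in the same transaction as the state change itself.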

05 Key technical decisions

Postgres over MongoDB

The data has hard relational shape. A sample has one supplier, many events, many comments, one buyer, one factory at a time. Foreign keys, ON DELETE CASCADE, and check constraints are doing real work here. RLS at the row level is a Postgres feature with no clean MongoDB equivalent. Aggregations for analytics (sum spend by supplier, percentiles by category) are built into SQL. Any document store would have been a step back.
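
A condensed sketch of that shape; the suppliers and factories tables, the dispatch table, and column names beyond those mentioned in this document are illustrative rather than the production DDL:

CREATE TABLE samples (
  id          uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  state       text NOT NULL DEFAULT 'requested',  -- one of the canonical lifecycle states
  buyer_id    uuid NOT NULL REFERENCES users (id),
  supplier_id uuid NOT NULL REFERENCES suppliers (id)
);

-- Comments disappear with their sample.
CREATE TABLE sample_comments (
  id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  sample_id uuid NOT NULL REFERENCES samples (id) ON DELETE CASCADE,
  author_id uuid NOT NULL REFERENCES users (id),
  body      text NOT NULL,
  posted_at timestamptz NOT NULL DEFAULT now()
);

-- One open factory dispatch per sample at a time, via a partial unique index.
CREATE TABLE factory_dispatches (
  id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  sample_id   uuid NOT NULL REFERENCES samples (id) ON DELETE CASCADE,
  factory_id  uuid NOT NULL REFERENCES factories (id),
  returned_at timestamptz
);

CREATE UNIQUE INDEX one_open_dispatch_per_sample
  ON factory_dispatches (sample_id)
  WHERE returned_at IS NULL;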

RLS over application-layer auth

Twelve roles times eight states times read/write/comment is a permissions matrix that goes wrong if you express it as a forest of if (user.role === ...) branches. Encoding it as RLS policies puts the truth next to the data, makes it auditable as DDL in source control, and means a future bug in a route handler cannot leak data. Example:

CREATE POLICY "factory_liaison_sees_only_at_factory"
  ON samples FOR SELECT
  USING (
    EXISTS (
      SELECT 1 FROM user_roles ur
      WHERE ur.user_id = auth.uid()
        AND ur.role = 'factory_liaison'
    )
    AND state = 'at-factory'
  );
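
The write side follows the same pattern. A hypothetical policy letting buyers update only their own samples (the buyer_id column and role name here are illustrative, not the production DDL):

CREATE POLICY "buyers_update_own_samples"
  ON samples FOR UPDATE
  USING (
    buyer_id = auth.uid()
    AND EXISTS (
      SELECT 1 FROM user_roles ur
      WHERE ur.user_id = auth.uid()
        AND ur.role = 'buyer'
    )
  );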

Server-rendered (RSC) over SPA

The dominant page is a list view filtered by role and sortable by status, then a per-sample detail. Server components render this faster, ship less JavaScript to old iPads on the buying floor, and avoid an entire class of stale-data bugs. Client components are reserved for the realtime parts (live counters, comment streams) and form interactions.

Realtime channels over polling

Supabase realtime is a Postgres logical-replication consumer that pushes changes over WebSockets. We subscribe per sample for the detail view and globally for dashboard counts. Bandwidth cost is trivial; perceived collaboration quality is transformed. The buyer sees the sales lead’s update before they finish typing the next message in WhatsApp. Polling every 10 seconds would have been cheaper to build but operationally worse.
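
On the database side, Supabase only emits change events for tables added to its realtime publication. A sketch of the enabling DDL (the exact statements used in this project are assumed):

-- Expose the operational tables to the realtime publication.
ALTER PUBLICATION supabase_realtime ADD TABLE samples, sample_comments;

-- Send full old-row images on UPDATE so clients can diff against local state.
ALTER TABLE samples REPLICA IDENTITY FULL;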

Materialised views over live aggregates

The directors’ analytics dashboard runs five aggregations across hundreds of thousands of event rows. Running those live each visit hammered the primary database during the first week of testing. Moving the queries onto materialised views, refreshed nightly via pg_cron at 02:00 UTC with REFRESH MATERIALIZED VIEW CONCURRENTLY, dropped dashboard load time from ~3 seconds to under 200ms and removed all analytic load from the hot path.
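
A sketch of the pattern, with illustrative aggregate columns; note that REFRESH ... CONCURRENTLY requires a unique index on the view:

CREATE MATERIALIZED VIEW mv_spend_by_supplier AS
SELECT s.supplier_id,
       count(*)     AS sample_count,
       sum(s.cost)  AS total_spend   -- cost column name assumed
FROM samples s
GROUP BY s.supplier_id;

-- A unique index lets CONCURRENTLY refresh without blocking readers.
CREATE UNIQUE INDEX ON mv_spend_by_supplier (supplier_id);

-- Nightly refresh at 02:00 UTC, scheduled with pg_cron.
SELECT cron.schedule(
  'refresh-spend-views',
  '0 2 * * *',
  $$REFRESH MATERIALIZED VIEW CONCURRENTLY mv_spend_by_supplier$$
);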

Vercel over self-hosted

The client wanted nobody to care about server administration. Vercel terminates TLS, scales automatically, runs the cron jobs, and ships logs to Sentry. Self-hosting on a tiny VPS would have saved maybe £15 a month at the cost of a maintenance burden that nobody wanted. The trade was obvious.

06 Implementation milestones

Week      Deliverable
Week 0    Whiteboard sessions: state machine, role matrix, dashboard wireframes
Week 1    Schema + migrations + RLS policies. Auth wired. Skeleton pages render per role.
Week 2    Sample CRUD with transitions. Audit log populating. Comments live.
Week 3    Realtime channels for detail and dashboard. Photograph upload via Supabase Storage.
Week 4    Overdue cron + Resend digest emails. Per-role onboarding emails.
Week 5    Analytics dashboards on materialised views. Recharts wiring.
Week 6    UAT with all twelve roles. Fixes, polish, training note. Production cut-over.

07 Results

0 samples lost since launch
~10s time to locate any sample
90% drop in late factory returns
live spend analytics

Sample loss has gone from ~14 per season to zero across multiple seasons in production. Overdue factory returns dropped roughly 90 percent because the responsible role gets a daily email of exactly what they need to chase. The directors’ weekly ops meeting now runs from the dashboard rather than from a Monday-morning spreadsheet rebuild.

The platform was not designed to be a CFO tool, but the spend analytics layer ended up changing the buying team’s vendor conversations. The team now negotiates with suppliers using live data: “you’re 18 percent of our sample spend, and your reject rate is double the average”. That is a different conversation from the one they had before.

08 Lessons learned

The state machine is the product

Two whiteboard days defining the eight states paid back across every screen. Every UI element, every permission check, every report comes out of that diagram. If we had started by designing screens, we would still be reshuffling them.

Trigger-enforced transitions catch real bugs

The first month, the trigger function caught two genuinely wrong transition attempts driven by client-side state desync. Without the trigger, those would have been silent data corruption. With it, the user got a clear error and the engineer got a clear stack trace.

Materialised views are an underrated tool

The first cut ran analytics live and brought the database to a crawl during demos. Moving to nightly-refreshed materialised views took an afternoon and made every dashboard instant. For analytics that does not need to be up-to-the-second, this pattern is hard to beat.

Targeted alerts are the only alerts that get acted on

The overdue email goes only to the role that can fix the problem. No CC, no escalation chain, no broadcast. The factory liaison sees only the at-factory overdues. The buyer sees only their own returned-but-unprocessed samples. Acted-on rate is high. The same email broadcast to the whole team would have been muted within a fortnight.
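
As a concrete sketch of what one of those scoped queries might look like — the dispatch table, due-date column, and sample name here are illustrative, not the production schema:

-- At-factory samples past their expected return date, for the factory-liaison digest.
SELECT s.id,
       f.name AS factory,
       d.expected_back_on,
       current_date - d.expected_back_on AS days_overdue
FROM samples s
JOIN factory_dispatches d ON d.sample_id = s.id AND d.returned_at IS NULL
JOIN factories f          ON f.id = d.factory_id
WHERE s.state = 'at-factory'
  AND d.expected_back_on < current_date
ORDER BY days_overdue DESC;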

Realtime changes the social dynamic of a team

Once everyone sees updates instantly, the “did you see my update?” backchannel on Slack disappears. People stop hedging their data behind their own private spreadsheet. That cultural shift was unplanned and arguably the most valuable second-order effect.

09 Conclusion

A focused 6-week build replaced three disjoint systems with a single Postgres-backed Next.js application. The architecture was conventional: RSC + server actions, RLS for authorisation, materialised views for analytics, Supabase realtime for collaboration. The discipline was in the modelling — eight states, twelve roles, explicit transitions, audit by default. Saved write-offs alone paid back the build inside the first quarter, and the buying team now operates from a different epistemic position than it did a year ago.