
Cloudflare Workers vs Vercel Functions in 2026: a real-world cost and latency breakdown

I run the same Next.js workload on Vercel and a hand-rolled Workers stack. The pricing model is the most important difference, not the latency. Here are the numbers.

8 March 2026 · 14 min read

I run the same workload on both: a Next.js app with a few API routes, an LLM proxy, and a handful of cron jobs. Here is what I learned.

Latency

[Chart: p50 cold-start latency (ms), London POP. Source: my own bench with k6, 50 VUs, 5 minutes.]

The cold-start gap between Workers and Vercel Edge is real but small in practice (14 ms vs 27 ms at p50). Both feel instant. The big gap is to Vercel's Lambda-backed functions (245 ms), which are the default whenever a route uses any Node API the edge runtime does not support. Forget to set `export const runtime = 'edge'` and you fall off this cliff[2].
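For concreteness, here is a minimal sketch of a hypothetical App Router route pinned to the edge runtime (the route path and payload are made up for illustration):

```typescript
// Hypothetical Next.js App Router route (app/api/hello/route.ts).
// Without this export the route falls back to the Node (Lambda)
// runtime and picks up the ~245 ms cold start.
export const runtime = 'edge';

export async function GET(request: Request): Promise<Response> {
  // The edge runtime exposes only Web-standard APIs (fetch, Request,
  // Response, crypto) -- importing fs, net, etc. forces Lambda.
  const url = new URL(request.url);
  return Response.json({ path: url.pathname });
}
```

Note that the opt-in is per route, so one forgotten export on a hot path is enough to pay the Lambda cold start there.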

Cost

[Chart: monthly cost vs request volume (millions/month). Source: public price lists, Q1 2026.]

This is where the choice gets made. Workers' pricing is "per request + CPU-ms", Vercel Pro is "per invocation + bandwidth + edge function execution"[1][2]. At low scale they are roughly comparable. At 100 million requests a month, Workers is roughly 8x cheaper.

That sounds extreme until you realise Vercel's pricing includes the build minutes, the preview deployments, the analytics, the deploy infrastructure, the support. Workers is "we'll route a request to your code." You pay for the rest separately if you want it.
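To make the two pricing curves concrete, here is a toy cost model. The unit prices and the memory size are placeholder assumptions for illustration only (real rate cards include free tiers, included quotas, and bandwidth), so check the pricing pages[1][2] before acting on any of it:

```typescript
interface Usage {
  requestsM: number;   // requests per month, in millions
  cpuMsPerReq: number; // average CPU milliseconds per request
}

// Workers-style bill: flat fee per million requests plus a fee per
// million CPU-milliseconds. Prices are ASSUMED placeholders.
function workersCost(u: Usage, perMReq = 0.3, perMCpuMs = 0.02): number {
  const cpuMsM = u.requestsM * u.cpuMsPerReq; // millions of CPU-ms
  return u.requestsM * perMReq + cpuMsM * perMCpuMs;
}

// Vercel-style bill: fee per million invocations plus GB-hours of
// execution. Again, placeholder prices and memory size.
function vercelCost(u: Usage, perMInv = 0.6, perGbHr = 0.18, memGb = 1.7): number {
  const execHours = (u.requestsM * 1e6 * u.cpuMsPerReq) / 3.6e6; // ms -> hours
  return u.requestsM * perMInv + execHours * memGb * perGbHr;
}

// High-volume, low-CPU workload: the per-request term dominates,
// which is exactly the curve the chart above is showing.
const proxy: Usage = { requestsM: 100, cpuMsPerReq: 5 };
console.log(workersCost(proxy).toFixed(2), vercelCost(proxy).toFixed(2));
```

The point of the model is not the absolute numbers but the shape: the Workers bill grows almost purely with request count, while the Vercel bill also grows with how long and how fat each invocation is.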

Feature parity

Workers vs Vercel Functions feature parity, May 2026

| Spec | Cloudflare Workers | Vercel Functions |
| --- | --- | --- |
| Cold start (p50) | 14 ms | 27 ms (edge) / 245 ms (lambda) |
| CPU time per request | 30 s on Paid plan | 60 s (Pro) / 300 s (Enterprise) |
| Pricing model | Per request + CPU-ms | Per execution + invocation + bandwidth |
| POPs | 335+ | 40+ regions (lambda) |
| Native key-value | KV (eventual) | Edge Config (immutable per deploy) |
| Native database | D1 (SQLite) | Postgres (Vercel Storage) |
| Native object store | R2 (zero egress) | Vercel Blob (egress charged) |
| Cron triggers | Yes | Yes (Pro+) |
| Local emulation | Wrangler (good) | next dev (excellent) |
| Vendor lock | Medium (Workers API) | Low (Lambda-compatible) |

Notable differences:

  • R2 has zero egress fees, Vercel Blob charges egress. For media-heavy workloads this changes the maths.
  • D1 is SQLite at the edge with eventual consistency; Vercel Postgres is a managed primary. If you need transactions across regions, pick Vercel.
  • Workers runs in 335+ POPs[3], Vercel runs functions in fewer regions but with smarter routing.

Vendor lock

This is underrated. Workers code uses the Workers Runtime API. If you write `addEventListener('fetch')`-style handlers or use Cloudflare bindings, moving off Cloudflare means a rewrite. Vercel Functions are mostly Lambda-compatible: your handler is a plain Node function, and you can move it to AWS Lambda or Fly.io with minimal work.
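One way to contain the lock-in is to keep the logic in plain Request-to-Response functions and quarantine the Workers-specific shape in a thin adapter. A sketch, with the KV binding simplified to an interface (the real types come from @cloudflare/workers-types, and the binding name here is made up):

```typescript
// Minimal stand-in for a Cloudflare KV binding (simplified).
interface KvLike {
  get(key: string): Promise<string | null>;
}

// Portable core: plain Request -> Response logic with dependencies
// injected, so it runs on Workers, Lambda, or anything with fetch.
export async function handle(req: Request, kv: KvLike): Promise<Response> {
  const greeting = (await kv.get('greeting')) ?? 'hello';
  return new Response(greeting, { status: 200 });
}

// Workers-specific shell: only this adapter knows about the runtime's
// export shape and bindings, so only it needs rewriting on migration.
export default {
  async fetch(req: Request, env: { GREETINGS: KvLike }): Promise<Response> {
    return handle(req, env.GREETINGS);
  },
};
```

The adapter is a few lines per platform; the core never changes.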

Where I land

For SarmaLink-AI's chat backend (low requests, high CPU per request) I use Vercel because the dev experience and Postgres integration are better. For a hypothetical asset proxy or webhook router (high requests, low CPU per request) I use Workers because the pricing makes it 5-10x cheaper.

The decision is not "which is better." It is "which pricing curve fits your workload."

About the data

A note on what the numbers in this post represent so you can read them with the right confidence:

  • "My own bench" rows are personal measurements on my own hardware. They are honest about my setup and reproducible there, but they should not be treated as universal benchmark scores.
  • Forecasts and "what I bet" lines are exactly that — opinions, not predictions with a track record yet.

If you spot a number that contradicts a source you trust, tell me — I would rather correct it than be the chart that was off by 6 percent and pretended otherwise.

References

  1. Cloudflare Workers pricing
  2. Vercel Functions pricing https://vercel.com/pricing
  3. Cloudflare 2024 network update https://blog.cloudflare.com