
Cloudflare Workers vs Vercel Edge vs Lambda@Edge

APIScout Team
Tags: cloudflare workers, vercel edge, lambda@edge, edge computing, serverless, api comparison, 2026

Edge Functions Became the Default Middleware Layer

In 2022, running code at the edge was exotic. In 2026, it's the default for authentication middleware, A/B testing, geolocation-based routing, and request transformation. Every major platform ships an edge runtime: Cloudflare Workers, Vercel Edge Functions, AWS Lambda@Edge, Fastly Compute, Netlify Edge, and Deno Deploy.

The key architectural distinction is V8 isolates vs Node.js. Cloudflare Workers and Vercel Edge use V8 isolates — the same JavaScript engine as Chrome, but without Node.js APIs. Cold starts measure in microseconds. AWS Lambda@Edge uses Node.js, which means full Node.js compatibility at the cost of 100ms–2s cold starts (INIT phase now billed as of August 2025).

TL;DR

Cloudflare Workers offers the best price-to-performance ratio at scale — 100K free requests/day, V8 isolate cold starts under 5ms, 330+ cities globally, and a March 2025 update that raised the CPU limit to 5 minutes for long-running workloads. Vercel Edge Functions is zero-config for Next.js teams — Fluid Compute (April 2025) eliminates most observable cold starts, though Vercel now recommends the Node.js runtime over Edge for most use cases. AWS Lambda@Edge is the CloudFront-native choice — full Node.js 20/22/24 runtime at 13 Regional Edge Caches, but 100ms–2s cold starts and INIT phase billing added August 2025.

Key Takeaways

  • Cloudflare Workers: 100K requests/day free; $5/month base paid ($0.30/million over 10M included); CPU 10ms (free) / 50ms–5 min (paid); 330+ cities
  • Vercel Edge: 1M executions/month free (Hobby; non-commercial only); $2/million overage on Pro; 4MB bundle; Fluid Compute since April 2025
  • Lambda@Edge: No free tier; $0.60/million requests; $0.00005001/GB-second; INIT phase billed since August 1, 2025; executes at 13 Regional Edge Caches (not all CloudFront PoPs)
  • Cold starts: Workers <5ms, Vercel Edge ~0ms (Fluid Compute), Lambda@Edge 100ms–2s
  • CPU limits: Workers 10ms (free) / 50ms paid / up to 5 min long-running (March 2025); Vercel Edge 300s wall clock; Lambda@Edge 5s (viewer) / 30s (origin)
  • Runtime: Workers/Vercel Edge = Web APIs + expanding Node.js compat; Lambda@Edge = full Node.js 20/22

Pricing Comparison

| Feature | Cloudflare Workers | Vercel Edge | Lambda@Edge |
| --- | --- | --- | --- |
| Free requests | 100K/day | 1M/month (Hobby; non-commercial) | None |
| Paid rate | $5/mo + $0.30/M over 10M | ~$2/M overage (Pro) | $0.60/M |
| CPU/duration pricing | $0.02/M CPU-ms (paid) | Included in request fee | $0.00005001/GB-s |
| Data transfer egress | Included | Included | $0.09/GB |
| Minimum spend | $5/month (Workers Paid) | $20/month (Pro) | None |

Cloudflare Workers Pricing Detail

Cloudflare Workers has two pricing models:

Workers Free (default):

  • 100,000 requests/day
  • 10ms CPU time per request
  • 128MB memory
  • No KV, Durable Objects, or R2 bindings on free tier

Workers Paid ($5/month base):

  • 10 million requests included; 30 million CPU-ms/month included
  • 50ms CPU time per request (raised from 30ms)
  • $0.30/million additional requests (overage)
  • $0.02/million CPU-milliseconds overage
  • Access to Workers KV, Durable Objects, R2, Queues, AI bindings

Long-running Workers (March 2025): the CPU time limit was raised to up to 5 minutes per request for paid workloads — a major change that enables complex compute tasks previously possible only in traditional serverless.

Workers Unbound (deprecated): The Unbound plan was discontinued and all accounts migrated to Standard pricing by March 2024. Long-running workloads now use the paid plan's 5-minute CPU cap instead.
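Putting the paid-plan rates together, the monthly bill math looks like the sketch below. The traffic figures are hypothetical; check Cloudflare's pricing page for current rates.

```typescript
// Sketch: estimate a Workers Paid monthly bill from the rates listed above.
// $5 base covers 10M requests and 30M CPU-ms; overages bill per million.
function workersPaidMonthlyCost(requests: number, cpuMs: number): number {
  const base = 5.0                                                    // $5/month base
  const requestOverage = Math.max(0, requests - 10_000_000) / 1_000_000 * 0.30
  const cpuOverage = Math.max(0, cpuMs - 30_000_000) / 1_000_000 * 0.02
  return base + requestOverage + cpuOverage
}

// 50M requests averaging 5ms CPU each (250M CPU-ms total)
console.log(workersPaidMonthlyCost(50_000_000, 250_000_000).toFixed(2)) // → "21.40"
```

At that volume the bill is $5 base + $12 request overage + $4.40 CPU overage, which is why Workers tends to win on cost at high request counts.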

Vercel Edge Functions Pricing

Vercel pricing is project/deployment-level:

  • Hobby: Free; 1M edge executions/month; non-commercial use only; 4MB bundle
  • Pro ($20/month): Includes $20 usage credit; 10M edge executions/month; ~$2/million overage; 4MB bundle; 300s execution wall clock
  • Enterprise: Custom; dedicated limits; SLA

Fluid Compute (enabled by default April 23, 2025): Replaces the old serverless container model with bytecode caching and predictive pre-warming. Effectively eliminates observable cold starts for production Pro/Enterprise deployments and bills only on Active CPU time (not provisioned time).

Vercel's December 2025 recommendation: Vercel now recommends preferring the Node.js runtime over Edge runtime for most use cases. Fluid Compute made the Node.js runtime fast enough that Edge's latency advantage has narrowed significantly — while Edge's V8 limitations (no native modules, 4MB bundle cap, no filesystem) remain pain points.
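In practice, following that recommendation is a one-line runtime switch per route. A hedged sketch (the file path and handler body are illustrative, not from Vercel's docs):

```typescript
// app/api/report/route.ts — hypothetical route opting into the Node.js
// runtime, which Fluid Compute now serves without the old cold-start penalty.
export const runtime = 'nodejs' // vs 'edge'; unlocks native modules and node: APIs

export async function GET() {
  // Node-only APIs are available here, unlike in the Edge runtime
  const { createHash } = await import('node:crypto')
  const etag = createHash('sha256').update('report-v1').digest('hex').slice(0, 16)
  return Response.json({ etag })
}
```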

Edge Middleware (the middleware.ts file) runs on Vercel's edge network at every request globally — no additional configuration.

Lambda@Edge Pricing

Lambda@Edge pricing is layered on top of CloudFront:

  • Requests: $0.60/million (vs Lambda's $0.20/million — 3x more expensive)
  • Duration: $0.00005001/GB-second (vs Lambda's $0.0000166667/GB-s — 3x more expensive)
  • CloudFront requests: $0.0085–$0.0120/10K HTTP requests (separate charge)
  • Data transfer: $0.09/GB (CloudFront pricing applies)
  • INIT phase billing (August 1, 2025): Cold start initialization is now billed at the same duration rate as execution — can increase costs 10–50% for cold-start-heavy workloads

Lambda@Edge's higher per-invocation price reflects the global replication cost — your function is replicated and executes at 13 Regional Edge Caches (a subset of CloudFront's 750+ PoPs for caching; code only runs at those 13 locations). Functions must be authored in us-east-1 and are auto-replicated by AWS. Note: Lambda@Edge does not support Lambda layers, VPC access, ARM64, or container images.
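The same back-of-envelope math for Lambda@Edge, folding the post-August-2025 INIT billing into the billed duration (all inputs are hypothetical):

```typescript
// Sketch: monthly Lambda@Edge cost from the rates above. Since August 2025
// the INIT (cold start) phase bills at the same GB-second rate, so it is
// folded into avgBilledMs here.
function lambdaAtEdgeMonthlyCost(
  invocations: number,
  avgBilledMs: number,  // execution + INIT duration, post-Aug 2025
  memoryMB = 128
): number {
  const requestCost = invocations / 1_000_000 * 0.60
  const gbSeconds = invocations * (avgBilledMs / 1000) * (memoryMB / 1024)
  return requestCost + gbSeconds * 0.00005001
}

// 10M invocations, 50ms billed at 128MB
console.log(lambdaAtEdgeMonthlyCost(10_000_000, 50).toFixed(2)) // → "9.13"
```

Note this excludes the separate CloudFront request and data transfer charges, which apply on top.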

Architecture and Runtime Differences

V8 Isolates vs Node.js

// Cloudflare Workers — Service Worker API + Web Standard APIs
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url)

    // KV storage — globally replicated key/value
    const cachedResponse = await env.MY_KV.get(url.pathname)
    if (cachedResponse) {
      return new Response(cachedResponse, {
        headers: { 'X-Cache': 'HIT', 'Content-Type': 'application/json' }
      })
    }

    // Geolocation — built into every request
    const country = request.cf?.country
    const city = request.cf?.city
    const continent = request.cf?.continent

    // A/B testing based on country
    const variant = country === 'US' ? 'A' : 'B'

    const response = await fetch(
      `https://api.internal.example.com/data?variant=${variant}`,
      { cf: { cacheEverything: true, cacheTtl: 3600 } }
    )

    const data = await response.json()
    ctx.waitUntil(env.MY_KV.put(url.pathname, JSON.stringify(data), { expirationTtl: 3600 }))

    return Response.json({ ...data, country, variant })
  }
}

Vercel Edge: Next.js Native

// middleware.ts — runs at the edge before every request
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'
import { verifyJWT } from '@/lib/auth-edge'

export const config = {
  matcher: ['/dashboard/:path*', '/api/protected/:path*']
}

export async function middleware(request: NextRequest) {
  const token = request.cookies.get('auth-token')?.value

  if (!token) {
    return NextResponse.redirect(new URL('/login', request.url))
  }

  try {
    const payload = await verifyJWT(token) // Edge-compatible JWT verification

    // Clone headers to add user context for downstream
    const requestHeaders = new Headers(request.headers)
    requestHeaders.set('x-user-id', payload.sub)
    requestHeaders.set('x-user-role', payload.role)

    // Geolocation from Vercel headers
    // (Next.js 15 removed request.geo; use geolocation() from @vercel/functions there)
    const country = request.geo?.country
    if (country === 'RU' || country === 'CN') {
      return NextResponse.json({ error: 'Region blocked' }, { status: 451 })
    }

    return NextResponse.next({ request: { headers: requestHeaders } })
  } catch {
    return NextResponse.redirect(new URL('/login', request.url))
  }
}

// Next.js Edge API Route — same runtime
export const runtime = 'edge'

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url)
  const id = searchParams.get('id')
  if (!id) {
    return Response.json({ error: 'Missing id' }, { status: 400 })
  }

  // Edge-compatible: no Node.js APIs like fs or node:crypto (use Web Crypto instead)
  const hash = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(id))
  const hashHex = Array.from(new Uint8Array(hash))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('')

  return Response.json({ id, hash: hashHex })
}

Lambda@Edge: Full Node.js

// Lambda@Edge — Node.js 20 runtime, full AWS SDK available
import { CloudFrontRequestHandler } from 'aws-lambda'

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request
  const headers = request.headers

  // Full Node.js APIs are available (file system, crypto, etc.). AWS SDK
  // calls (e.g. SSM in us-east-1) work too, but they add cross-region
  // latency from edge PoPs, so keep them out of the hot path.

  // Authentication header check
  const authHeader = headers['authorization']?.[0]?.value
  if (!authHeader) {
    return {
      status: '401',
      statusDescription: 'Unauthorized',
      body: JSON.stringify({ error: 'Missing authorization' })
    }
  }

  // Modify request before it reaches the origin
  request.headers['x-internal-request'] = [{ key: 'X-Internal-Request', value: 'true' }]

  // A/B test via cookie
  const abCookie = headers.cookie?.[0]?.value
  if (!abCookie?.includes('variant=')) {
    const variant = Math.random() < 0.5 ? 'A' : 'B'
    request.headers.cookie = [{
      key: 'Cookie',
      value: abCookie ? `${abCookie}; variant=${variant}` : `variant=${variant}`
    }]
  }

  return request // Pass modified request to origin
}

Lambda@Edge function types (different CloudFront triggers):

  • Viewer Request: runs before cache check; all requests hit it; most expensive
  • Origin Request: runs only on cache miss; reaches back to origin
  • Origin Response: modify origin response before caching
  • Viewer Response: modify response before returning to client
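To make the trigger distinction concrete, here is a hedged sketch of an Origin Response function (local types stand in for `aws-lambda`'s CloudFront shapes) that stamps `Cache-Control` on the miss path, so the modified header is cached with the object and served to all subsequent viewers without re-invoking the function:

```typescript
// Minimal local types standing in for aws-lambda's CloudFront event shapes
type CfHeaders = Record<string, { key: string; value: string }[]>
type CfResponseEvent = {
  Records: [{ cf: { response: { status: string; headers: CfHeaders } } }]
}

// Origin Response trigger: runs once per cache miss, not per viewer request
export const handler = async (event: CfResponseEvent) => {
  const response = event.Records[0].cf.response
  // The header change is baked into the cached copy
  response.headers['cache-control'] = [
    { key: 'Cache-Control', value: 'public, max-age=31536000, immutable' }
  ]
  return response
}
```

This is also why Origin triggers are cheaper in practice: the same logic on a Viewer trigger would bill on every request, cached or not.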

Runtime Limits and Capabilities

| Capability | Cloudflare Workers | Vercel Edge | Lambda@Edge |
| --- | --- | --- | --- |
| CPU time limit | 10ms free / 50ms paid / up to 5 min (long-running, paid) | 300s wall clock | 5s (viewer) / 30s (origin) |
| Memory | 128MB | Not configurable (edge runtime) | 128MB max (viewer) / up to 10,240MB (origin) |
| Max bundle size | 3MB compressed (free) / 10MB (paid) | 4MB | 1MB compressed (viewer) / 50MB compressed (origin) |
| Max execution time | 30s (standard) / 5 min (long-running, paid) | 300s | 5s (viewer) / 30s (origin) |
| Node.js APIs | Partial (node: compat flag) | Partial | Full Node.js 20/22/24 |
| File system | ❌ | ❌ | ✅ (/tmp) |
| Outbound TCP | ✅ (Sockets API) | ❌ | ✅ (Node.js net) |
| WebSockets | ✅ (Durable Objects) | ❌ | ❌ |
| KV storage | ✅ (Workers KV) | ✅ (Edge Config) | ❌ (external only) |
| Cold starts | <5ms (V8 isolates) | ~0ms (Fluid Compute, April 2025) | 100ms–2s |

Global Distribution

| Platform | PoP Count | Notes |
| --- | --- | --- |
| Cloudflare Workers | 330+ cities | Updated September 2025; 334 PoPs; deploys to every location |
| Vercel Edge | 126+ PoPs / 20 compute regions | Anycast routing to nearest PoP → nearest compute region |
| Lambda@Edge | 13 Regional Edge Caches | CloudFront has 750+ PoPs for caching; Lambda code runs at only 13 of them |

Cloudflare's 330+ city PoPs are the densest edge compute network. Lambda@Edge's "450+ PoPs" figure that circulates widely refers to CloudFront's caching network (now 750+) — Lambda@Edge functions only execute at 13 Regional Edge Caches, making it geographically less distributed than Cloudflare or even Vercel's 20 compute regions for function execution.

Use Case Comparison

| Use Case | Best Choice | Why |
| --- | --- | --- |
| Auth middleware (JWT check) | Vercel Edge or Workers | Both support Web Crypto API for JWT verification |
| A/B testing | Cloudflare Workers | request.cf.country, cookie manipulation, KV for experiment state |
| Geo-blocking | All three | Workers: cf.country; Vercel: request.geo; Lambda@Edge: CloudFront-Viewer-Country header |
| Image optimization | Cloudflare Images or Lambda@Edge | Workers can transform via Cloudflare Images; Lambda@Edge works with S3/CloudFront |
| SSR at the edge | Workers (full SSR) or Vercel Edge (Next.js) | Workers supports full response streaming; Vercel Edge is Next.js-native |
| Complex business logic | Lambda@Edge | Full Node.js, large bundles, longer execution time |
| API proxy / rate limiting | Cloudflare Workers | Workers KV for rate limit counters, Durable Objects for exact limits |
| Bot detection | Cloudflare Workers | Cloudflare Bot Management available as binding |
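The rate-limiting row relies on KV counters. A hedged sketch of a fixed-window limiter against a KV-style binding (the `KVLike` interface is a stand-in for a real `env.MY_KV`; names are illustrative):

```typescript
// Stand-in for the subset of the Workers KV binding API this sketch uses
interface KVLike {
  get(key: string): Promise<string | null>
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>
}

// Fixed-window rate limiter: one counter key per client per time window.
// Returns true if the request is allowed, false if over the limit.
export async function checkRateLimit(
  kv: KVLike, clientId: string, limit = 100, windowSecs = 60
): Promise<boolean> {
  const window = Math.floor(Date.now() / 1000 / windowSecs)
  const key = `rl:${clientId}:${window}`
  const count = parseInt((await kv.get(key)) ?? '0', 10)
  if (count >= limit) return false
  // TTL of two windows lets stale counters expire on their own
  await kv.put(key, String(count + 1), { expirationTtl: windowSecs * 2 })
  return true
}
```

Because Workers KV is eventually consistent, these counts are approximate under concurrent traffic from multiple PoPs, which is exactly why the table above points to Durable Objects for exact limits.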

When to Choose Each

Choose Cloudflare Workers if:

  • You need the broadest global compute coverage (330+ cities) with the lowest latency
  • Cost efficiency at high request volumes matters ($0.30/million — lower than the widely-cited stale $0.50 figure)
  • You want access to the broader Cloudflare ecosystem: KV, Durable Objects, R2, D1, Workers AI, Queues
  • You need outbound TCP sockets or WebSockets at the edge
  • Long-running compute (up to 5 min CPU per request since March 2025) matters for your use case
  • You're building a standalone edge application, not middleware for an existing framework

Choose Vercel Edge Functions if:

  • You're building with Next.js and want zero-config edge middleware
  • The middleware.ts pattern handles your use cases (auth, redirects, geolocation, headers)
  • You want the simplest path to edge deployment without leaving the Vercel ecosystem
  • Consider Vercel Node.js runtime instead: As of December 2025, Vercel recommends Node.js over Edge for most use cases — Fluid Compute has closed the cold start gap, while Node.js avoids V8's native module and bundle size restrictions

Choose Lambda@Edge if:

  • You're already on AWS with CloudFront for CDN
  • You need full Node.js 20/22 compatibility (native modules, large bundles, AWS SDK calls)
  • Your use case requires longer execution times (origin: 30s) or more memory (up to 10GB on origin)
  • You need tight integration with CloudFront behaviors, WAF, and S3 origins
  • You can tolerate higher cold start latency (100ms–2s) and the new INIT phase billing (August 2025)
  • Consider CloudFront Functions instead: For simple URL rewrites and header manipulation at $0.10/M (6x cheaper than Lambda@Edge), CloudFront Functions run at 200+ PoPs vs Lambda@Edge's 13

Track Cloudflare, Vercel, and AWS edge function API availability on APIScout.

Related: Kong vs Envoy vs Tyk vs AWS API Gateway 2026 · Supabase vs Neon vs PlanetScale 2026
