
Cloudflare Workers vs AWS Lambda@Edge vs Fastly Compute

APIScout Team

Tags: edge computing, cloudflare workers, aws lambda edge, fastly compute, serverless, cdn, edge functions, webassembly

Computing at the Edge

Traditional serverless runs in a handful of regions. When a user in Singapore hits your US-East API endpoint, they're waiting for round-trip latency to the US. Edge computing eliminates this: run your code in the data center closest to the user, reducing TTFB from 300-500ms to 10-50ms.

Three platforms define edge computing for developers in 2026: Cloudflare Workers (330+ PoPs, V8 isolates, JavaScript/TypeScript native), AWS Lambda@Edge (Lambda functions at CloudFront's CDN layer with broad language support), and Fastly Compute (enterprise edge with Rust/Go/WASM, strong for CDN-critical workloads).

TL;DR

Cloudflare Workers is the default for most teams — 330+ PoPs, no cold starts, 100K free requests/day, and the richest edge ecosystem (Workers KV, D1, R2, Durable Objects). AWS Lambda@Edge is right when you're deeply invested in AWS and need Lambda at the CDN layer. Fastly Compute is the enterprise choice for teams needing Rust/Go/WASM at the edge with advanced CDN customization — but costs 3-7x more than Cloudflare for equivalent workloads.

Key Takeaways

  • Cloudflare Workers runs on V8 isolates — no cold starts, sub-millisecond startup, 441% faster than Lambda at P95 in benchmarks.
  • Cloudflare Workers free tier: 100,000 requests/day — the most generous edge computing free tier available.
  • AWS Lambda@Edge charges for idle time — unlike Cloudflare's CPU-only billing, Lambda@Edge bills for total duration including I/O wait.
  • Cloudflare Workers has 330+ PoPs vs Fastly's 70+ — more locations means closer proximity to more users globally.
  • Fastly Compute supports Rust, Go, and WASM natively — the strongest multi-language support for edge compute.
  • AWS Lambda@Edge has a 1-5 second cold start — unacceptable for latency-sensitive applications.
  • Cloudflare Workers CPU billing — you only pay for actual CPU time, not network wait time — significant cost difference for I/O-heavy workloads.

Platform Comparison

| Feature | Cloudflare Workers | AWS Lambda@Edge | Fastly Compute |
| --- | --- | --- | --- |
| PoP count | 330+ | 600+ (CloudFront) | 70+ |
| Cold starts | None (V8 isolates) | 1-5 seconds | Near-zero (WASM) |
| Billing unit | CPU time | Total duration (incl. I/O) | Requests + compute |
| Free tier | 100K req/day | None | None |
| Starting price | $5/month | Usage-based | $50/month min |
| Languages | JS, TS, WASM | Node.js, Python, Java, etc. | Rust, Go, JS, WASM |
| Max execution time | 30s CPU / 5 min wall | 5s (viewer) / 30s (origin) | 50ms default |
| Memory limit | 128MB | 128MB (viewer) / up to 10GB (origin) | 32MB default |

Cloudflare Workers

Best for: Most teams, Next.js/React apps at the edge, serverless-first development, cost efficiency

Cloudflare Workers is built on V8 isolates — the same JavaScript engine powering Chrome. Instead of cold-starting a container or Lambda execution environment, Workers spin up in microseconds. Each Worker is isolated in a V8 context, runs at the edge globally, and shares no state between requests.

Pricing

| Tier | Requests | CPU Time | Cost |
| --- | --- | --- | --- |
| Free | 100K/day | 10ms/request | $0 |
| Workers Paid | 10M/month included | 30s CPU/request | $5/month |
| Workers Paid overage | +$0.30/million | 30s CPU/request | usage-based |

Critical pricing difference: Cloudflare bills for CPU time only — if your Worker spends 100ms waiting on a fetch() call, that time doesn't count. AWS Lambda@Edge bills for total duration, including I/O wait.
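To make the difference concrete, here is a rough cost sketch for an I/O-heavy request (5ms of CPU, 100ms waiting on fetch()), using the Lambda@Edge rates quoted later in this article. The request volume, timings, and memory size are illustrative assumptions, not measurements.

```typescript
// Rough sketch: CPU-time billing vs total-duration billing for an I/O-heavy
// workload. All volumes and timings below are illustrative assumptions.
const REQUESTS = 10_000_000;  // 10M requests/month
const CPU_MS = 5;             // actual compute per request
const IO_WAIT_MS = 100;       // time spent waiting on fetch()

// Duration-billed model (Lambda@Edge-style): CPU + I/O wait, per GB-second.
const GB = 0.128;                       // 128MB function
const RATE_PER_GB_SECOND = 0.00005001;  // $/GB-s
const RATE_PER_MILLION_REQ = 0.6;       // $/million requests

const durationBilledSeconds = (CPU_MS + IO_WAIT_MS) / 1000; // 0.105s billed
const durationModelCost =
  REQUESTS * durationBilledSeconds * GB * RATE_PER_GB_SECOND +
  (REQUESTS / 1_000_000) * RATE_PER_MILLION_REQ;

// CPU-only model (Workers-style): the 100ms of I/O wait is simply not billed.
const cpuBilledSeconds = CPU_MS / 1000; // 0.005s billed, 21x less metered time

console.log(`Duration-billed compute: $${durationModelCost.toFixed(2)}/month`);
console.log(`Billed time per request: ${durationBilledSeconds}s vs ${cpuBilledSeconds}s`);
```

The exact dollar figures shift with current pricing, but the 21x gap in metered time per request is the structural difference between the two models.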

Worker Example

// workers-site/index.ts
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);

    // A/B testing at the edge — no origin roundtrip.
    // Bucketing on the client IP keeps the example simple; a real rollout
    // would hash a stable user ID instead.
    const userId = request.headers.get("cf-connecting-ip") ?? "anon";
    const lastOctet = Number.parseInt(userId.split(".").pop() ?? "0", 10) || 0;
    const variant = lastOctet % 2 === 0 ? "a" : "b";

    if (url.pathname === "/api/feature-config") {
      return Response.json({ variant, feature: "new-checkout", enabled: variant === "a" });
    }

    // Geolocation from Cloudflare headers
    const country = request.headers.get("cf-ipcountry") ?? "US";
    const city = request.cf?.city ?? "Unknown";

    // Cache API — Workers-native caching
    const cacheKey = new Request(`${url.origin}${url.pathname}?country=${country}`, request);
    const cached = await caches.default.match(cacheKey);
    if (cached) return cached;

    // Fetch origin and cache
    const response = await fetch(request);
    const cloned = response.clone();
    ctx.waitUntil(caches.default.put(cacheKey, cloned));

    return response;
  },
};

The Workers Ecosystem

Workers' edge data services are what make it a complete platform:

Workers KV — globally distributed key-value store, eventually consistent:

// Read from KV — served from nearest Cloudflare PoP
const config = await env.CONFIG_KV.get("feature-flags", { type: "json" });
await env.CONFIG_KV.put("user:123:session", JSON.stringify(session), { expirationTtl: 3600 });

Durable Objects — stateful objects with strong consistency:

// Counter that's consistent globally
export class Counter {
  private count: number = 0;

  async fetch(request: Request): Promise<Response> {
    this.count++;
    return Response.json({ count: this.count });
  }
}
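A caller reaches that Counter through a stub obtained from the namespace binding. This is a minimal sketch: the `COUNTER` binding name is hypothetical (it would be declared in wrangler.toml), and the structural interfaces below stand in for the real types from @cloudflare/workers-types so the snippet is self-contained.

```typescript
// Structural stand-ins for @cloudflare/workers-types, so this sketch is
// self-contained outside the Workers runtime.
interface DurableObjectId {}
interface DurableObjectStub { fetch(req: Request): Promise<Response>; }
interface DurableObjectNamespace {
  idFromName(name: string): DurableObjectId;
  get(id: DurableObjectId): DurableObjectStub;
}

// idFromName maps a stable name to the same single object worldwide;
// routing every caller to that one object is what makes the count consistent.
async function incrementGlobalCounter(ns: DurableObjectNamespace): Promise<Response> {
  const id = ns.idFromName("global-counter");
  return ns.get(id).fetch(new Request("https://counter.internal/increment"));
}
```

Inside a Worker, `ns` would be the `env.COUNTER` binding.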

D1 — SQLite database at the edge, zero cold starts.

R2 — object storage with Workers-native access, zero egress fees.
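A hedged sketch of querying D1 from a Worker: the `DB` binding, the `users` table, and the interfaces below are illustrative stand-ins for the real binding and the types from @cloudflare/workers-types.

```typescript
// Structural stand-ins for the D1 types from @cloudflare/workers-types;
// the "users" table and the "DB" binding are hypothetical.
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  first<T>(): Promise<T | null>;
}
interface D1Database {
  prepare(sql: string): D1PreparedStatement;
}

// Parameterized SQL executed at the edge: no connection pool to manage.
async function getUserEmail(db: D1Database, id: number): Promise<string | null> {
  const row = await db
    .prepare("SELECT email FROM users WHERE id = ?")
    .bind(id)
    .first<{ email: string }>();
  return row?.email ?? null;
}
```

Inside a Worker, `db` would be the `env.DB` binding.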

When to Choose Cloudflare Workers

Next.js applications deployed via Cloudflare Pages, applications needing truly global distribution with no cold starts, teams that want edge data (KV, D1, R2, Durable Objects) alongside compute, cost-sensitive teams that need edge compute without enterprise minimums.

AWS Lambda@Edge

Best for: AWS-heavy organizations, broad language support, tight CloudFront integration

Lambda@Edge runs Lambda functions at CloudFront CDN edge locations — your existing Lambda code, deployed globally, triggered on CloudFront events (viewer request, origin request, origin response, viewer response).

How It Works

Lambda@Edge integrates with CloudFront at four trigger points:

// Lambda@Edge function (Node.js), dispatching on the CloudFront trigger type
exports.handler = async (event) => {
  const cf = event.Records[0].cf;

  // Viewer request: modify incoming requests before CloudFront checks its cache.
  // Use case: URL normalization, authentication, redirect logic
  if (cf.config.eventType === "viewer-request") {
    const request = cf.request;

    // Reject requests carrying an invalid API key before they reach the origin.
    // validateApiKey is assumed to be defined elsewhere in the deployment bundle.
    if (request.headers["x-api-key"]) {
      const isValid = await validateApiKey(request.headers["x-api-key"][0].value);
      if (!isValid) {
        return {
          status: "401",
          statusDescription: "Unauthorized",
          body: JSON.stringify({ error: "Invalid API key" }),
        };
      }
    }

    return request;
  }

  // Viewer response: add headers before the response is sent to the user
  const response = cf.response;
  response.headers["strict-transport-security"] = [
    { key: "Strict-Transport-Security", value: "max-age=31536000" },
  ];
  return response;
};

Pricing

Lambda@Edge bills like Lambda — but at 3x the price:

| Resource | Lambda (regional) | Lambda@Edge |
| --- | --- | --- |
| Requests | $0.20/million | $0.60/million |
| Duration | $0.0000166667/GB-second | $0.00005001/GB-second |

Important: Lambda@Edge charges for total execution time including I/O wait — waiting 100ms for an API call = 100ms billed. Cloudflare Workers doesn't charge for I/O wait.

Cold Start Problem

Lambda@Edge's cold start (1-5 seconds) is a significant problem for user-facing applications. Unlike Workers' V8 isolates, Lambda@Edge spins up full execution environments — acceptable for CDN request manipulation, unacceptable for latency-sensitive APIs.

When to Choose Lambda@Edge

Teams deeply invested in AWS with existing Lambda functions they want to distribute globally, applications already using CloudFront where edge logic needs to run at CDN level, or teams that need Lambda's language flexibility (Python, Java, .NET) at the edge.

Fastly Compute

Best for: Enterprise CDN, Rust/Go/WASM edge compute, advanced CDN customization

Fastly Compute (formerly Compute@Edge) runs WebAssembly on Fastly's CDN infrastructure. The key differentiator: language support. While Cloudflare Workers is primarily JavaScript/TypeScript with WASM, Fastly has first-class support for Rust, Go, and any language that compiles to WASM — making it the right choice for teams with Rust-native codebases.

Pricing

  • Minimum spend: $50/month
  • Requests: Included in plan
  • Bandwidth: Usage-based

Fastly is typically 3-7x more expensive than Cloudflare Workers for equivalent workloads — the premium buys enterprise features, strong SLAs, and multi-language support.
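Using the rates quoted in this article, here is a back-of-the-envelope monthly comparison at 50M requests/month. The request volume, the 50ms average billed duration, and the 128MB memory size are illustrative assumptions, not measurements.

```typescript
// Back-of-the-envelope monthly costs at 50M requests/month, from the rates
// quoted in this article. Duration and memory figures are assumptions.
const REQUESTS_M = 50; // millions of requests per month

// Cloudflare Workers Paid: $5/month includes 10M requests, then $0.30/million.
const workersCost = 5 + Math.max(0, REQUESTS_M - 10) * 0.3;

// Lambda@Edge: $0.60/million requests plus $0.00005001/GB-second,
// billed on total duration (I/O wait included). Assume 50ms at 128MB.
const lambdaEdgeCost =
  REQUESTS_M * 0.6 +
  REQUESTS_M * 1_000_000 * 0.05 * 0.128 * 0.00005001;

// Fastly Compute: $50/month minimum spend, requests included in the plan.
const fastlyCost = 50;

console.log({ workersCost, lambdaEdgeCost: +lambdaEdgeCost.toFixed(2), fastlyCost });
```

At this volume Fastly's minimum spend alone is roughly 3x the Workers bill; at higher volumes or longer durations the gap widens.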

Rust Example

use fastly::http::StatusCode;
use fastly::{Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, fastly::Error> {
    // Geographic routing in Rust — compiled to WASM
    let country = req
        .get_client_ip_addr()
        .and_then(fastly::geo::geo_lookup)
        .map(|g| g.country_code().to_string())
        .unwrap_or_else(|| "US".to_string());

    if country == "DE" || country == "FR" {
        // Route EU traffic to the backend named "eu_origin", configured on
        // the Fastly service to point at the EU origin host
        return Ok(req.send("eu_origin")?);
    }

    Ok(Response::from_status(StatusCode::OK)
        .with_body_text_plain(&format!("Country: {}", country)))
}

When to Choose Fastly Compute

Enterprise teams with Rust or Go codebases that want edge compute, organizations already on Fastly's CDN, advanced CDN customization needs (complex routing, request manipulation at CDN layer), or teams with specific WASM runtime requirements.

Performance Benchmarks

Real-world edge latency from Cloudflare's own benchmarks and third-party testing:

| Platform | P50 Latency | P95 Latency | Cold Start |
| --- | --- | --- | --- |
| Cloudflare Workers | ~10ms | ~20ms | None |
| Fastly Compute | ~15ms | ~30ms | ~5ms (WASM init) |
| Lambda@Edge | ~40ms (warm) | ~200ms+ (cold) | 1-5 seconds |
| Lambda (regional) | ~50ms | ~150ms | 100-500ms |

Cloudflare's own benchmarks show Workers responding 441% faster than Lambda and 192% faster than Lambda@Edge at P95 — largely because V8 isolates carry zero cold start overhead.

Decision Framework

| Scenario | Recommended |
| --- | --- |
| General edge compute | Cloudflare Workers |
| No cold starts required | Cloudflare Workers |
| AWS-invested organization | Lambda@Edge |
| Rust/Go/WASM codebase | Fastly Compute |
| Minimum cost | Cloudflare Workers (100K free/day) |
| Enterprise CDN, advanced routing | Fastly Compute |
| Edge database (KV, SQLite) | Cloudflare Workers (D1, KV) |
| CloudFront CDN customization | Lambda@Edge |
| Maximum PoP coverage | AWS Lambda@Edge (600+ CloudFront PoPs) |

Verdict

Cloudflare Workers is the right default for 2026. No cold starts, 330+ PoPs, CPU-only billing, 100K free requests/day, and the richest edge ecosystem (D1, KV, R2, Durable Objects) make it the obvious choice for most applications. The Workers platform has matured to the point where it can replace a significant portion of traditional serverless infrastructure.

AWS Lambda@Edge makes sense when you're already committed to AWS and need Lambda at the CDN layer. The cold start problem and higher pricing are real limitations, but the integration with CloudFront, S3, and the broader AWS ecosystem has genuine value for AWS-native teams.

Fastly Compute fills the enterprise niche — Rust/Go/WASM support, advanced CDN customization, and enterprise SLAs that neither Cloudflare nor AWS provides at this level. If you're a Rust shop that needs edge compute, Fastly is the only viable option.


Compare edge computing API pricing, performance benchmarks, and documentation at APIScout — find the right edge platform for your application.
