
Uploadthing vs Cloudflare R2 vs S3 for Next.js 2026

APIScout Team

Tags: file-upload, cloudflare-r2, aws-s3, nextjs

File upload infrastructure is a decision most teams get wrong in two phases: they start with S3 because it's familiar, discover the egress fees at scale, then scramble to migrate. In 2026 there's a cleaner path: understand the tradeoffs between Uploadthing's opinionated DX, Cloudflare R2's zero-egress model, and S3's comprehensive ecosystem — then pick once.

This comparison focuses on what Next.js developers actually care about: integration complexity, pricing at real traffic levels, and the migration story if you choose wrong.

The Three Options

Uploadthing

Uploadthing is the opinionated file upload solution for TypeScript full-stack apps. Built by Theo (t3.gg) and the T3 stack community, it makes strong choices so you don't have to.

The core model: you define type-safe "file routes" on your server that specify what's allowed (file types, sizes, who can upload), and Uploadthing handles presigned URLs, file hosting, CDN delivery, and webhook callbacks. Your frontend gets a React hook that connects to the router. There's no S3 bucket to configure, no IAM policy to debug.

// server/uploadthing.ts
import { createUploadthing, type FileRouter } from 'uploadthing/next';
import { UploadThingError } from 'uploadthing/server';
const f = createUploadthing();

export const uploadRouter = {
  imageUploader: f({ image: { maxFileSize: '4MB' } })
    .middleware(async ({ req }) => {
      const user = await getCurrentUser(req);
      if (!user) throw new UploadThingError('Unauthorized');
      return { userId: user.id };
    })
    .onUploadComplete(async ({ metadata, file }) => {
      await db.userFiles.create({
        data: { userId: metadata.userId, url: file.url },
      });
    }),
} satisfies FileRouter;

export type OurFileRouter = typeof uploadRouter;

The middleware validates your user. onUploadComplete runs after the upload finishes. Your frontend component gets typed props.

Free tier: 2GB storage, 1GB bandwidth/month. Paid plans: Starter $10/month (50GB storage), Pro $30/month (200GB storage).

Uploadthing stores files on their own CDN — you don't control the underlying storage provider. This is the fundamental tradeoff: you trade infrastructure control for developer experience.

Cloudflare R2

R2 is Cloudflare's object storage service, designed as an S3 replacement with one critical difference: zero egress fees. Every time someone downloads a file from S3, you pay $0.09/GB (standard pricing). From R2, downloads are free.

R2 is fully S3-compatible. Libraries that work with S3 work with R2 by changing the endpoint URL. The @aws-sdk/client-s3 package works as-is.

In early 2026, Cloudflare shipped "Local Uploads (beta)" — client uploads route to the nearest Cloudflare PoP storage, reducing write latency by approximately 75% for global users.

Pricing:

  • Storage: $0.015/GB/month (after 10GB free)
  • Class A operations (writes): $4.50 per million (after 1 million free/month)
  • Class B operations (reads): $0.36 per million (after 10 million free/month)
  • Egress: $0.00

AWS S3

S3 is the baseline. Every object storage service is measured against it. The feature set is comprehensive: lifecycle rules, replication, versioning, event notifications, 99.999999999% durability, and deep integration with every AWS service.

The problem is pricing. At any meaningful scale, egress fees dominate:

  • Storage: $0.023/GB/month (standard)
  • Egress to internet: $0.09/GB (first 10TB/month)
  • Egress to other AWS services: $0.00 if same region

For applications where users upload and then repeatedly download their files (avatars, documents, media), egress fees create compounding costs. A 1TB storage bucket with active downloads can incur $90+/month in egress fees alone.

The IAM permission model is also a source of significant developer friction. S3 presigned URLs, bucket policies, IAM roles, and CORS configuration require more upfront work than alternatives.

Pricing Calculator

Let's price three workloads across all three platforms.

Scenario 1: Early-stage app (10GB storage, 50GB downloads/month)

| Provider | Storage | Operations | Egress | Total |
|---|---|---|---|---|
| Uploadthing Free | $0 | $0 | $0 | $0 |
| Cloudflare R2 | ~$0 (under free tier) | ~$0 (under free tier) | $0 | $0 |
| AWS S3 | $0.23 | ~$0.10 | $4.50 | ~$4.83/mo |

Scenario 2: Growing app (100GB storage, 500GB downloads/month)

| Provider | Storage | Operations | Egress | Total |
|---|---|---|---|---|
| Uploadthing Starter | $10/month flat | included | included | $10/mo |
| Cloudflare R2 | $1.35 | ~$1.00 | $0 | ~$2.35/mo |
| AWS S3 | $2.30 | ~$1.00 | $45.00 | ~$48.30/mo |

Scenario 3: Scale (1TB storage, 5TB downloads/month)

| Provider | Storage | Operations | Egress | Total |
|---|---|---|---|---|
| Uploadthing | $30/mo Pro caps at 200GB — needs Enterprise | Custom | Custom | Custom |
| Cloudflare R2 | $15.00 | ~$5.00 | $0 | ~$20/mo |
| AWS S3 | $23.00 | ~$5.00 | $450.00 | ~$478/mo |

The conclusion is stark: S3's egress fees become the dominant cost at any meaningful download volume. At the 1TB/5TB scale, R2 comes in around $20/month versus S3's ~$478/month, roughly a 95% saving.
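A quick way to sanity-check these numbers is a small cost model. This is a sketch built only from the per-GB rates quoted above; operation charges and Uploadthing's flat plans are deliberately left out, so totals run slightly low:

```typescript
// Hypothetical cost model from the published per-GB rates above.
// Operation charges are omitted for brevity.
interface Workload {
  storageGB: number;
  egressGB: number; // downloads per month
}

const PRICING = {
  r2: { storagePerGB: 0.015, egressPerGB: 0.0, freeStorageGB: 10 },
  s3: { storagePerGB: 0.023, egressPerGB: 0.09, freeStorageGB: 0 },
} as const;

function monthlyCost(provider: keyof typeof PRICING, w: Workload): number {
  const p = PRICING[provider];
  const billableStorageGB = Math.max(0, w.storageGB - p.freeStorageGB);
  return billableStorageGB * p.storagePerGB + w.egressGB * p.egressPerGB;
}

// Scenario 3: 1TB stored, 5TB downloaded per month
const scale: Workload = { storageGB: 1000, egressGB: 5000 };
console.log(monthlyCost('r2', scale).toFixed(2)); // "14.85"
console.log(monthlyCost('s3', scale).toFixed(2)); // "473.00"
```

Storage is a rounding error next to egress here: the $450 download line item is what moving to R2 removes.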

Next.js Integration Code

Uploadthing (Next.js App Router)

Full setup requires creating the file router, a Next.js API route handler, and using the client components:

// app/api/uploadthing/route.ts
import { createRouteHandler } from 'uploadthing/next';
import { uploadRouter } from '@/server/uploadthing';

export const { GET, POST } = createRouteHandler({ router: uploadRouter });
// components/FileUpload.tsx
'use client';
import { UploadButton } from '@uploadthing/react';
import type { OurFileRouter } from '@/server/uploadthing';

export function FileUpload() {
  return (
    <UploadButton<OurFileRouter, 'imageUploader'>
      endpoint="imageUploader"
      onClientUploadComplete={(files) => {
        console.log('Uploaded:', files[0].url);
      }}
      onUploadError={(error) => {
        console.error('Upload error:', error);
      }}
    />
  );
}

Three files, all typed, zero S3 configuration. This is the Uploadthing value proposition in code.

Cloudflare R2 (Next.js App Router)

R2 requires presigned URL generation on the server and a client-side upload:

// app/api/upload-url/route.ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const r2 = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

export async function POST(request: Request) {
  const { filename, contentType } = await request.json();
  const key = `uploads/${crypto.randomUUID()}-${filename}`;

  const url = await getSignedUrl(
    r2,
    new PutObjectCommand({
      Bucket: process.env.R2_BUCKET_NAME!,
      Key: key,
      ContentType: contentType,
    }),
    { expiresIn: 3600 }
  );

  return Response.json({ uploadUrl: url, key });
}
// components/FileUpload.tsx
'use client';

export function FileUpload() {
  const handleUpload = async (file: File) => {
    // Get presigned URL
    const { uploadUrl, key } = await fetch('/api/upload-url', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ filename: file.name, contentType: file.type }),
    }).then(r => r.json());

    // Upload directly to R2
    await fetch(uploadUrl, {
      method: 'PUT',
      body: file,
      headers: { 'Content-Type': file.type },
    });

    // Save the key to your database
    await saveFileKey(key);
  };

  return <input type="file" onChange={e => handleUpload(e.target.files![0])} />;
}

More code, more control. You manage the presigned URL lifecycle, CORS configuration, and CDN setup (Cloudflare's built-in CDN or custom domain).

AWS S3 (Next.js App Router)

The S3 pattern is nearly identical to R2 — just a different endpoint and credentials:

// app/api/upload-url/route.ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({
  region: process.env.AWS_REGION!,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

export async function POST(request: Request) {
  const { filename, contentType } = await request.json();
  const key = `uploads/${Date.now()}-${filename}`;

  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET_NAME!,
      Key: key,
      ContentType: contentType,
    }),
    { expiresIn: 3600 }
  );

  return Response.json({ uploadUrl: url, key });
}

The code is nearly identical to R2. The difference is configuration: IAM policies, bucket policies, CORS, and the egress fees you'll discover in your AWS bill.

Migration Guide: S3 to R2

If you're currently on S3 and want to migrate to R2, the code change is minimal:

  1. Create an R2 bucket in the Cloudflare dashboard
  2. Generate R2 API credentials (access key ID + secret)
  3. Update your S3 client configuration:
// Before (S3)
const client = new S3Client({
  region: 'us-east-1',
  credentials: { accessKeyId: '...', secretAccessKey: '...' },
});

// After (R2) — only endpoint changes
const client = new S3Client({
  region: 'auto',
  endpoint: `https://${CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: { accessKeyId: R2_ACCESS_KEY, secretAccessKey: R2_SECRET },
});
  4. Migrate existing objects: use rclone or the AWS CLI with the --endpoint-url flag to copy objects from S3 to R2
  5. Update your CDN/public URL configuration

The @aws-sdk/client-s3 package works with R2 without any other changes. For most Next.js applications, the migration is a half-day of work.

File Type Validation and Security

One area where the three approaches differ substantially is how they handle file validation and authorization. Sending raw files to cloud storage without guardrails is a common source of security vulnerabilities.

Uploadthing's Type-Safe Router

Uploadthing's most opinionated feature is that file validation is defined server-side in your file router, and it's enforced before the upload reaches storage. The f({ image: { maxFileSize: '4MB' } }) call is not just documentation — it's enforced by Uploadthing's infrastructure. A malicious client cannot upload a 100MB video file to an image route; Uploadthing's servers reject it.

The middleware function runs your authentication logic, and only authenticated users get a presigned upload URL. If you throw an error in middleware, the upload is blocked. This makes authorization explicit and collocated with the file route definition:

export const uploadRouter = {
  // Only users with 'pro' plan can upload documents
  documentUploader: f({ pdf: { maxFileSize: '16MB' }, 'text/plain': { maxFileSize: '1MB' } })
    .middleware(async ({ req }) => {
      const user = await getCurrentUser(req);
      if (!user || user.plan !== 'pro') {
        throw new UploadThingError('Upgrade to Pro to upload documents');
      }
      return { userId: user.id };
    })
    .onUploadComplete(async ({ metadata, file }) => {
      await db.documents.create({
        data: {
          userId: metadata.userId,
          url: file.url,
          name: file.name,
          size: file.size,
        },
      });
    }),
} satisfies FileRouter;

The type safety extends to the client: the UploadButton component's endpoint prop is typed to only accept valid route names from your router. TypeScript catches mismatches at compile time.

R2 and S3 Presigned URLs

With presigned URLs (both R2 and S3), you control validation entirely in your server code that generates the URL. This is more flexible but requires you to implement security explicitly:

export async function POST(request: Request) {
  // Authentication check
  const session = await getServerSession(authOptions);
  if (!session?.user) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }

  const { filename, contentType, fileSize } = await request.json();

  // Validate file type
  const allowedTypes = ['image/jpeg', 'image/png', 'image/webp'];
  if (!allowedTypes.includes(contentType)) {
    return Response.json({ error: 'Invalid file type' }, { status: 400 });
  }

  // Validate file size (5MB limit)
  if (fileSize > 5 * 1024 * 1024) {
    return Response.json({ error: 'File too large' }, { status: 400 });
  }

  // Generate presigned URL only after validation
  const key = `users/${session.user.id}/${crypto.randomUUID()}-${filename}`;
  const url = await getSignedUrl(r2, new PutObjectCommand({
    Bucket: BUCKET_NAME,
    Key: key,
    ContentType: contentType,
    ContentLength: fileSize,
  }), { expiresIn: 300 });

  return Response.json({ url, key });
}

Note the ContentLength constraint in the PutObjectCommand — this tells R2/S3 to reject uploads that don't match the declared size, adding a second layer of validation.

The limitation: without ContentLength enforcement, a client can lie about file size in the presigned URL request and then upload a larger file. Always include it.

CDN and Public Access

Uploadthing

Uploadthing hosts files on their CDN and gives you a public URL for each uploaded file. The URL format is https://utfs.io/f/{fileKey}. You don't configure CDN settings — they're managed by Uploadthing. Files are globally distributed but you don't control cache headers, geographic restrictions, or access control beyond "public."

This is fine for most use cases but means you can't implement time-limited URLs, geographic blocking, or custom cache headers without building a proxy.
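One such proxy is a route handler that authenticates the request server-side and streams the file from the Uploadthing CDN. This is a sketch, not an Uploadthing API: getCurrentUser is a stand-in for your own auth helper, and in practice you would also verify the requesting user owns the file key:

```typescript
// app/api/files/[key]/route.ts

// Placeholder for your real auth helper (assumption, not an Uploadthing API)
async function getCurrentUser(req: Request): Promise<{ id: string } | null> {
  return null; // e.g. read a session cookie and look the user up
}

export async function GET(
  request: Request,
  { params }: { params: { key: string } }
) {
  const user = await getCurrentUser(request);
  if (!user) return new Response('Unauthorized', { status: 401 });

  // Fetch from the Uploadthing CDN server-side, re-serve with our own headers
  const upstream = await fetch(`https://utfs.io/f/${params.key}`);
  if (!upstream.ok) return new Response('Not found', { status: 404 });

  return new Response(upstream.body, {
    headers: {
      'Content-Type':
        upstream.headers.get('Content-Type') ?? 'application/octet-stream',
      'Cache-Control': 'private, max-age=60', // now you control caching
    },
  });
}
```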

Cloudflare R2

R2 integrates seamlessly with Cloudflare's CDN. You can:

  1. Enable the R2 public bucket URL (pub-{hash}.r2.dev/{key}) — free but less professional
  2. Connect a custom domain (set up directly in the R2 dashboard), or front the bucket with a Cloudflare Worker for full control over headers, auth, and caching
  3. Use R2 with a Cloudflare Cache Rule — aggressive caching for public assets at the edge

For private files, you generate presigned URLs with expiration times. For public files (product images, user avatars), connecting R2 to a custom domain via Cloudflare gives you the full CDN experience with granular control.

AWS S3

S3's CDN story is CloudFront. To serve files quickly globally, you create a CloudFront distribution in front of your S3 bucket. This adds cost and complexity but gives you the most mature CDN feature set: custom error pages, Lambda@Edge for request manipulation, origin access control, signed URLs, and signed cookies.

For most Next.js applications, using Next.js Image Optimization (which proxies through Vercel's CDN) on top of S3 URLs is the simplest path to optimized image delivery without setting up CloudFront.

Multipart Uploads and Large Files

For files over 100MB — videos, large datasets, high-res images — the upload strategy matters substantially. A single-part upload that fails at 80% means starting over. Multipart uploads can resume from where they failed.

Uploadthing Large File Support

Uploadthing handles multipart uploads automatically for files over a threshold size. You don't implement anything differently — the client SDK detects file size and switches strategies transparently. The maximum file size depends on your plan, with the highest tiers supporting files up to several GB.

For a video upload use case in Next.js:

// server/uploadthing.ts
export const uploadRouter = {
  videoUploader: f({ video: { maxFileSize: '512MB', maxFileCount: 1 } })
    .middleware(async ({ req }) => {
      const user = await getAuthUser(req);
      if (!user) throw new UploadThingError('Unauthorized');
      return { userId: user.id };
    })
    .onUploadComplete(async ({ metadata, file }) => {
      // Trigger video processing job
      await videoProcessingQueue.add({ url: file.url, userId: metadata.userId });
    }),
} satisfies FileRouter;

Uploadthing manages the multipart split, upload, and assembly. You just set the max size.

R2 Multipart Uploads

Cloudflare R2 supports the S3 multipart upload API. For large files in production, using multipart upload is best practice:

import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

async function createMultipartPresignedUrls(key: string, partCount: number) {
  const { UploadId } = await r2.send(
    new CreateMultipartUploadCommand({
      Bucket: BUCKET_NAME,
      Key: key,
    })
  );

  const presignedUrls = await Promise.all(
    Array.from({ length: partCount }, (_, i) =>
      getSignedUrl(
        r2,
        new UploadPartCommand({
          Bucket: BUCKET_NAME,
          Key: key,
          UploadId: UploadId!,
          PartNumber: i + 1,
        }),
        { expiresIn: 3600 }
      )
    )
  );

  return { uploadId: UploadId!, presignedUrls };
}

The client splits the file into parts (5MB minimum per part, except the last), uploads each part to its presigned URL, and then calls your server to complete the multipart upload with the ETags returned by R2. This is more code than Uploadthing's approach but gives you full control — resume logic, progress tracking per part, parallel part uploads.
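The client half of that flow can be sketched as below. /api/complete-upload is a hypothetical endpoint on your server that passes the collected part numbers and ETags to CompleteMultipartUploadCommand; note your bucket's CORS policy must expose the ETag response header for the browser to read it:

```typescript
const PART_SIZE = 10 * 1024 * 1024; // 10MB parts (5MB is the S3/R2 minimum)

export function splitIntoParts(file: Blob, partSize = PART_SIZE): Blob[] {
  const parts: Blob[] = [];
  for (let offset = 0; offset < file.size; offset += partSize) {
    parts.push(file.slice(offset, offset + partSize));
  }
  return parts;
}

export async function uploadMultipart(
  file: File,
  presignedUrls: string[], // one URL per part, from the server code above
  uploadId: string,
  key: string
) {
  const parts = splitIntoParts(file);
  // Upload parts in parallel; each PUT response carries the part's ETag
  const etags = await Promise.all(
    parts.map(async (part, i) => {
      const res = await fetch(presignedUrls[i], { method: 'PUT', body: part });
      return { PartNumber: i + 1, ETag: res.headers.get('ETag')! };
    })
  );
  // Hypothetical server endpoint that completes the multipart upload
  await fetch('/api/complete-upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ uploadId, key, parts: etags }),
  });
}
```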

S3 Multipart Uploads

Identical API to R2. The same code works against both — just change the endpoint and credentials. AWS Transfer Acceleration can be layered on for S3 to speed up uploads from regions far from the bucket, at an acceleration surcharge of roughly $0.04 to $0.08/GB depending on the edge location used, on top of standard transfer charges.

Image Optimization Considerations

When your Next.js app serves user-uploaded images, you need to think about optimization — serving a 5MB original upload on a product listing page is a bad user experience.

Uploadthing and Optimization

Uploadthing doesn't perform image optimization. Files are stored and served as-is. For teams who need image resizing and format conversion (WebP/AVIF), you'd need to add a service like Cloudinary, Imgix, or use Next.js Image Optimization on top of the Uploadthing CDN URLs.

Using Next.js Image with Uploadthing URLs works but requires adding utfs.io to your next.config.ts image domain allowlist:

// next.config.ts
const nextConfig = {
  images: {
    remotePatterns: [
      { protocol: 'https', hostname: 'utfs.io' },
    ],
  },
};

Then Next.js handles resizing and format conversion at request time, caching results on Vercel's CDN.

R2 and Image Optimization

Cloudflare Images is a separate product that handles image optimization, resizing, and format conversion at the edge. You can use it alongside R2 — store originals in R2, serve optimized variants via Cloudflare Images. The pricing is $5/month for the first 100,000 images plus $1 per 1,000 transformations.

For teams on Cloudflare's infrastructure, this is a clean solution: R2 storage + Cloudflare Images optimization + Cloudflare CDN delivery is a coherent stack with zero egress fees throughout.

S3 and Image Optimization

AWS S3 + CloudFront + Lambda@Edge is the traditional approach for image optimization on AWS. You write a Lambda@Edge function that intercepts image requests, resizes them on-demand, and caches the result in CloudFront. This is powerful but complex to configure and maintain.

Simpler alternative: use Imgix or Cloudinary as an optimization layer in front of S3. These services accept an S3 origin and add resize/format parameters to the URL. Cost is higher than native AWS tools but operational complexity is much lower.

Developer Experience Scorecard

| Criterion | Uploadthing | Cloudflare R2 | AWS S3 |
|---|---|---|---|
| Initial setup time | 15 minutes | 45 minutes | 2-4 hours |
| Type safety | Excellent | Manual | Manual |
| CORS configuration | Automatic | Manual | Manual |
| IAM/permissions | None (handled) | Minimal | Complex |
| Scaling limits | Platform-capped | Unlimited | Unlimited |
| Ecosystem integrations | Limited | Broad (S3-compatible) | Comprehensive |
| Webhook callbacks | Built-in | Manual | S3 Event + Lambda |

When to Choose Each

Choose Uploadthing when you're building a side project, early-stage startup, or any Next.js app where getting to market matters more than infrastructure control. The type-safe router pattern and zero-config setup save days of work. The ceiling is lower than R2/S3, but most apps never hit it.

Choose Cloudflare R2 when you have meaningful download volume (anything above 50GB/month where egress matters), need S3 compatibility to use existing libraries and tooling, or want global edge distribution via Cloudflare's network. It's the right default for most production applications that outgrow Uploadthing.

Choose AWS S3 when you're already deep in the AWS ecosystem and use other services that integrate natively with S3 (SQS event notifications, Lambda triggers, CloudFront), need S3-specific features like S3 Object Lambda or Intelligent-Tiering, or your compliance requirements mandate a specific provider. Otherwise, the egress fee model is a significant ongoing cost that R2 eliminates.


See also: AWS S3 vs Cloudflare R2 API 2026, How to Upload Files to Cloudflare R2 in Node.js, Best Cloud Storage APIs 2026
