Trigger.dev v3 vs BullMQ vs Graphile Worker 2026
When your application needs to run code outside the request/response cycle — send a batch of emails, process uploaded files, run ML inference, sync data with an external CRM — you reach for a background job library. In the Node.js ecosystem in 2026, three tools represent three distinct philosophies: Trigger.dev v3 (managed, TypeScript-first, no timeout limits), BullMQ (Redis-backed, battle-tested throughput), and Graphile Worker (PostgreSQL-native, no extra infrastructure).
Each is genuinely good. Choosing the wrong one for your constraints costs you either performance, operational overhead, or expensive migration. This comparison gives you the data to decide.
TL;DR
Trigger.dev v3 is the best choice if you want a managed platform, need jobs to run longer than 15 minutes, or are building AI/ML pipelines requiring long compute. BullMQ is the best choice for high-throughput queues (1,000+ jobs/second), rate-limited task processing, or complex job dependency graphs — if you already run Redis. Graphile Worker is the best choice if you want zero new infrastructure dependencies, already run PostgreSQL, and process fewer than 200 jobs/second.
Key Takeaways
- BullMQ can process 1,000-10,000+ jobs/second; Graphile Worker tops out around 100-200/second due to PostgreSQL locking
- Trigger.dev v3 has no execution timeout — jobs can run for hours; BullMQ/Graphile Worker are limited by your worker process
- Graphile Worker requires only PostgreSQL — no Redis, no separate broker, no extra infrastructure cost
- BullMQ ships with Bull Board, a real-time queue monitoring UI; Trigger.dev has a built-in cloud dashboard; Graphile Worker requires custom observability tooling
- Trigger.dev is Apache 2.0 and self-hostable with Docker + PostgreSQL; BullMQ is MIT; Graphile Worker is MIT
- PostgreSQL-native scheduling (via triggers and functions) is Graphile Worker's unique capability — queue jobs from SQL, not just application code
- Trigger.dev v3's concurrency controls, fan-out, and realtime logs make it the most developer-friendly managed option
The Background Job Problem in 2026
Most web applications process the same job in different ways depending on urgency and scale:
- User clicks "export" → queue a job, respond immediately with "your export is being prepared"
- Webhook arrives from Stripe → process it asynchronously so your webhook endpoint doesn't time out
- Nightly batch → process 50,000 records, send emails, update aggregates
- AI pipeline → run OCR, extract entities, store embeddings — steps that take 2-5 minutes each
Traditional message brokers (Kafka, RabbitMQ) handle massive scale but add significant operational complexity. For most applications, a simpler job queue running against Redis or PostgreSQL is all that is needed.
Platform Overview
| | Trigger.dev v3 | BullMQ | Graphile Worker |
|---|---|---|---|
| Backend | Managed (PostgreSQL self-host option) | Redis | PostgreSQL |
| Max job duration | Unlimited | Worker-lifetime | Worker-lifetime |
| Throughput | High (managed) | 1,000-10,000+/sec | 100-200/sec |
| Step functions | Yes | Parent-child jobs | No |
| Scheduling (cron) | Yes | Yes | Yes |
| Monitoring UI | Cloud dashboard | Bull Board | DIY |
| Self-host | Yes | Yes | Yes (it's a library) |
| Extra infra needed | No (cloud) or PostgreSQL | Redis required | PostgreSQL (existing) |
| TypeScript support | First-class | First-class | First-class |
| License | Apache 2.0 | MIT | MIT |
| Pricing | Free + pay-per-run | Free (OSS) | Free (OSS) |
Trigger.dev v3: The Managed Job Platform
Trigger.dev v3 was a major architectural shift from v2. Jobs no longer run inside your serverless functions — they run on Trigger.dev's dedicated compute infrastructure, which means no timeout limits. A job processing a 2GB video can run for 30 minutes without hitting Vercel's 5-minute function limit.
The Developer Experience
Trigger.dev v3 feels like writing normal TypeScript functions. Tasks are defined with task(), invoked with trigger(), and observed in a real-time cloud dashboard:
```typescript
import { task, tasks, logger } from "@trigger.dev/sdk/v3";

// Define a task
export const processUpload = task({
  id: "process-upload",
  // Retry configuration
  retry: {
    maxAttempts: 3,
    factor: 2,
    minTimeoutInMs: 1_000,
    maxTimeoutInMs: 10_000,
  },
  run: async (payload: { fileKey: string; userId: string }) => {
    logger.log("Processing file", { fileKey: payload.fileKey });

    // Step 1: Download and validate
    const file = await downloadFromS3(payload.fileKey);
    logger.log("File downloaded", { size: file.size });

    // Step 2: Process (can take minutes — no timeout!)
    const result = await runMLPipeline(file);

    // Step 3: Save results
    await saveResults(payload.userId, result);
    return { success: true, recordsProcessed: result.count };
  },
});

// Trigger from your application code
await tasks.trigger("process-upload", {
  fileKey: "uploads/document.pdf",
  userId: "user_123",
});
```
Fan-Out and Batch Processing
Trigger.dev v3 supports batch triggering — create hundreds of parallel tasks in one call:
```typescript
import { task, tasks } from "@trigger.dev/sdk/v3";

export const batchNotify = task({
  id: "batch-notify",
  run: async (payload: { userIds: string[] }) => {
    // Trigger a parallel notification task for each user
    await tasks.batchTrigger(
      "send-notification",
      payload.userIds.map((userId) => ({ payload: { userId } }))
    );
  },
});
```
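Batch APIs typically cap how many items a single call accepts (check the current Trigger.dev limit for batchTrigger). A generic chunking helper, sketched here and not part of the SDK, keeps large fan-outs under any such cap:

```typescript
// Hypothetical helper: split an array into chunks of at most `size`
// items so each batch call stays under a per-call item cap
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch: one batchTrigger call per chunk of 500 users
// for (const batch of chunk(payload.userIds, 500)) {
//   await tasks.batchTrigger(
//     "send-notification",
//     batch.map((userId) => ({ payload: { userId } }))
//   );
// }
```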
Concurrency and Rate Limiting
```typescript
import { task } from "@trigger.dev/sdk/v3";

export const rateLimitedEmailSend = task({
  id: "send-email",
  // At most 10 concurrent runs of this task
  queue: {
    concurrencyLimit: 10,
  },
  // Rate limit: max 100 per minute
  rateLimit: {
    limit: 100,
    period: "1m",
  },
  run: async (payload: { to: string; subject: string }) => {
    await sendEmail(payload);
  },
});
```
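Note that concurrency and rate limits interact: effective throughput is the lower of what the rate limit allows and what concurrency divided by average task duration allows. A back-of-envelope calculation using the example limits above (the 2-second average send time is an assumption):

```typescript
// Effective jobs-per-minute given a concurrency limit, an average task
// duration, and a per-minute rate limit
function effectiveJobsPerMinute(
  concurrency: number,
  avgTaskSeconds: number,
  rateLimitPerMinute: number
): number {
  // Concurrency bound: each slot completes 60 / avgTaskSeconds tasks/minute
  const concurrencyBound = concurrency * (60 / avgTaskSeconds);
  return Math.min(concurrencyBound, rateLimitPerMinute);
}

// 10 concurrent slots at 2 s per email could do 300/min,
// but the 100/min rate limit is the binding constraint
const throughput = effectiveJobsPerMinute(10, 2, 100);
```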
Trigger.dev v3 Pricing
| Tier | Price | Included |
|---|---|---|
| Free | $0 | 2,500 runs/month |
| Hobby | $5/month | 25,000 runs/month |
| Pro | $20/month | 100,000 runs/month + $0.002/additional run |
| Enterprise | Custom | Unlimited, SLA, SAML |
Self-hosting is free with no run limits using Docker + PostgreSQL.
Trigger.dev v3 Limitations
- Managed cloud has some latency overhead vs self-hosted infrastructure
- Self-hosting requires running the Trigger.dev server (Docker + PostgreSQL)
- Not designed for sub-second latency requirements (job start takes ~100ms on managed)
- Smaller ecosystem than BullMQ for Node.js patterns
BullMQ: Redis-Backed, High-Throughput Queue
BullMQ is the successor to Bull, built on Redis streams. It has been battle-tested in production since 2019 and handles the most demanding background job workloads in the Node.js ecosystem. The core design principle is maximum throughput and feature richness using Redis as the backend.
Defining and Processing Jobs
```typescript
import { Queue, Worker, Job } from "bullmq";
import { Redis } from "ioredis";

const connection = new Redis({
  host: process.env.REDIS_HOST,
  maxRetriesPerRequest: null,
});

// Create a queue
const emailQueue = new Queue("email", { connection });

// Add a job to the queue
await emailQueue.add(
  "send-welcome",
  { userId: "user_123", email: "user@example.com" },
  {
    delay: 5_000, // 5 second delay before processing
    attempts: 3, // Retry up to 3 times on failure
    backoff: {
      type: "exponential",
      delay: 1_000,
    },
    removeOnComplete: { count: 1000 }, // Keep last 1000 completed jobs
    removeOnFail: { count: 5000 }, // Keep last 5000 failed jobs
  }
);

// Worker processes jobs
const worker = new Worker(
  "email",
  async (job: Job) => {
    const { userId, email } = job.data;
    await job.updateProgress(25);
    await sendWelcomeEmail(email);
    await job.updateProgress(100);
    return { sent: true };
  },
  { connection, concurrency: 50 }
);

worker.on("completed", (job) => {
  console.log(`Job ${job.id} completed`);
});

worker.on("failed", (job, err) => {
  console.error(`Job ${job?.id} failed: ${err.message}`);
});
```
Parent-Child Job Dependencies
BullMQ's most powerful feature is parent-child dependency graphs. A parent job only completes after all its children complete:
```typescript
import { FlowProducer } from "bullmq";

const flowProducer = new FlowProducer({ connection });

// Parent waits for all children
await flowProducer.add({
  name: "generate-report",
  queueName: "reports",
  data: { reportId: "r123" },
  children: [
    {
      name: "fetch-sales-data",
      queueName: "data-fetching",
      data: { source: "sales_db" },
    },
    {
      name: "fetch-marketing-data",
      queueName: "data-fetching",
      data: { source: "marketing_db" },
    },
    {
      name: "fetch-support-data",
      queueName: "data-fetching",
      data: { source: "support_db" },
    },
  ],
});
```
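Inside the parent's processor, BullMQ exposes the completed children's return values via job.getChildrenValues(). The aggregation step itself is plain data merging; a sketch (the { rows } child result shape is an assumption for illustration):

```typescript
// Pure merge step: combine child job results into one report summary.
// Keys of childValues are child job keys; values are child return values.
function mergeChildResults(
  childValues: Record<string, { rows: number }>
): { totalRows: number; sources: number } {
  const results = Object.values(childValues);
  return {
    totalRows: results.reduce((sum, r) => sum + r.rows, 0),
    sources: results.length,
  };
}

// In the "generate-report" processor (requires a running Redis):
// const childValues = await job.getChildrenValues<{ rows: number }>();
// const report = mergeChildResults(childValues);
```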
Scheduling with Repeatable Jobs
```typescript
// Run every day at 3am UTC
await emailQueue.add(
  "daily-digest",
  { type: "daily" },
  {
    repeat: {
      pattern: "0 3 * * *",
      tz: "UTC",
    },
  }
);
```
BullMQ Rate Limiting
BullMQ includes native rate limiting, configured as a Worker option (not a queue option):
```typescript
import { Worker } from "bullmq";

// This worker processes at most 100 jobs per 1,000 ms window
// across the "external-api" queue
const apiWorker = new Worker(
  "external-api",
  async (job) => callExternalApi(job.data),
  {
    connection,
    limiter: {
      max: 100, // Max 100 jobs...
      duration: 1_000, // ...per 1,000 ms window
    },
  }
);
```
Bull Board: Monitoring UI
```typescript
import express from "express";
import { createBullBoard } from "@bull-board/api";
import { BullMQAdapter } from "@bull-board/api/bullMQAdapter";
import { ExpressAdapter } from "@bull-board/express";

const app = express();
const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath("/queues");

createBullBoard({
  queues: [new BullMQAdapter(emailQueue)],
  serverAdapter,
});

app.use("/queues", serverAdapter.getRouter());
// Visit /queues in the browser for real-time queue stats
```
BullMQ Limitations
- Requires Redis — another infrastructure dependency to run, monitor, and pay for
- No built-in step function support (use parent-child for workflow dependencies)
- Workers are long-running Node.js processes — incompatible with pure serverless deployments
- Redis persistence settings require careful tuning to prevent job loss on restart
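On the persistence point: BullMQ's documentation recommends disabling key eviction so Redis never silently drops job data under memory pressure, and enabling AOF persistence to bound data loss on restart. A starting-point redis.conf fragment (tune for your own durability requirements):

```ini
# Never evict keys under memory pressure; eviction can silently
# delete job data, so queue workloads require noeviction
maxmemory-policy noeviction

# Append-only persistence: replay writes on restart
appendonly yes
# fsync once per second: at most ~1s of acknowledged jobs lost on crash
appendfsync everysec
```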
Graphile Worker: PostgreSQL-Native Job Queue
Graphile Worker has a niche but loyal following: teams that already run PostgreSQL and want zero additional infrastructure for background jobs. Your jobs live in a PostgreSQL table. Workers poll the database and process them. No Redis, no separate broker, no extra service to monitor.
Core Design
```typescript
import { run } from "graphile-worker";
import { Pool } from "pg";

const pgPool = new Pool({ connectionString: process.env.DATABASE_URL });

// Define task handlers
const taskList = {
  sendEmail: async (payload: { to: string; subject: string; body: string }) => {
    await emailClient.send(payload);
  },
  generateThumbnail: async (payload: { imageUrl: string; jobId: string }) => {
    const thumbnail = await sharp(await fetchImage(payload.imageUrl))
      .resize(200, 200)
      .toBuffer();
    await uploadToS3(`thumbnails/${payload.jobId}.jpg`, thumbnail);
  },
};

// Start the worker
const runner = await run({
  pgPool,
  taskList,
  concurrency: 5, // 5 concurrent jobs
  pollInterval: 1_000, // Check for new jobs every 1 second
});
```
Adding Jobs from Application Code
```typescript
import { makeWorkerUtils } from "graphile-worker";

const workerUtils = await makeWorkerUtils({
  connectionString: process.env.DATABASE_URL,
});

// Queue a job
await workerUtils.addJob("sendEmail", {
  to: "user@example.com",
  subject: "Welcome",
  body: "Thanks for signing up!",
});

// Queue with delay
await workerUtils.addJob(
  "generateThumbnail",
  { imageUrl: "https://...", jobId: "job_123" },
  {
    runAt: new Date(Date.now() + 60_000), // Run in 60 seconds
    maxAttempts: 3,
    jobKey: "thumbnail-job_123", // Deduplication key
  }
);
```
The Killer Feature: Queue Jobs from SQL
Graphile Worker's unique capability is that PostgreSQL functions and triggers can add jobs directly:
```sql
-- Queue a job automatically when a new user is inserted
CREATE OR REPLACE FUNCTION queue_welcome_email()
RETURNS TRIGGER AS $$
BEGIN
  PERFORM graphile_worker.add_job(
    'sendEmail',
    json_build_object(
      'to', NEW.email,
      'subject', 'Welcome to our platform!',
      'body', 'Thanks for signing up, ' || NEW.name
    )
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER user_signup_trigger
AFTER INSERT ON users
FOR EACH ROW EXECUTE FUNCTION queue_welcome_email();
```
This is impossible with Redis-based queues and requires application-layer glue with Trigger.dev or BullMQ.
Graphile Worker Throughput
Graphile Worker is well-tested at 20-100 jobs/second on typical PostgreSQL hardware. Beyond 200 jobs/second, you start hitting PostgreSQL advisory lock contention. For high-throughput workloads, this is a hard ceiling.
Graphile Worker: No Built-in Monitoring
Graphile Worker has no official monitoring UI. Job status lives in the graphile_worker.jobs table — you can query it or build custom dashboards, but there is no equivalent of Bull Board or Trigger.dev's cloud dashboard out of the box.
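As a starting point for DIY monitoring, a query against the jobs table gives queue depth and failure counts per task. The column names below match recent Graphile Worker versions but should be verified against the version you run:

```sql
-- Pending, running, retrying, and permanently failed jobs per task
SELECT
  task_identifier,
  COUNT(*) AS total,
  COUNT(*) FILTER (WHERE locked_at IS NOT NULL) AS running,
  COUNT(*) FILTER (WHERE attempts > 0 AND attempts < max_attempts) AS retrying,
  COUNT(*) FILTER (WHERE attempts >= max_attempts) AS permanently_failed
FROM graphile_worker.jobs
GROUP BY task_identifier
ORDER BY total DESC;
```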
Failure Handling and Dead Letter Queues
| | Trigger.dev v3 | BullMQ | Graphile Worker |
|---|---|---|---|
| Retry backoff | Configurable per task | Configurable per job | Configurable |
| Max attempts | Configurable | Configurable | Configurable |
| DLQ behavior | Failed jobs in dashboard | Separate "failed" queue | Jobs stay in table with failed status |
| Error inspection | Real-time in dashboard | Bull Board or custom | Query last_error in jobs table |
| Retry on deploy | Yes | Manual | Yes |
Observability and Scheduling Compared
BullMQ with Bull Board and Trigger.dev's cloud dashboard are the strongest for observability. Graphile Worker requires you to query PostgreSQL directly or build your own dashboard.
For cron scheduling, all three support standard cron syntax. Graphile Worker additionally supports PostgreSQL-triggered scheduling, which is unique.
When to Choose Each
Choose Trigger.dev v3 if:
- You need jobs to run longer than 15 minutes (AI pipelines, video processing, large file imports)
- You want a managed platform with no infrastructure to run
- You're already on a serverless stack (Vercel, Cloudflare Workers) without a persistent worker process
- You need realtime observability without building custom tooling
- Your team is TypeScript-first and values DX over raw throughput
Choose BullMQ if:
- You need high throughput: 1,000+ jobs/second
- You already run Redis in production
- You need complex job dependency graphs (parent-child flows)
- You require per-queue rate limiting for external API calls
- You're building a system that needs fine-grained job priority controls
- Long-running workers are acceptable in your infrastructure
Choose Graphile Worker if:
- You already run PostgreSQL and want zero new infrastructure
- Your job throughput is below 200/second
- You want the ability to queue jobs from PostgreSQL triggers or functions
- Your team knows SQL and prefers debugging jobs via SQL queries
- Operational simplicity beats monitoring convenience
For more background job and API comparisons, see our Inngest vs Temporal vs Trigger.dev analysis, best background job APIs roundup, and event-driven API patterns guide.
Methodology
This article draws on official Trigger.dev v3 documentation, BullMQ documentation and GitHub discussions, Graphile Worker documentation, community benchmarks from GitHub Discussions (#922, #2458), and DEV Community performance analysis. Throughput figures are from community-reported benchmarks and should be validated against your specific workload.