
Best Background Job APIs and Services in 2026

APIScout Team

Tags: background-jobs · inngest · trigger-dev · queue · developer-tools · roundup

TL;DR

| Rank | Service | Best For | Starting Price |
|------|---------|----------|----------------|
| 1 | Inngest | Event-driven step functions | Free (50K runs/mo) |
| 2 | Trigger.dev | Serverless long-running tasks | Free ($5 usage/mo) |
| 3 | BullMQ | Self-hosted Node.js queue | Free (open source) |
| 4 | Temporal | Durable mission-critical workflows | Free (OSS) / Cloud from ~$200/mo |
| 5 | QStash | Serverless HTTP-based messaging | Free (1,000 msgs/day) |
| 6 | Quirrel | Lightweight serverless scheduling | Free (open source) |

Key Takeaways

  • Inngest leads for serverless teams that want event-driven step functions with automatic retries, sleep, fan-out, and durable execution -- all without managing infrastructure.
  • Trigger.dev v3 rewrote its execution model to run your code on their infrastructure, eliminating double-billing and supporting tasks that run for hours.
  • BullMQ remains the most popular open-source queue for Node.js. It is free, Redis-backed, and production-proven at billions of jobs per day.
  • Temporal is the enterprise standard for workflows that span days, weeks, or months. Self-host for free or use Temporal Cloud with action-based pricing.
  • QStash from Upstash is the simplest option for serverless apps: publish a message with a URL, and QStash delivers it via HTTP with retries. No persistent connections needed.
  • Quirrel remains a lightweight, open-source option for cron and delayed jobs in serverless environments, though it is now in maintenance mode after the Netlify acquisition.

The Background Job Landscape in 2026

Background jobs handle the work that should never block a user request: sending emails, processing images, generating reports, syncing data, running ML inference, orchestrating multi-step AI agent workflows. Every production application needs them, but the way developers run them has changed significantly.

Three forces are reshaping the category in 2026:

Serverless-first execution. Traditional background job systems required you to provision and manage worker servers. The new generation -- Inngest, Trigger.dev, QStash -- runs your jobs on managed infrastructure. You write the function, they handle execution, retries, concurrency, and scaling.

Durable execution. Inngest and Temporal pioneered the idea that each step in a workflow should be independently retried and persisted. If a five-step function fails on step three, it resumes from step three -- not from the beginning. This model is especially important for AI agent workflows where each step may involve an expensive LLM call.
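The resume-from-the-failed-step behavior can be sketched in a few lines. This is a toy illustration of the general idea, not Inngest's or Temporal's actual implementation: completed step results are persisted (here, just in memory) and replayed on retry, so only the failed step and the steps after it execute again.

```typescript
// Sketch of durable execution: each step's result is saved the first time it
// runs, so a retry skips completed steps and resumes at the first unfinished one.
type StepFn = () => string;

class DurableRun {
  // In a real system this state lives in a database, not in memory.
  private results = new Map<string, string>();
  executed: string[] = []; // which step bodies actually ran, for illustration

  step(name: string, fn: StepFn): string {
    const cached = this.results.get(name);
    if (cached !== undefined) return cached; // already done: replay from storage
    const out = fn(); // may throw; earlier steps stay persisted
    this.results.set(name, out);
    this.executed.push(name);
    return out;
  }
}

// A five-step workflow where step three fails on the first attempt.
function workflow(run: DurableRun, failStep3: boolean): string[] {
  const out: string[] = [];
  out.push(run.step("one", () => "a"));
  out.push(run.step("two", () => "b"));
  out.push(run.step("three", () => {
    if (failStep3) throw new Error("transient failure");
    return "c";
  }));
  out.push(run.step("four", () => "d"));
  out.push(run.step("five", () => "e"));
  return out;
}

const run = new DurableRun();
let firstAttemptFailed = false;
try {
  workflow(run, true);
} catch {
  firstAttemptFailed = true; // steps one and two are already persisted
}
const results = workflow(run, false); // retry resumes at step three
```

In production systems the results map lives in durable storage, which is what lets a run survive a worker crash, a deployment, or a three-day sleep between steps.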

AI workloads. AI agent orchestration, RAG pipelines, and batch inference jobs are driving demand for background job systems that support long-running tasks (minutes to hours), step-level retries, and high concurrency. Trigger.dev v3 and Inngest both position themselves as infrastructure for AI workflows.

The result is a market split between managed platforms (Inngest, Trigger.dev, QStash, Temporal Cloud) and self-hosted open-source tools (BullMQ, Temporal OSS, Quirrel). Your choice depends on whether you want to manage infrastructure or pay someone else to do it.


Comparison Table

| Feature | Inngest | Trigger.dev | BullMQ | Temporal | QStash | Quirrel |
|---|---|---|---|---|---|---|
| Type | Managed | Managed / OSS | Self-hosted | Self-hosted / Cloud | Managed | Self-hosted |
| Language SDKs | TS, Python, Go | TypeScript | Node.js, Python, Elixir, PHP | Go, Java, TS, Python, PHP | Any (HTTP) | TypeScript |
| Step functions | Yes | Yes | No | Yes (workflows) | No | No |
| Durable execution | Yes | Yes | No | Yes | No | No |
| Cron scheduling | Yes | Yes | Yes (repeatable jobs) | Yes | Yes | Yes |
| Delayed jobs | Yes (sleep) | Yes | Yes | Yes (timers) | Yes | Yes |
| Retries | Automatic (per step) | Automatic | Manual config | Automatic (per activity) | Automatic | Automatic |
| Concurrency control | Yes | Yes | Yes (per queue) | Yes | No | No |
| Rate limiting | Yes (throttle) | No | Yes | No (use activity options) | Yes | No |
| Monitoring UI | Built-in | Built-in | Bull Board (separate) | Temporal UI | Upstash Console | Quirrel UI |
| Free tier | 50K runs/mo | $5 usage/mo | Unlimited (OSS) | Unlimited (OSS) | 1,000 msgs/day | Unlimited (OSS) |
| Paid from | ~$75/mo | ~$10/mo (Hobby) | N/A | ~$200/mo (Cloud) | $1/100K msgs | N/A |

1. Inngest — Event-Driven Step Functions

Best for: Teams building multi-step workflows, AI agent orchestration, and event-driven architectures on serverless infrastructure.

Inngest is an event-driven background job platform built for modern serverless applications. You send events, and Inngest triggers the matching functions. What sets Inngest apart is its step function model: each function can contain multiple steps, and each step is independently executed, retried, and persisted. A multi-step function that sleeps for three days between steps counts as a single run, since Inngest tracks the entire lifecycle as one execution.

This approach is powerful for workflows that involve waiting, branching, or recovering from failure. If step four of a five-step function fails, Inngest retries step four -- not the entire function. For AI agent workflows where each step might involve an expensive LLM call, this saves both time and money.

Key strengths:

  • Event-driven architecture with automatic function triggering
  • Step functions with independent retry per step
  • Sleep, wait-for-event, fan-out, and debounce patterns
  • Priority queues and concurrency control
  • TypeScript, Python, and Go SDKs
  • Built-in monitoring dashboard with run inspection
  • Works with any hosting provider (Vercel, Railway, AWS, etc.)
  • Batch processing and throttling

Pricing:

  • Free: 50,000 runs/month, unlimited functions
  • Team: From ~$75/month with volume-based pricing
  • Enterprise: Custom pricing with SLAs

Retries count as additional runs. If a function fails and retries three times, that counts as four runs total.

Limitations: The event-driven paradigm has a learning curve if you are used to traditional task queues. Only three SDK languages (TypeScript, Python, Go). Self-hosting is possible but less documented than the managed platform.

Best when: You need multi-step workflows with durable execution, event-driven triggering, and managed infrastructure. Particularly strong for AI agent orchestration, payment processing flows, and onboarding sequences.


2. Trigger.dev — Serverless Long-Running Tasks

Best for: TypeScript teams that want managed background jobs, support for long-running tasks, and a v3 architecture that eliminates double-billing.

Trigger.dev v3 fundamentally changed how managed background jobs work. Instead of running your code on your infrastructure and charging for orchestration, Trigger.dev hosts and executes your code on their machines. This eliminates the double-billing problem where you pay both your hosting provider and your background job provider for the same compute.

You define tasks as async TypeScript functions, configure the machine size, and Trigger.dev handles deployment, execution, retries, concurrency, and scheduling. Tasks can run for minutes or hours -- not limited to serverless timeout windows. The built-in dashboard provides real-time logs, run inspection, and replay capabilities.
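The retry behavior these platforms automate is essentially retry with exponential backoff. Here is a generic, self-contained sketch of the policy; the names and fields are illustrative, not Trigger.dev's API, and the delays are computed rather than slept so the schedule is easy to inspect.

```typescript
// Generic retry-with-exponential-backoff: delay doubles (or grows by `factor`)
// per attempt, capped at `maxMs`. Managed platforms apply this automatically.
interface RetryPolicy {
  maxAttempts: number;
  baseMs: number; // delay before the first retry
  factor: number; // multiplier applied per subsequent retry
  maxMs: number;  // cap on any single delay
}

function backoffSchedule(p: RetryPolicy): number[] {
  const delays: number[] = [];
  for (let attempt = 1; attempt < p.maxAttempts; attempt++) {
    delays.push(Math.min(p.baseMs * Math.pow(p.factor, attempt - 1), p.maxMs));
  }
  return delays;
}

function runWithRetries<T>(
  fn: (attempt: number) => T,
  p: RetryPolicy,
): { value: T; attempts: number } {
  let lastError: unknown;
  for (let attempt = 1; attempt <= p.maxAttempts; attempt++) {
    try {
      return { value: fn(attempt), attempts: attempt };
    } catch (err) {
      lastError = err; // a real runner would sleep the scheduled delay here
    }
  }
  throw lastError;
}

const policy: RetryPolicy = { maxAttempts: 5, baseMs: 1000, factor: 2, maxMs: 30000 };
const schedule = backoffSchedule(policy);
const result = runWithRetries((attempt) => {
  if (attempt < 3) throw new Error("flaky"); // fails twice, then succeeds
  return "ok";
}, policy);
```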

Key strengths:

  • Code runs on Trigger.dev infrastructure (no double-billing)
  • Long-running tasks (minutes to hours)
  • Configurable machine sizes for different workloads
  • Real-time logs and run inspection dashboard
  • Automatic retries with configurable backoff
  • Cron scheduling and webhook triggers
  • Concurrency control with burst support (2x burst across multiple queues)
  • Run replays for debugging

Pricing:

  • Free: $5 of monthly usage included, 10 concurrent runs, unlimited tasks, 1-day log retention
  • Hobby: $10/month included usage, 50 concurrent runs
  • Pro: $50/month included usage, 200 concurrent runs (additional 50 for $10/month each)

Pricing is based on compute usage (machine time) plus a small per-run invocation cost. Machine sizes and per-second rates are configurable.

Limitations: TypeScript only -- no Python, Go, or other language support. Newer platform compared to BullMQ or Celery (less battle-tested at extreme scale). Self-hosting is available but more complex than the managed offering.

Best when: TypeScript teams building serverless applications that need reliable background execution without managing worker infrastructure. Strong fit for AI batch processing, data pipelines, and webhook-triggered workflows.


3. BullMQ — The Open-Source Standard for Node.js

Best for: Node.js teams wanting a proven, free, Redis-backed task queue with full control over infrastructure.

BullMQ is the most popular open-source task queue for Node.js, trusted by thousands of companies processing billions of jobs every day. Built on Redis, it provides job priorities, delayed jobs, rate limiting, repeatable jobs, concurrency control, and real-time events. It is the natural choice for teams that want a free, battle-tested queue and are comfortable managing their own Redis and worker infrastructure.

Version 5.x (current as of early 2026) has expanded language support beyond Node.js to include Python, Elixir, and PHP SDKs, making it a viable polyglot option.
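The queue semantics described above, priorities combined with delayed jobs, can be illustrated with a toy in-memory version. BullMQ's real API (`Queue`, `Worker`, Redis persistence) is different and far more capable; this sketch only shows the ordering rules: a lower priority number wins (mirroring BullMQ's convention), and delayed jobs become eligible once their delay elapses.

```typescript
// Toy in-memory queue illustrating priority + delayed-job ordering.
interface Job<T> {
  name: string;
  data: T;
  priority: number; // lower number = higher priority
  readyAt: number;  // timestamp (ms) when the job becomes eligible
}

class MiniQueue<T> {
  private jobs: Job<T>[] = [];

  add(name: string, data: T, opts: { priority?: number; delayMs?: number } = {}, now = 0): void {
    this.jobs.push({
      name,
      data,
      priority: opts.priority ?? 0,
      readyAt: now + (opts.delayMs ?? 0),
    });
  }

  // Pop the highest-priority job that is ready at `now` (FIFO within a priority).
  next(now: number): Job<T> | undefined {
    const ready = this.jobs.filter((j) => j.readyAt <= now);
    if (ready.length === 0) return undefined;
    ready.sort((a, b) => a.priority - b.priority); // stable sort keeps FIFO order
    const job = ready[0];
    this.jobs.splice(this.jobs.indexOf(job), 1);
    return job;
  }
}

const q = new MiniQueue<string>();
q.add("email", "welcome", { priority: 2 });
q.add("report", "monthly", { priority: 1, delayMs: 5000 }); // not ready yet
q.add("email", "reset", { priority: 1 });

const first = q.next(0);     // "reset": priority 1 and ready now
const second = q.next(0);    // "welcome": the delayed job is still waiting
const third = q.next(5000);  // "monthly": its delay has elapsed
```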

Key strengths:

  • Redis-backed: fast, atomic, and reliable
  • Job priorities and delayed/scheduled jobs
  • Rate limiting and throttling
  • Repeatable jobs (cron expressions)
  • Concurrency control per queue and per worker
  • Sandboxed processors for isolation
  • Flows (parent-child job dependencies)
  • Bull Board dashboard for monitoring
  • SDKs for Node.js, Python, Elixir, and PHP

Pricing:

  • Free (MIT open source). You pay only for Redis infrastructure.
  • Taskforce.sh: Optional paid monitoring dashboard by the BullMQ team

Limitations: You manage everything -- Redis, workers, scaling, monitoring. No built-in hosting or managed execution. Bull Board provides basic monitoring but is not comparable to the dashboards in Inngest or Trigger.dev. No built-in step functions or durable execution. Configuration for retries and backoff is manual.

Best when: You have the infrastructure expertise to manage Redis and workers, want zero vendor lock-in, need a proven queue at scale, and prefer open source. BullMQ is the right choice when you need full control and do not want to pay for a managed platform.


4. Temporal — Durable Execution for Mission-Critical Workflows

Best for: Enterprise teams building long-running, fault-tolerant business workflows that span days, weeks, or months.

Temporal is the enterprise standard for durable execution. Workflows are written as regular code (no state machines or YAML), and Temporal guarantees that they complete even if workers crash, networks fail, or deployments happen mid-execution. Each workflow automatically captures state at every step, so long-running processes resume exactly where they left off.

Used by Stripe, Netflix, Datadog, and hundreds of other companies for payment processing, order fulfillment, and data pipelines, Temporal is the most battle-tested option for mission-critical workflows. The tradeoff is complexity: Temporal has a significant learning curve and heavy infrastructure requirements for self-hosting.

Temporal raised $300M at a $5B valuation in 2026, reflecting strong enterprise demand for durable execution -- particularly for AI agent orchestration.

Key strengths:

  • Durable execution that survives crashes and deployments
  • Workflows that run for minutes, hours, days, or months
  • Multi-language SDKs: Go, Java, TypeScript, Python, PHP
  • Workflow versioning for safe deployments
  • Child workflows, signals (external input), and queries (inspect state)
  • Built-in retry policies per activity
  • Temporal UI for workflow inspection
  • Strong enterprise adoption and community

Pricing:

  • Self-hosted: Free (open source, Apache 2.0)
  • Temporal Cloud Essentials: From ~$200/month base, $50 per million actions for the first 5M
  • Temporal Cloud Business/Enterprise: Custom pricing with SLAs and commitments

Actions are the primary billing unit: starting workflows, recording heartbeats, sending signals, and other operations each count as actions.

Limitations: Significant learning curve -- the programming model is different from traditional task queues. Self-hosting requires PostgreSQL or Cassandra plus Elasticsearch, which is heavy to operate. Temporal Cloud pricing is expensive for small teams. Overkill for simple background jobs like sending emails or resizing images.

Best when: You are building workflows that must complete reliably over long time periods, handle complex branching and compensation logic, or require enterprise-grade durability. Payment processing, order fulfillment, and multi-step AI agent orchestration are ideal use cases.
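The compensation logic mentioned above typically follows the saga pattern: each completed step registers an undo action, and a failure runs the undos in reverse order. A toy sketch in plain TypeScript, not Temporal's SDK, with made-up step names:

```typescript
// Saga-style compensation: on failure, undo completed steps in reverse order.
type Compensation = () => void;

interface SagaStep {
  name: string;
  run: () => void;
  undo: Compensation;
}

function runSaga(steps: SagaStep[], log: string[]): boolean {
  const done: SagaStep[] = [];
  for (const step of steps) {
    try {
      step.run();
      log.push(`ran:${step.name}`);
      done.push(step);
    } catch {
      // Roll back everything that completed, newest first.
      for (const completed of done.reverse()) {
        completed.undo();
        log.push(`undid:${completed.name}`);
      }
      return false;
    }
  }
  return true;
}

const log: string[] = [];
const ok = runSaga(
  [
    { name: "charge-card", run: () => {}, undo: () => {} },
    { name: "reserve-stock", run: () => {}, undo: () => {} },
    { name: "ship", run: () => { throw new Error("carrier down"); }, undo: () => {} },
  ],
  log,
);
```

Temporal's value is making each `run` and `undo` durable and retryable across crashes; the control flow itself stays this simple.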


5. QStash — Serverless HTTP Messaging

Best for: Serverless architectures that need reliable message delivery without persistent connections or infrastructure.

QStash from Upstash takes a fundamentally different approach. Instead of workers pulling jobs from a queue, you publish a message with a destination URL, and QStash delivers it via HTTP with automatic retries. No connections to maintain, no workers to scale, no Redis to manage. It works with any serverless platform -- Vercel, Cloudflare Workers, AWS Lambda, Netlify -- because the delivery mechanism is plain HTTP.

This simplicity makes QStash the easiest background job solution to adopt. If your app can receive an HTTP request, it can process QStash messages. The tradeoff is that QStash is a messaging layer, not a full workflow engine. There are no step functions, no durable execution, and no complex orchestration patterns.
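The publish-and-deliver loop can be sketched as follows. This is a simplified model of the pattern, not QStash's implementation; the transport is injected so the retry and dead-letter logic is visible without real HTTP, and the URL and payload are made up.

```typescript
// Sketch of HTTP-based delivery: POST the message to the destination URL,
// retry on non-2xx responses, and dead-letter after the final failure.
type Transport = (url: string, body: string) => number; // returns an HTTP status

function deliver(
  url: string,
  body: string,
  send: Transport,
  maxAttempts = 3,
): { delivered: boolean; attempts: number; deadLettered: boolean } {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = send(url, body);
    if (status >= 200 && status < 300) {
      return { delivered: true, attempts: attempt, deadLettered: false };
    }
    // A real service would wait with backoff between attempts.
  }
  return { delivered: false, attempts: maxAttempts, deadLettered: true };
}

// Simulated endpoint that fails twice, then succeeds.
let calls = 0;
const flakyEndpoint: Transport = () => (++calls < 3 ? 503 : 200);
const ok = deliver("https://example.com/api/process", '{"orderId":42}', flakyEndpoint);

// Simulated endpoint that never recovers: the message is dead-lettered.
const alwaysDown: Transport = () => 500;
const dead = deliver("https://example.com/api/process", "{}", alwaysDown);
```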

Key strengths:

  • HTTP-based delivery (works with any platform)
  • No persistent connections or infrastructure to manage
  • Automatic retries with configurable backoff
  • Scheduling and delayed delivery
  • Dead letter queues for failed messages
  • Content-based routing and URL groups
  • Part of the Upstash ecosystem (Redis, Kafka, Vector)
  • Generous free tier

Pricing:

  • Free: 1,000 messages/day (increased from 500 at GA launch)
  • Pay-as-you-go: $1 per 100,000 messages, no monthly commitment
  • Pro: $40/month with better per-message rates and advanced features
  • Enterprise (Prod Pack): $200/month with uptime SLA

Limitations: HTTP-only delivery model -- no in-process workers or direct queue consumption. Single message at a time (no native batch processing). Higher latency than in-process queues like BullMQ. No step functions or workflow orchestration. Limited to simple job patterns (fire-and-forget, delayed, scheduled).

Best when: You run on serverless infrastructure, need reliable message delivery without managing queues or workers, and your background jobs are relatively simple (send email, process webhook, trigger API call). QStash is the lowest-friction option for serverless apps.


6. Quirrel — Lightweight Serverless Scheduling

Best for: Simple cron jobs and delayed tasks in serverless environments, teams wanting a lightweight open-source scheduler.

Quirrel is an open-source job scheduling service designed specifically for serverless environments. It supports delayed jobs, recurring jobs (cron), and fanout patterns. The API is simple: schedule a job with a URL and a delay or cron expression, and Quirrel calls your endpoint when it is time.
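The core bookkeeping of such a scheduler is small. A toy sketch of the delayed-job half (endpoint paths are made up, and Quirrel's real API and cron handling are richer than this):

```typescript
// Toy scheduler: store (url, runAt) pairs; on each tick, fire every due job.
interface Scheduled {
  url: string;
  runAt: number; // ms timestamp at which to call the endpoint
}

class MiniScheduler {
  private pending: Scheduled[] = [];
  fired: string[] = []; // URLs "called" so far (recorded instead of real HTTP)

  enqueue(url: string, delayMs: number, now: number): void {
    this.pending.push({ url, runAt: now + delayMs });
  }

  // Called periodically; fires each job whose time has come.
  tick(now: number): void {
    const due = this.pending.filter((j) => j.runAt <= now);
    this.pending = this.pending.filter((j) => j.runAt > now);
    for (const job of due) this.fired.push(job.url);
  }
}

const s = new MiniScheduler();
s.enqueue("/api/send-digest", 60_000, 0); // in one minute
s.enqueue("/api/cleanup", 10_000, 0);     // in ten seconds
s.tick(10_000); // fires /api/cleanup only
s.tick(60_000); // fires /api/send-digest
```

A recurring (cron) job is the same idea plus re-enqueueing itself with the next computed run time after each firing.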

Quirrel was acquired by Netlify in 2022, and the technology powers Netlify Scheduled Functions. The open-source project remains available and maintained, but it is in maintenance mode -- no new feature development, just stability and bug fixes for existing users.

Key strengths:

  • Simple API for delayed and recurring jobs
  • Cron expression support
  • Works with Next.js, Nuxt, Remix, and other frameworks
  • Quirrel UI for job inspection
  • Open source and self-hostable
  • Low resource footprint

Pricing:

  • Free (open source, self-hosted)
  • Netlify Scheduled Functions (built on Quirrel technology) available through Netlify plans

Limitations: Maintenance mode -- no new features since the Netlify acquisition. Smaller community than BullMQ or Temporal. TypeScript SDK only. No step functions, durable execution, or advanced orchestration. Not suitable for complex multi-step workflows.

Best when: You need simple cron jobs or delayed task scheduling in a serverless app and want a lightweight, self-hosted solution. If you are on Netlify, Scheduled Functions (built on Quirrel) are available natively.


How to Choose

| Use Case | Recommended | Why |
|---|---|---|
| Multi-step workflows with retries | Inngest | Step functions with per-step retry and durable execution |
| TypeScript serverless background jobs | Trigger.dev | Managed execution, long-running support, no double-billing |
| Self-hosted Node.js queue | BullMQ | Free, Redis-backed, proven at billions of jobs/day |
| Mission-critical long-running workflows | Temporal | Durable execution, enterprise-grade, multi-language |
| Simple serverless message delivery | QStash | HTTP-based, no infrastructure, works everywhere |
| Lightweight cron/delayed jobs | Quirrel | Simple, open source, low overhead |
| AI agent orchestration | Inngest or Temporal | Durable execution prevents expensive LLM call retries |
| Budget-conscious startup | BullMQ or QStash | Free OSS queue or pay-per-message with no commitment |
| Python-heavy stack | Temporal or Celery | Multi-language SDKs (Temporal) or Python-native (Celery) |

Decision Framework

Start with three questions:

  1. Do you want to manage infrastructure? If no, choose Inngest, Trigger.dev, or QStash. If yes, BullMQ or Temporal give you full control.

  2. How complex are your workflows? Simple fire-and-forget jobs work fine with QStash or BullMQ. Multi-step workflows with sleep, branching, and compensation need Inngest or Temporal.

  3. What is your budget? BullMQ is free (plus Redis costs). QStash starts at $1/100K messages. Inngest and Trigger.dev have generous free tiers. Temporal Cloud starts around $200/month -- justified for mission-critical workflows but expensive for simple jobs.
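A quick way to sanity-check the budget question is to plug your expected volume into the list prices quoted in this guide. The numbers below are illustrative only; verify against current pricing pages.

```typescript
// Back-of-envelope cost checks using the prices quoted above.
function qstashMonthlyUsd(messages: number): number {
  return (messages / 100_000) * 1; // $1 per 100K messages, pay-as-you-go
}

function withinInngestFreeTier(runsPerMonth: number): boolean {
  return runsPerMonth <= 50_000; // free tier quoted in the pricing section
}

const millionMsgsCost = qstashMonthlyUsd(1_000_000); // 1M messages/month
const smallAppIsFree = withinInngestFreeTier(40_000); // well under 50K runs
```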


Methodology

This guide evaluates background job services based on six criteria:

  1. Reliability. Does the system guarantee job completion? How does it handle failures, crashes, and retries?
  2. Developer experience. API design, SDK quality, documentation, and time-to-first-job.
  3. Scalability. How does the system perform under load? Concurrency control, rate limiting, and horizontal scaling.
  4. Observability. Built-in monitoring, logging, and debugging tools. Can you inspect a failed run and understand why it failed?
  5. Pricing. Free tiers, per-unit costs, and total cost at different scales (1K, 100K, 1M jobs/month).
  6. Ecosystem. Language support, framework integrations, and community size.

Pricing and feature details were verified against official documentation and pricing pages as of March 2026. Pricing may change -- always check the provider's website for current rates.


Building background jobs into your app? Compare Inngest, Trigger.dev, BullMQ, Temporal, and more on APIScout -- pricing, features, and developer experience across every major background job platform.
