Hatchet rethinks task queues for the AI era. While Celery and BullMQ were designed for simple job processing, Hatchet handles the complex patterns that AI workloads demand: long-running agent loops, fan-out to multiple specialized models, rate limiting per API provider, and durable state that survives worker crashes. The PostgreSQL foundation means no Redis dependency and built-in ACID guarantees for task state.
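Per-provider rate limiting is worth making concrete. The plain-Python sketch below is illustrative only, not Hatchet's API: it shows the token-bucket idea behind keeping one independent limit per upstream API provider, with hypothetical provider names and limits.

```python
import time
from dataclasses import dataclass, field


@dataclass
class TokenBucket:
    """Refills `rate` tokens per second, capped at `capacity`."""
    rate: float
    capacity: float
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def try_acquire(self) -> bool:
        # Top up the bucket based on elapsed time, then spend one token if possible.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# One bucket per API provider, each with its own (hypothetical) limit.
limits = {
    "openai": TokenBucket(rate=3.0, capacity=3.0, tokens=3.0),
    "anthropic": TokenBucket(rate=1.0, capacity=1.0, tokens=1.0),
}


def dispatch(provider: str) -> bool:
    """Admit a task only if that provider's bucket has a token left."""
    return limits[provider].try_acquire()
```

A queue built this way can keep pulling tasks for one provider while another provider's tasks wait out their limit, rather than stalling the whole queue.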
Workflows are defined as code using TypeScript or Python SDKs with step-based composition. Each step can have its own retry policy, timeout, and concurrency limit. The platform supports cron scheduling, event-driven triggers, and webhook-based execution. The web dashboard provides real-time visibility into queue depths, worker health, step-level traces, and error rates — essential for debugging complex AI pipelines in production.
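Hatchet's SDKs express this declaratively; the plain-Python sketch below only illustrates the underlying idea of step-based composition with per-step retry policies. All names here are illustrative, not Hatchet API, and timeouts, concurrency limits, and triggers are omitted for brevity.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    """One workflow step carrying its own retry policy."""
    name: str
    run: Callable[[dict], dict]
    retries: int = 0


def run_workflow(steps: list[Step], payload: dict) -> dict:
    """Run steps in order, feeding each step's output to the next and
    retrying a failed step up to its own `retries` count."""
    state = payload
    for step in steps:
        for attempt in range(step.retries + 1):
            try:
                state = step.run(state)
                break
            except Exception:
                if attempt == step.retries:
                    raise  # retries exhausted for this step
    return state


# Hypothetical RAG-style pipeline: chunk a document, then "embed" the chunks
# (embedding is faked as string lengths to keep the sketch self-contained).
pipeline = [
    Step("chunk", lambda s: {"chunks": s["text"].split(". ")}, retries=2),
    Step("embed", lambda s: {"vectors": [len(c) for c in s["chunks"]]}, retries=2),
]

result = run_workflow(pipeline, {"text": "First sentence. Second sentence"})
```

In the real SDKs each step is a decorated function with its retry, timeout, and concurrency settings attached at the definition site, and completed step outputs are persisted so a crashed worker resumes rather than restarts.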
Hatchet is MIT licensed with 2,800+ GitHub stars and backing from Y Combinator's W24 batch. Self-hosting via Docker or Kubernetes is fully supported, and Hatchet Cloud offers managed hosting with usage-based pricing. The project is particularly popular for RAG pipeline orchestration, multi-step LLM workflows, and GPU task scheduling, where workloads are bursty and benefit from intelligent queue management.