I work 996 at a startup I love. Nine to nine, six days a week. The kind of schedule where "side project" means the 20 minutes before you pass out.
Event-driven systems exist — SNS, SQS, Kafka, Inngest. But every time I wired one up, I ended up rebuilding the same things on top: fan-out logic, dead letter queues, replay, filtering, ordering. I wanted a single declarative layer where I define an event, subscribe tasks, publish, and everything else just works.
So I forked Trigger.dev and built Fanout.sh. If you want to see this project go live, star the repo.
How it works
Three concepts. That's the entire API for 80% of use cases.
Define an event with a typed schema. Payloads are validated at publish time on the client and again on the server.
Subscribe tasks to that event. Each subscriber gets its own durable run with retries, logging, and full tracing. Add a subscriber next week, remove one next month — nothing else changes.
Publish with one call. All subscribers run. Done.
```typescript
import { z } from "zod";
// `event` and `task` are imported from the Fanout.sh SDK

// 1. Define
export const orderCreated = event({
  id: "order.created",
  schema: z.object({
    orderId: z.string(),
    amount: z.number(),
    customerId: z.string(),
  }),
});

// 2. Subscribe
export const sendReceipt = task({
  id: "send-receipt",
  on: orderCreated,
  run: async (payload) => {
    await sendEmail(payload.customerId, payload.orderId);
  },
});

// 3. Publish
await orderCreated.publish({
  orderId: "order-123",
  amount: 500,
  customerId: "cust-1",
});
```
What's built in
Everything you'd normally have to wire up yourself:
- Dead letter queue — failed events go to the DLQ, not into the void. Inspect, retry, or discard from the dashboard.
- Event replay — bad deploy on Friday? Replay Monday's events through the fixed code. Any time range.
- Content-based filtering — subscribers declare what they want. Filtering happens server-side before the run is created.
- Ordering guarantees — events with the same key are processed in the order they arrived.
- Scatter-gather — fan out an event and wait for every subscriber to finish. Zero compute cost while waiting.
- Rate limiting — per-event, Redis-backed sliding window. No external rate limiter needed.
- Consumer groups — Kafka-style load balancing. Within a group, only one task receives each event.
- Full persistence — every event stored in ClickHouse with payload, fan-out count, tags, and timestamps.
How it compares
Vercel just launched Queues, and the comparison is simple:
Vercel gives you the primitive. I needed the system.
What's next
Fanout.sh is a fork of Trigger.dev (Apache 2.0). Trigger.dev doesn't have pub/sub primitives, so I forked it to add them at the engine level — not as a plugin.
The obvious question: why fork instead of contributing upstream? Because pub/sub changes the programming model. It touches the scheduler, the run engine, the dashboard, the CLI. It's not a PR — it's a direction. And I'm not sure it's Trigger.dev's direction.
So the real question is: do I push to get this merged upstream, or do I build Fanout.sh into its own thing? I genuinely don't know yet. What I do know is that the system works and I want people to use it.
If you have opinions on this, or want to self-host it today, reach out.
GitHub: github.com/giovaborgogno/fanout.sh
Deep dive
Here's what each feature looks like in code.
Dead letter queue
```typescript
export const orderCreated = event({
  id: "order.created",
  schema: orderSchema,
  dlq: { enabled: true }, // default — can disable per event
});
```
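To make the failure path concrete, here is a minimal sketch of the retry-then-park behavior a DLQ-enabled event implies: retry a subscriber a few times, then move the event to the DLQ instead of dropping it. The `Delivery` type, `deliverWithDlq` helper, and `maxAttempts` default are illustrative assumptions, not Fanout.sh's actual delivery loop.

```typescript
type Delivery = { eventId: string; payload: unknown };

// Attempt delivery up to maxAttempts; on final failure, park the event
// in the DLQ so it can be inspected, retried, or discarded later.
async function deliverWithDlq(
  delivery: Delivery,
  run: (payload: unknown) => Promise<void>,
  dlq: Delivery[],
  maxAttempts = 3,
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await run(delivery.payload);
      return true; // delivered successfully
    } catch {
      if (attempt === maxAttempts) dlq.push(delivery); // park, don't drop
    }
  }
  return false;
}
```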
Content-based filtering
```typescript
export const highValueOrders = task({
  id: "high-value-orders",
  on: orderCreated,
  filter: { amount: [{ $gte: 1000 }] },
  run: async (payload) => {
    await alertVipTeam(payload);
  },
});
```
Filtering happens server-side before the run is created.
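As a sketch of what that server-side check could look like, here is a tiny evaluator for the operator syntax used above. The `$gte`/`$lte`/`$eq` names mirror the filter example; the OR-within-array semantics and the code itself are my assumptions, not Fanout.sh internals.

```typescript
type Op = { $gte?: number; $lte?: number; $eq?: unknown };
type Filter = Record<string, Op[]>;

// A payload matches when every filtered field satisfies at least one
// of its operators (array entries treated as OR, an assumption here).
function matches(filter: Filter, payload: Record<string, unknown>): boolean {
  return Object.entries(filter).every(([field, ops]) =>
    ops.some((op) => {
      const value = payload[field];
      if (op.$gte !== undefined) return typeof value === "number" && value >= op.$gte;
      if (op.$lte !== undefined) return typeof value === "number" && value <= op.$lte;
      if (op.$eq !== undefined) return value === op.$eq;
      return false;
    }),
  );
}
```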
Ordering guarantees
```typescript
await orderCreated.publish(
  { orderId: "123", amount: 500, customerId: "cust-1", items: [...] },
  {
    orderingKey: "cust-1", // sequential per customer
    idempotencyKey: "order-123", // prevent duplicate publishes
  }
);
```
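One common way to implement per-key ordering is to chain deliveries for the same `orderingKey` onto a promise queue, so events for one customer run strictly in sequence while other keys proceed in parallel. This is an illustrative sketch of the technique, not the engine's actual scheduler:

```typescript
// Tail promise per ordering key; each new job starts only after the
// previous job for the same key has settled.
const tails = new Map<string, Promise<void>>();

function enqueueOrdered(key: string, job: () => Promise<void>): Promise<void> {
  const tail = (tails.get(key) ?? Promise.resolve()).then(job, job);
  tails.set(key, tail);
  return tail;
}
```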
Scatter-gather
Fan out and wait for every subscriber to finish:
```typescript
const result = await orderCreated.publishAndWait({
  orderId: "order-123",
  amount: 500,
  customerId: "cust-1",
});

// result.results: { [taskSlug]: { ok, output, error } }
```
Built on Trigger.dev's waitpoint system: the parent task suspends (zero compute cost) and resumes when all children complete.
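Given the result shape in the comment above, consuming the per-subscriber outcomes might look like this. The `SubscriberResult` type and `summarize` helper are assumptions built from that `{ ok, output, error }` shape, not part of the documented API:

```typescript
type SubscriberResult = { ok: boolean; output?: unknown; error?: string };

// Turn the per-subscriber result map into human-readable status lines,
// e.g. for logging after a scatter-gather completes.
function summarize(results: Record<string, SubscriberResult>): string[] {
  return Object.entries(results).map(([slug, r]) =>
    r.ok ? `${slug}: ok` : `${slug}: failed (${r.error ?? "unknown"})`,
  );
}
```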
Rate limiting
Per-event, enforced server-side:
```typescript
export const userActivity = event({
  id: "user.activity",
  schema: userActivitySchema,
  rateLimit: { limit: 100, window: "1m" },
});
```
Backed by Redis sliding window. Returns 429 with Retry-After header when exceeded.
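For intuition, here is an in-memory sketch of a sliding-window check. The real limiter is Redis-backed (a sorted set of timestamps per key, trimmed and counted, would be the usual shape); this class and its API are illustrative only:

```typescript
class SlidingWindow {
  private hits = new Map<string, number[]>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed; false means the caller
  // would respond 429 with a Retry-After header.
  allow(key: string, now = Date.now()): boolean {
    // Drop timestamps that have slid out of the window.
    const recent = (this.hits.get(key) ?? []).filter((t) => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```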
Consumer groups
Kafka-style load balancing. Within a group, only one task receives each event:
```typescript
export const processorA = task({
  id: "processor-a",
  on: userActivity,
  consumerGroup: "activity-processors",
  run: async (payload) => { /* ... */ },
});

export const processorB = task({
  id: "processor-b",
  on: userActivity,
  consumerGroup: "activity-processors",
  run: async (payload) => { /* ... */ },
});

// This one is NOT in the group — receives ALL events
export const analytics = task({
  id: "analytics",
  on: userActivity,
  run: async (payload) => { /* ... */ },
});
```
Mix fan-out and load balancing in the same event. No config files, no broker setup.
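The dispatch rule above can be sketched as: ungrouped subscribers receive every event, while each consumer group contributes exactly one member per event. Round-robin selection within a group is an assumption here; the actual balancing strategy isn't specified.

```typescript
type Sub = { id: string; group?: string };

// For one event, return the subscriber ids that should receive it.
// `cursors` carries round-robin state per group across calls.
function recipients(subs: Sub[], cursors: Map<string, number>): string[] {
  const out: string[] = [];
  const groups = new Map<string, Sub[]>();
  for (const s of subs) {
    if (s.group) {
      groups.set(s.group, [...(groups.get(s.group) ?? []), s]);
    } else {
      out.push(s.id); // ungrouped: plain fan-out, everyone gets it
    }
  }
  for (const [group, members] of groups) {
    const i = cursors.get(group) ?? 0;
    out.push(members[i % members.length].id); // exactly one member per event
    cursors.set(group, i + 1);
  }
  return out;
}
```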