Your finance team asks "what does Stripe say we earned yesterday, and does it match our database?" every morning. You point them at Stripe Reports, they point you at the column they cannot find, and you end up writing a 30-line script that exports the prior day's charges, joins on your customer table, and posts the diff to a Slack channel. You name the script recon.py, you set up a system cron on a deploy box, and three months later the box has been replaced, the cron is gone, the finance team is upset, and nobody has alerted on the 18 days the diff would have flagged.
This post is the longer version of that story plus three more like it (custom dunning, multi-timezone monthly invoicing, webhook-replay loops), and the external-cron pattern that fires all four from one dashboard. The goal is not to replace Stripe. Stripe stays the processor and the source of truth. The goal is to give the four loops your backend actually runs against the Stripe API a real scheduler.
If you want the short version: build one auth-protected route per loop, point Crontap at it, set the cron expression and IANA timezone the customer-facing contract requires, and add a failure alert to email or webhook (Slack / Discord / Telegram). The same shape works for all four patterns below.
What Stripe gives you, and what the four loops actually need
Before reaching for an external scheduler, it is worth being explicit about what Stripe already covers, because the gaps are smaller than the marketing pages suggest and larger than most teams realize.
- Stripe Reports + Sigma export queryable views of your historical data. They are the right answer for accounting exports, year-end audits, and ad-hoc analysis. They are not job runners. There is no "fire this query every weekday at 06:00 Europe/London and post the result to Slack" primitive.
- Stripe Smart Retries retry failed card charges on Stripe's machine-learned cadence. It works for SaaS subscriptions on Stripe Billing. It cannot carry custom rules per customer tier, cannot pause for known-bad regions, and cannot be the only retry channel if your contract promises three attempts in 48 hours.
- Stripe webhook delivery retries with exponential backoff for up to 3 days. That is delivery to your endpoint. If your downstream consumer (analytics warehouse, FP&A pipeline, customer-facing in-app history) was the one that dropped the event, Stripe cannot replay it for you.
- Stripe rate limits are 100 read and 100 write per second in live mode. That ceiling does not constrain a scheduler. It does constrain how the scheduled job batches work once it starts.
Inside those gaps live the four loops below.
Pattern A: daily reconciliation against yesterday's charges
A B2B SaaS team running on Stripe Connect wants the finance lead to start every morning with a single Slack message: "Yesterday's Stripe net revenue across all 12 connected accounts, broken out by account, plus any anomalies." The cron expression that ships this is 5 0 * * *, fired in Europe/Berlin because the finance lead is in Berlin and the day boundary should match her calendar, not UTC.
The job inside the route is straightforward. Pull yesterday's charges and application_fees via the Stripe API, group by connected account, compare against the same window in your own ledger, and post a digest. If the deltas exceed a threshold (more than 0.5 percent off), the digest gets a WARN prefix and a link to the per-charge breakdown. If they match, the digest is one line of green checkmarks.
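A minimal sketch of the digest step, assuming the per-account totals have already been pulled from Stripe and from your own ledger; the `buildDigest` helper, the shape of `Totals`, and the account names are illustrative, not part of any Stripe API:

```typescript
// Map of connected account id -> net revenue in cents for yesterday's window.
type Totals = Record<string, number>;

// Compare Stripe's numbers against your ledger's and produce the Slack lines.
// Deltas beyond thresholdPct (default 0.5%) get the WARN prefix.
function buildDigest(stripe: Totals, ledger: Totals, thresholdPct = 0.5): string[] {
  const lines: string[] = [];
  for (const [account, stripeNet] of Object.entries(stripe)) {
    const ledgerNet = ledger[account] ?? 0;
    const delta = stripeNet - ledgerNet;
    const pct = stripeNet === 0 ? 0 : Math.abs(delta / stripeNet) * 100;
    lines.push(
      pct > thresholdPct
        ? `WARN ${account}: stripe=${stripeNet} ledger=${ledgerNet} (${pct.toFixed(2)}% off)`
        : `OK   ${account}: ${stripeNet}`
    );
  }
  return lines;
}
```

The route itself only enqueues; a worker runs `buildDigest` over the two pulls and posts the lines to Slack.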
The reasons this is not a Stripe Reports query:
- Joins live in your database, not Stripe. Reconciliation is interesting because you compare Stripe to your own record of what should have happened. Reports cannot reach your tenants table; your backend can.
- Per-tenant slicing is your business logic. Stripe Connect knows accounts; it does not know the customer-tier dimension your ARR dashboard uses. The diff that catches "tier B's December cohort is short" lives in your code.
- Schema versioning is your problem. When you add a coupon, an invoice line item, a tax behavior, or a refund flow, the recon definition shifts with it. A script you control changes faster than a Sigma view someone in finance ships every quarter.
What Stripe still does well alongside this loop: the source numbers. The script reads from Stripe; it does not become the truth. Stripe stays the processor. The recon is the trip wire that catches integration bugs in your code, not a billing replacement.
The Crontap fields:
- URL: `https://api.yourapp.com/jobs/stripe/recon`
- Method: `POST`
- Headers: `Authorization: Bearer <CRON_SECRET>`, plus a custom `X-Tenant-Group` if you fan out per environment.
- Cadence: `5 0 * * *`
- Timezone: `Europe/Berlin`
- Failure alerts: email and a Slack webhook. The Slack alert fires on any 4xx or 5xx, with the response body in the payload, so a `503` from your API box wakes someone up.
Fix this in 60 seconds with Crontap. Free tier available. No credit card. Schedule your first job.
Pattern B: custom dunning beyond Stripe automatic retries
An indie SaaS billing $40k MRR on Stripe Billing notices that Smart Retries recovers about 38 percent of failed charges, and that the lost 62 percent is where the team's attention should sit. The contract promises customers "we will try three times across 7 days before we deactivate", and the team layers that on top of Smart Retries instead of replacing it.
The cron pattern is layered: a 0 */4 * * * schedule that fires the day-1 wave (every four hours through the first 24 after a failure), a 0 9,21 * * * schedule that fires the day-2-and-3 wave (every 12 hours), and a 0 9 * * * schedule that fires the day-4-through-7 wave (once a day at 09:00 in the customer's timezone, with an email blast on day 7 before the deactivation lands at midnight local on day 8).
Each schedule fires the same route, `POST /jobs/stripe/dunning`, with a query parameter selecting which wave is firing. The route reads the wave, picks up the customers in the matching state from your subscriptions table, and either calls `paymentIntents.confirm` for the next attempt or sends the day-7 email. Stripe Smart Retries continues running in parallel; if Smart Retries recovers a charge between waves, your route sees the cleared state and skips that customer.
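The wave-dispatch logic can be sketched as a pure filter over your subscriptions table; `FailedCustomer`, the wave names, and the hour boundaries are illustrative, not Stripe objects:

```typescript
type Wave = "day1" | "day2-3" | "day4-7";

// Illustrative row shape from your own subscriptions table.
interface FailedCustomer {
  id: string;
  hoursSinceFailure: number;
  cleared: boolean; // true if Smart Retries already recovered the charge
}

// Pick the customers the firing wave is responsible for. Customers that
// Smart Retries recovered in the meantime are skipped, never re-charged.
function selectForWave(customers: FailedCustomer[], wave: Wave): FailedCustomer[] {
  return customers.filter((c) => {
    if (c.cleared) return false;
    if (wave === "day1") return c.hoursSinceFailure < 24;
    if (wave === "day2-3") return c.hoursSinceFailure >= 24 && c.hoursSinceFailure < 72;
    return c.hoursSinceFailure >= 72 && c.hoursSinceFailure < 168; // day 4-7
  });
}
```

The three Crontap schedules all hit the same route with `?wave=day1`, `?wave=day2-3`, or `?wave=day4-7`; only the filter branch differs.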
Three things this pattern does that Stripe Smart Retries alone does not:
- Custom rules per tier. Tier A gets the full 3-attempt schedule. Tier B gets one retry the next day and then the cancellation. Tier C is on annual and never enters the retry flow at all. Smart Retries does not branch on a custom dimension.
- Pause for known-bad windows. A region with a card processor outage gets the wave skipped for that day; the retry attempt would have failed anyway and consumed a card-network attempt count. Your route can read a feature flag; Smart Retries cannot.
- Visibility into the chosen wave. Each Crontap fire shows up in the dashboard with its response body. If the day-2 wave sent 47 retries and got 12 success / 35 still-failing, that is one click to see and one Slack alert if the success rate dropped below the threshold.
Stripe Smart Retries is still doing its work. The custom layer fires alongside it. The two together recover more than either alone, in the data the team has gathered across two billing cycles.
The cadence floor for the day-1 wave is 4 hours, comfortably above Crontap's 1-minute minimum on Pro. The cadence floor matters less here than the reliability and the IANA timezone, which the day-7 email needs.
Pattern C: monthly invoice runs in the customer's billing timezone
A multi-tenant B2B platform invoices on the 1st of every month. The customer expects the invoice on their local 1st, not UTC's. For a tenant in Australia/Sydney, the 1st starts 17 to 19 hours before it does in San Francisco, depending on the season. Sending one global fire at midnight UTC means Sydney customers get their invoice on the 1st at around 11:00 their time, while San Francisco customers get it on the last day of the previous month at around 16:00 their time. Both sides perceive the timing as wrong.
The fix is one Crontap schedule per timezone group, not per tenant. The cron expression is the same on every schedule (5 0 1 * *, the 1st of the month at 00:05), and the timezone field is the only thing that varies. A schedule named invoice-run-au carries Australia/Sydney, a schedule named invoice-run-eu carries Europe/Berlin, a schedule named invoice-run-us-east carries America/New_York, and so on. Each schedule fires POST /jobs/stripe/invoice-run with the timezone group in the body. The route fetches the tenants in that group from the database and runs the invoice creation flow against the Stripe API for each.
DST is the scheduler's problem. Crontap evaluates 5 0 1 * * in Europe/Berlin against Berlin local time, which means the schedule lands at the same wall-clock minute every month, summer or winter, without any per-zone offset math. Adding a new timezone group is a single POST to the Crontap API. Adding a new tenant inside an existing group is a row in your tenants table.
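The DST point can be made concrete with nothing but the standard `Intl` API: the same Berlin wall-clock minute sits at two different UTC offsets across the year, which is exactly the math the scheduler absorbs. The `utcOffsetMinutes` helper below is illustrative:

```typescript
// Compute a zone's UTC offset in minutes at a given instant, using only
// Intl.DateTimeFormat. Positive means the zone is ahead of UTC.
function utcOffsetMinutes(at: Date, timeZone: string): number {
  const parts = Object.fromEntries(
    new Intl.DateTimeFormat("en-US", {
      timeZone,
      hourCycle: "h23",
      year: "numeric",
      month: "2-digit",
      day: "2-digit",
      hour: "2-digit",
      minute: "2-digit",
      second: "2-digit",
    })
      .formatToParts(at)
      .map((p) => [p.type, p.value])
  );
  // Reinterpret the zone-local wall clock as if it were UTC, then diff.
  const local = Date.UTC(
    Number(parts.year), Number(parts.month) - 1, Number(parts.day),
    Number(parts.hour), Number(parts.minute), Number(parts.second)
  );
  return (local - at.getTime()) / 60_000;
}
```

In January `Europe/Berlin` reports +60 (CET) and in July +120 (CEST); a `5 0 1 * *` schedule pinned to the zone lands at 00:05 local either way, with no offset table in your code.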
The reasons per-IANA timezone matters here:
- VAT cutoffs. EU customers' invoices need to land in the same VAT period the underlying transactions sit in. A 23:55 UTC fire lands on the 1st in Berlin and on the 31st in London, opening a per-jurisdiction edge case that does not exist if you fire on the local 1st.
- Statement clarity. Customers reconcile against their own bank statements. An invoice timestamped on the 1st of their month matches the line item on their bank statement; one timestamped on the 31st does not.
- Customer-perceived punctuality. "It is the 1st, and the invoice arrived at 00:05" reads as a tight operation. "It is the 31st, and the invoice arrived at 16:00" reads as a draft.
The cost of the pattern is the number of schedules (one per timezone group, typically 5 to 10), not the number of tenants. The Stripe API calls fire from your route the same way they would have under a single global schedule; only the trigger point shifts.
Pattern D: webhook replay loops for downstream consumers
An AI feedback platform consumes Stripe events into a Postgres queue, pipes them into a warehouse, and surfaces the per-customer billing state in-app. The platform sees about 30k Stripe events per day across paid customers. Stripe's webhook delivery is reliable; the platform's downstream consumer was, until recently, less so. When the warehouse loader was deployed during a 90-second window, the events Stripe sent during that window were marked delivered (because the platform's webhook endpoint returned 200) but never made it to the warehouse rows.
The fix is a two-loop replay pattern. A hot loop fires every 5 minutes (*/5 * * * *) and replays any queue rows from the last 24 hours flagged as consumer_failed. A cold loop fires once a day at 06:00 UTC (0 6 * * *) and replays anything older than 24 hours that is still flagged. The hot loop catches the common case (a downstream consumer was briefly down). The cold loop catches the long tail (a consumer schema change marked rows failed retroactively). Each loop fires POST /jobs/stripe/webhook-replay with a window parameter; the route reads the parameter, picks up the failed rows, and re-pushes them through the consumer pipeline.
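The window selection inside the route reduces to one filter over the queue; `QueueRow`, the `consumer_failed` flag, and `ageHours` are illustrative names for the Postgres queue described above:

```typescript
// Illustrative row shape for the Postgres event queue.
interface QueueRow {
  eventId: string;
  status: "consumed" | "consumer_failed";
  ageHours: number;
}

// hot  = failed rows from the last 24 hours (the */5 loop)
// cold = failed rows older than that (the daily 06:00 loop)
function rowsForWindow(rows: QueueRow[], window: "hot" | "cold"): QueueRow[] {
  return rows.filter((r) => {
    if (r.status !== "consumer_failed") return false;
    return window === "hot" ? r.ageHours <= 24 : r.ageHours > 24;
  });
}
```

Both Crontap schedules hit the same route; only the `window` parameter in the body differs.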
If the hot loop ever needs to fire faster, Crontap supports cadences down to every 1 minute on Pro. The team has not needed it in the year the loop has been running; 5 minutes is the right cadence for the consumer's reliability profile.
What Stripe webhook delivery does and does not cover:
- Stripe retries deliveries to your endpoint with exponential backoff up to 3 days. If your endpoint returned a 5xx, Stripe will try again. If your endpoint returned a 200 because it queued the event but the queue's downstream consumer failed later, Stripe's retry path does not know that.
- Stripe Dashboard webhook replay is a manual UI action per event. It works for one-off recovery, not for a steady-state replay loop.
- Stripe webhook event history is queryable for 30 days via the API. Your replay loop reads from your own queue, not Stripe's history, because your queue carries the consumer state.
The pattern shape is the same as the other three. One route per job type. One Crontap schedule per cadence. One auth header on the request. One failure alert.
The shared shape (one route per loop, all four patterns)
All four patterns above use the same wiring. The differences are the cadence, the timezone, and the body the route does the work on.
Each route follows the same recipe:
```typescript
import { type Request, type Response } from "express";

export async function reconCron(req: Request, res: Response) {
  // Reject anything without the shared secret Crontap sends in its header.
  if (req.headers.authorization !== `Bearer ${process.env.CRON_SECRET}`) {
    return res.status(401).json({ error: "Unauthorized" });
  }
  // Hand the real work to a background worker and return fast.
  await enqueueReconJob({ window: "yesterday" });
  return res.status(200).json({ ok: true });
}
```
Three things to call out:
- The route returns 200 quickly. The actual reconciliation runs in a background worker. Crontap is your scheduler, not your queue. A 90-second sync that hangs the HTTP connection open is the wrong shape; a 200 in 50ms followed by a queued job is the right one.
- The bearer is read from an environment variable. Crontap stores the bearer encrypted; rotation is a config change in Crontap and a deploy of the new env var, in either order.
- The route is `POST` and idempotent on the Stripe-API side via your own idempotency keys. If Crontap retries on 5xx, the route can run twice without double-charging anyone.
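One way to sketch that idempotency discipline, assuming stripe-node's per-request `idempotencyKey` option; the `dunningIdempotencyKey` helper and the key recipe are illustrative:

```typescript
import { createHash } from "node:crypto";

// Deterministic idempotency key: the same customer, wave, and calendar day
// always produce the same key, so a retried fire of the route replays the
// same Stripe request instead of creating a second charge attempt.
function dunningIdempotencyKey(customerId: string, wave: string, day: string): string {
  return createHash("sha256")
    .update(`dunning:${customerId}:${wave}:${day}`)
    .digest("hex")
    .slice(0, 32);
}
```

In stripe-node the key is passed as a request option, e.g. `stripe.paymentIntents.confirm(id, {}, { idempotencyKey })`; Stripe then deduplicates any replay of the same key for 24 hours.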
The Crontap schedule is the same shape across all four patterns. Headers carry the bearer. Cadence and timezone are the per-pattern fields. Failure alerts wire to email and a webhook (Slack / Discord / Telegram); the failure payload carries status code, duration, and response body, so a 503 wakes someone up with the context they need to triage.
FAQ
Does this replace Stripe automatic retries?
No. It augments them. Stripe Smart Retries continues to run on its machine-learned cadence; the external dunning loop adds custom rules per tier, per-region pauses, and the day-7 email. The two run in parallel. If Smart Retries recovers a charge between waves of the custom loop, the custom loop sees the cleared state and skips.
What is the shortest interval Crontap supports?
Every 1 minute on paid plans. Free tier available for slower cadences. The reconciliation, dunning, and monthly invoice patterns above run at hourly, daily, or monthly cadences, well above the floor. Pattern D's hot loop runs every 5 minutes today; if a faster loop ever shipped, Crontap supports it.
How do I keep the route safe from accidental hits?
The bearer header is the gate. The route reads the bearer from an environment variable and rejects requests without a match. Crontap stores the bearer encrypted and never displays it again after the schedule is saved. Rotating the bearer is two steps: deploy the new env var, then update the schedule's header field.
Will the route hit Stripe's rate limit?
Stripe is 100 read and 100 write per second per account in live mode, and the scheduler does not consume that budget. The route inside the scheduler does. Batching with jitter is the standard fix for the dunning and invoice loops; a queue worker that processes 50 customers per second sits well inside the ceiling.
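A minimal sketch of that batching-with-jitter idea; `batchPlan`, the 50-per-second default, and the 250ms jitter ceiling are illustrative choices, not Stripe requirements:

```typescript
// Build a plan that keeps a worker under a per-second ceiling: chunks of
// `perSecond` items, one chunk per second, plus a little random jitter so
// parallel workers do not all fire on the same tick.
function batchPlan<T>(
  items: T[],
  perSecond = 50,
  jitterMs = 250
): { batch: T[]; delayMs: number }[] {
  const plan: { batch: T[]; delayMs: number }[] = [];
  for (let i = 0; i < items.length; i += perSecond) {
    plan.push({
      batch: items.slice(i, i + perSecond),
      delayMs: (i / perSecond) * 1000 + Math.floor(Math.random() * jitterMs),
    });
  }
  return plan;
}
```

A worker then sleeps to each `delayMs` before firing its chunk of Stripe calls; 50 per second sits at half the write ceiling, leaving headroom for the rest of your integration.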
Can the same pattern work with Lemon Squeezy or Paddle?
Yes. The pattern is shape-only: the route picks the API, the schedule fires the route. See the cron jobs for Lemon Squeezy use case page for the Lemon Squeezy variant. The four loops above translate to any billing processor that exposes the underlying primitives via API.
What about Stripe Reports for the reconciliation case?
Stripe Reports is the right answer for the audit and accounting export use cases. It is not the right answer for "reconcile against my own database every morning and post the diff", because the join lives outside Stripe. The recon loop and Reports complement each other; they are not alternatives.
References
- Stripe API: Charges
- Stripe Reports and Sigma
- Stripe Smart Retries (automatic retries)
- Stripe webhook delivery and replay
- Stripe rate limits
Related on Crontap
- Cron jobs for Stripe (reconciliation and dunning) use case. The use-case-first guide for Stripe-driven scheduling.
- Scheduled billing retries and dunning. The category page covering retry waves, monthly invoice runs, and dunning across processors.
- Cron jobs for Lemon Squeezy. The same four loops translated to Lemon Squeezy's API surface.
- Running weekly payroll on external cron for multi-tenant SaaS. The per-tenant timezone pattern this post's Pattern C builds on.
- Shopify Admin API recurring checkout sync. The same shape, applied to Shopify Admin API daily reconciliation.
- Crontap alternatives compared. Side-by-side feature comparison across cron services.
