It is 4:55pm on Friday. You open the Stripe Dashboard, filter to the last seven days, and start clicking. Disputes tab. Refunds tab. Eyeball the reasons. Click into the three you do not recognize. Copy the dollar amounts into a Notion page nobody else opens. Type a one-line "what happened this week" header above it. Save. Slack the link to your accountant on Monday. Repeat next week.
Most early-stage SaaS teams do some version of this ritual every Friday. Stripe's own products solve the parts on either side of it, just not the part in the middle. Smart Disputes submits evidence on your behalf. Stripe Workflows for Slack pings you in real time when a single dispute opens. Neither of them writes the narrative: "this week was expensive because four customers all said the same thing about the trial-end email, and one of them was the second time we lost a dispute to the same person."
That narrative is the part the finance team actually reads. Here is the pattern that writes it for you, on a clock, in your timezone, for about ten cents a month.
If you just want the short version: a Friday 5pm cron in your finance team's timezone fires a backend route that pulls last week's disputes and refunds from the Stripe API, hands the array to GPT with a citation-forcing JSON schema, and posts an HTML one-pager to Resend. Crontap owns the clock, your backend owns the API + LLM call, OpenAI writes the narrative.
Why the Stripe dashboard does not solve this
Stripe ships a lot of dispute and refund tooling. None of it writes a weekly summary because that was never the point of the product.
Smart Disputes focuses on the contested charge itself. It assembles the receipt, the IP address, the AVS check, and submits a counter-evidence packet on your behalf. Helpful for win rate. Silent on "what theme did this week's disputes share."
Stripe Workflows for Slack posts an alert when a dispute opens, when a refund is issued, when a payout settles. Real-time per-event. The finance team does not want twenty Slack pings on Friday afternoon; they want one paragraph on Monday morning.
The Reports tab in the Dashboard exports flat CSVs. You get every column you asked for and zero of the synthesis. Eyeballing twenty rows for a pattern is feasible. Eyeballing a hundred is not.
Sigma (Stripe's SQL layer) lets you write the query if you know SQL and want to keep paying for query runtime. The output is a table, not a narrative.
The shape that fits "I want a paragraph on Monday morning" is the boring shape: pull the data, pass it through an LLM with a strict schema, render an HTML email. The cron that fires it is the only piece that has to live somewhere outside your app.
The pattern
Three independent boxes, each doing one thing.
```
Crontap → HTTPS POST → /finance/weekly-stripe-digest → Stripe API → OpenAI → Resend email
```

The cron fires on Friday at 17:00 in America/New_York (or whatever your finance team's actual timezone is). The route hits two Stripe REST endpoints with `created[gte]` set to seven days ago. It pages through results until the response stops returning `has_more`. It hands the assembled array to GPT with a JSON schema that forces every claim back to a real Stripe object ID. It templates the JSON into HTML and posts one email through Resend.
You can swap any of the three boxes without touching the others. Move from Resend to SendGrid, the cron does not care. Move from OpenAI to Anthropic, the schedule does not change. Change the cadence to twice a week, the backend does not need a redeploy.
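This swap-ability is easiest to keep if the route body is written against three injected functions, one per box. A minimal sketch (the `buildDigestHandler` name and `Deps` shape are ours, not from any library):

```typescript
// Hypothetical composition: each box is an injected async function,
// so swapping Resend for SendGrid only changes `sendEmail`, and
// swapping OpenAI for Anthropic only changes `summarize`.
type Deps = {
  fetchWeek: () => Promise<{ disputes: unknown[]; refunds: unknown[] }>;
  summarize: (data: { disputes: unknown[]; refunds: unknown[] }) => Promise<string>;
  sendEmail: (html: string) => Promise<void>;
};

function buildDigestHandler(deps: Deps) {
  return async () => {
    const data = await deps.fetchWeek();     // Stripe
    const html = await deps.summarize(data); // OpenAI + template
    await deps.sendEmail(html);              // Resend (or anything else)
  };
}
```

The route handler that Crontap hits is just `buildDigestHandler({...})` with the real implementations wired in.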
Step 1: pull a week of disputes and refunds
The Stripe API has clean filters for both. The only catch is pagination: a busy account easily returns more than the default page size, so the route needs to follow has_more until it stops.
```typescript
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// Follow has_more until Stripe stops returning pages.
async function listAll<T>(
  fn: (params: any) => Promise<Stripe.ApiList<T>>,
  base: any,
): Promise<T[]> {
  const out: T[] = [];
  let starting_after: string | undefined;
  while (true) {
    const page = await fn({ ...base, limit: 100, starting_after });
    out.push(...page.data);
    if (!page.has_more) break;
    starting_after = (page.data[page.data.length - 1] as any).id;
  }
  return out;
}

const sevenDaysAgo = Math.floor(Date.now() / 1000) - 7 * 24 * 3600;

const [disputes, refunds] = await Promise.all([
  listAll((p) => stripe.disputes.list(p), { created: { gte: sevenDaysAgo } }),
  listAll((p) => stripe.refunds.list(p), { created: { gte: sevenDaysAgo } }),
]);
```

Both lists carry every field you need: the dispute reason, the amount, the linked charge ID, the customer ID. Refunds carry the reason text and the amount. You can pass `expand: ["data.charge"]` if you also want the customer email pulled in for the narrative.
Step 2: hand it to GPT with a citation-forcing schema
This is the step where it goes wrong if you are not careful. Ask GPT for a "summary" with no constraints and you will get a paragraph that reads convincingly, contains a number that is not in the data, and references a customer that does not exist. Three reports later your CFO stops trusting the email and you are back to eyeballing the dashboard.
The fix is structured outputs plus a citation-forcing rule: every claim in the narrative has to reference a dispute_id or refund_id that exists in the input array. Then you validate it server-side before you send the email.
```typescript
import OpenAI from "openai";

const openai = new OpenAI();

const schema = {
  type: "object",
  required: ["headline_numbers", "themes", "repeat_offenders", "narrative"],
  additionalProperties: false,
  properties: {
    headline_numbers: {
      type: "object",
      required: ["dispute_count", "dispute_total_cents", "refund_count", "refund_total_cents"],
      additionalProperties: false,
      properties: {
        dispute_count: { type: "integer" },
        dispute_total_cents: { type: "integer" },
        refund_count: { type: "integer" },
        refund_total_cents: { type: "integer" },
      },
    },
    themes: {
      type: "array",
      items: {
        type: "object",
        required: ["label", "count", "cited_ids"],
        additionalProperties: false,
        properties: {
          label: { type: "string" },
          count: { type: "integer" },
          cited_ids: { type: "array", items: { type: "string" } },
        },
      },
    },
    repeat_offenders: {
      type: "array",
      items: {
        type: "object",
        required: ["customer_id", "incident_count", "cited_ids"],
        additionalProperties: false,
        properties: {
          customer_id: { type: "string" },
          incident_count: { type: "integer" },
          cited_ids: { type: "array", items: { type: "string" } },
        },
      },
    },
    narrative: { type: "string" },
  },
};

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  response_format: {
    type: "json_schema",
    json_schema: { name: "weekly_finance_digest", schema, strict: true },
  },
  messages: [
    {
      role: "system",
      content:
        "You write weekly finance digests. Every claim in `narrative` and every entry in `themes` and `repeat_offenders` must reference real ids from the input. Do not invent ids. Do not invent dollar amounts. Use commas, colons, or parentheses, not em-dashes.",
    },
    {
      role: "user",
      content: JSON.stringify({ disputes, refunds }),
    },
  ],
});

const result = JSON.parse(completion.choices[0].message.content!);

// Validate every cited ID, in themes and repeat_offenders alike,
// against what Stripe actually returned.
const knownIds = new Set([...disputes.map((d) => d.id), ...refunds.map((r) => r.id)]);
for (const entry of [...result.themes, ...result.repeat_offenders]) {
  for (const id of entry.cited_ids) {
    if (!knownIds.has(id)) throw new Error(`GPT cited unknown id ${id}`);
  }
}
```

The `strict: true` flag tells the OpenAI API to enforce the schema. The post-extraction validation closes the last hole: even if GPT returns a syntactically valid response, every cited ID must trace back to something Stripe actually returned. If validation fails, the route returns a 500 and Crontap retries the next fire. You will know about it in five minutes, not three weeks.
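A cheap extra guard on top of the ID check: recompute the headline totals from the raw Stripe objects and compare them to GPT's `headline_numbers` before sending. A sketch (the `headlineNumbers` helper is ours; `amount` is the cent-denominated field both Stripe disputes and refunds carry):

```typescript
// Recompute headline numbers server-side so GPT's `headline_numbers`
// can be cross-checked before the email goes out.
function headlineNumbers(
  disputes: { amount: number }[],
  refunds: { amount: number }[],
) {
  return {
    dispute_count: disputes.length,
    dispute_total_cents: disputes.reduce((sum, d) => sum + d.amount, 0),
    refund_count: refunds.length,
    refund_total_cents: refunds.reduce((sum, r) => sum + r.amount, 0),
  };
}
```

If GPT's counts or totals disagree with these, treat it the same as a bad citation: throw, 500, let the scheduler retry.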
Step 3: render and email
The narrative + headline numbers fit into a small HTML template. Resend's API is a single POST with subject, from, to, html. Total round trip is well under a second.
```typescript
import { Resend } from "resend";

const resend = new Resend(process.env.RESEND_API_KEY!);

await resend.emails.send({
  from: "[email protected]",
  to: ["[email protected]"],
  subject: `Stripe weekly: ${result.headline_numbers.dispute_count} disputes, ${result.headline_numbers.refund_count} refunds`,
  html: render(result),
});
```

The subject line carries the headline numbers so anyone scanning their inbox can decide whether to open it. The body has the narrative on top, the themes table next, the repeat-offenders list at the bottom, and a link back to the Stripe Dashboard for any ID someone wants to investigate.
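The `render` function is whatever template you like; a minimal sketch, assuming the digest shape from Step 2 (no inline styles, no escaping, just the layout described above):

```typescript
// Template the validated digest JSON into a small HTML one-pager:
// narrative first, themes table, repeat offenders, dashboard link.
function render(result: {
  narrative: string;
  themes: { label: string; count: number }[];
  repeat_offenders: { customer_id: string; incident_count: number }[];
}): string {
  const themeRows = result.themes
    .map((t) => `<tr><td>${t.label}</td><td>${t.count}</td></tr>`)
    .join("");
  const offenders = result.repeat_offenders
    .map((o) => `<li>${o.customer_id}: ${o.incident_count} incidents</li>`)
    .join("");
  return `
    <p>${result.narrative}</p>
    <table><tr><th>Theme</th><th>Count</th></tr>${themeRows}</table>
    <ul>${offenders}</ul>
    <p><a href="https://dashboard.stripe.com">Open the Stripe Dashboard</a></p>`;
}
```

In production you would HTML-escape the GPT-supplied strings before interpolating them.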
Worked example
A five-month-old SaaS with about $40k MRR ran one of these recently. The week ended with seven disputes ($1,247) and twelve refunds ($892).
The narrative GPT returned, lightly anonymized:
> This week saw 7 disputes ($1,247) and 12 refunds ($892), both up week over week. The dominant theme was trial-end confusion: 4 disputes (dp_3xx, dp_3yy, dp_3zz, dp_3aa, totalling $640) referenced not realising the trial converted. The same root cause appeared in 3 of the 12 refunds (re_3xx, re_3yy, re_3zz). One customer (cus_NXX) disputed for the second time in 60 days, pushing them into the repeat-offender list. Recommended action: tighten the trial-end email and consider a one-tap cancellation link 24 hours before conversion.
Every ID is real. Every dollar amount sums. The recommended action is the part the founder wrote next to it in the Notion page anyway.
The Crontap setup for this is one schedule:
- URL: `https://yourapp.com/finance/weekly-stripe-digest`
- Method: `POST`
- Headers: `Authorization: Bearer <CRON_SECRET>`
- Cadence: `0 17 * * 5` (Friday 17:00)
- Timezone: `America/New_York` (or wherever your finance person actually lives)
- Failure alert: email to the founder, fire on 4xx/5xx
The cadence and timezone are the only knobs that matter. The cron fires at 5pm New York time every Friday. If the Stripe API hiccups, Crontap retries. If the retry also fails, Crontap emails the founder while the failure is fresh.
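On the receiving end, the route should reject anything that does not carry the shared secret before it touches Stripe. A sketch (the `isAuthorized` name is ours; for production, prefer a constant-time comparison or your framework's auth middleware):

```typescript
// Check the Authorization header against the CRON_SECRET configured
// in the Crontap schedule above. Reject before doing any real work.
function isAuthorized(authHeader: string | undefined, secret: string): boolean {
  return authHeader === `Bearer ${secret}`;
}
```

In an Express-style handler that is one line: if `isAuthorized(req.headers.authorization, process.env.CRON_SECRET!)` is false, return a 401 and stop.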
Fix this in 60 seconds with Crontap. Free tier available. No credit card. Schedule your first job →
Cost math
The pipeline is cheap.
- Stripe API: free.
- OpenAI: ~50 disputes plus refunds per week is roughly 200 input tokens of context and 400 output tokens of narrative. On `gpt-4o-mini` at 2026 list prices, that is about $0.0008 per fire, or $0.04 per year of weekly runs.
- Resend: free tier covers 3,000 emails per month. One a week is rounding error.
- Crontap: the free tier covers one schedule at hourly cadence; weekly is well under that. If you also run a daily Stripe reconciliation, the Pro tier is $3.25 per month billed annually for unlimited schedules at minute cadence.
Total monthly cost: under a dollar for the whole pipeline, including Crontap Pro if you have other schedules running.
For comparison, Fleece AI and GAIA productize the same pattern. Both are real products with real teams. They charge between $50 and $200 per month per workspace and ship features beyond just the weekly digest. If you want it managed, they are good options. If you want to keep your data inside your own backend and own the prompt, the DIY pattern is what you are reading.
What to do when it fails
Three failure modes show up in production.
Stripe pagination drops mid-run. Network blip, expired key, rate limit on a busy account. The route should treat any has_more page that 500s as a hard fail: throw, return 500, let Crontap retry. Do not partially email a digest with half the data. The next fire (a few minutes later) will succeed and the missing data is still in Stripe.
OpenAI returns a 429. Rare for one weekly call but possible if your org is hot. Catch it, sleep a few seconds inside the route, retry once. If the retry also fails, return 500 and let the scheduler handle the next attempt.
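That catch-sleep-retry-once logic fits in a small helper; a sketch (the `retryOnce` name is ours):

```typescript
// Retry an async call once after a short sleep. If the second attempt
// also fails, the error propagates, the route 500s, and the scheduler
// owns the next attempt.
async function retryOnce<T>(fn: () => Promise<T>, delayMs = 3000): Promise<T> {
  try {
    return await fn();
  } catch {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    return await fn();
  }
}
```

Wrap only the OpenAI call in it; the Stripe pagination should stay hard-fail as described above.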
Resend 5xx. The narrative is in memory and you have nowhere to put it. Two options: store the rendered HTML in your database before calling Resend, then mark "sent" only on success, or accept that the next fire will regenerate. For a weekly digest that takes one second to regenerate, regenerating is fine.
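The first option is a few lines if you already have a key-value table; a sketch with an in-memory `Map` standing in for the real table (names are ours):

```typescript
// Persist the rendered HTML keyed by ISO week before calling Resend,
// and flip `sent` only after the send succeeds. A Map stands in for
// a database table here.
type DigestRow = { html: string; sent: boolean };
const db = new Map<string, DigestRow>();

async function sendDigest(
  weekKey: string,
  html: string,
  send: (html: string) => Promise<void>, // the Resend call in production
) {
  db.set(weekKey, { html, sent: false });
  await send(html);
  db.set(weekKey, { html, sent: true });
}
```

If the send throws, the row stays `sent: false` and the next fire can either resend the stored HTML or regenerate from scratch.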
The meta-loop is the part that actually matters. If your Friday digest does not arrive, finance notices in 24 to 72 hours. Crontap notices in 5 minutes and emails you while the failure is fresh. That is the difference between "we missed last week" and "we caught it before the weekend."
When to skip this pattern
A few cases where weekly is the wrong cadence:
- You want real-time alerts on every dispute. Use Stripe Workflows for Slack. The two are complementary; the digest is the synthesis on top of the alerts.
- You want to fight every dispute aggressively. Smart Disputes is the right tool. The digest is for "what did this teach us", not "did we win that one."
- You want a SQL-first analytics workspace. Sigma plus a BI tool covers it.
For everything else (the Friday afternoon ritual that no one looks forward to), the cron-shaped pattern is the boring shape that works.
FAQ
What about Stripe Workflows for Slack?
Different layer. Workflows pings you per-event in real time. The digest synthesises a week of events into one paragraph. Most teams run both: Workflows for the urgent ones, this digest for the Monday morning catch-up.
What about Fleece AI or GAIA?
Both productize the same pattern and charge $50 to $200 a month. If you want it managed and do not want to maintain a small backend route, they are good options. If you already have a backend (you do, you ship a SaaS), the DIY pattern costs about a dollar a month and you keep the prompt + the data inside your own infrastructure.
Can I run this on a personal Stripe account?
Yes. The Stripe API is identical between Standard, Express, and personal accounts. The only thing that changes is which keys you put in the env. The dispute and refund endpoints are available on every account type.
How do I make sure GPT does not fabricate dispute IDs?
Two layers. First, structured outputs with a strict JSON schema force the response shape and the field types. Second, a server-side validator checks that every cited ID exists in the input array before the email goes out. If the validator throws, the route 500s, Crontap retries, and you find out about the validation failure before the email arrives in finance's inbox.
Why every Friday at 5pm specifically?
Two reasons. First, your finance person actually wants the email before they sign off, not Monday morning when they are catching up on email. Second, anything that breaks gets discovered Friday afternoon, which is recoverable; anything that breaks Monday morning becomes a fire drill.
Can I add a Notion page or a Slack channel to the same digest?
Yes. The route can fan out: send the email, also POST the rendered HTML to a Notion page (Notion API supports pages.create with content blocks), also post a summary to Slack via webhook. The cron does not care how many destinations the route hits; it just fires the route.
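The Slack leg of that fan-out is one webhook POST, and the payload builder is pure; a sketch reusing the digest shape from Step 2 (the `slackPayload` name is ours):

```typescript
// Build a Slack incoming-webhook payload from the validated digest.
function slackPayload(result: {
  narrative: string;
  headline_numbers: { dispute_count: number; refund_count: number };
}) {
  const { dispute_count, refund_count } = result.headline_numbers;
  return {
    text: `Stripe weekly: ${dispute_count} disputes, ${refund_count} refunds\n${result.narrative}`,
  };
}
```

Posting it is a single `fetch` to your incoming-webhook URL with `JSON.stringify(slackPayload(result))` as the body and a `Content-Type: application/json` header.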
References
- Stripe Disputes API
- Stripe Refunds API
- Stripe Smart Disputes
- Stripe Workflows for Slack
- OpenAI structured outputs
Related on Crontap
- Scheduled billing retries and dunning use case. The use-case-first guide for billing teams running recurring jobs against Stripe.
- Recurring reports and email digests use case. The pattern this post fits into.
- Stripe reconciliation cron. The daily-cadence sibling: nightly reconciliation against your books.
- OpenAI scheduled jobs and sentiment pipeline. The sibling pattern for AI batches that drain a queue, with the same cron + handler split.
