
Guides · Jan 27, 2026

Ghost for content, external cron for the clock: scheduling the creator backend

Ghost ships an editor, Members, Newsletter, and webhooks, but it does not ship a job scheduler. Most creators end up running a small custom backend alongside Ghost. Here is the external cron pattern that drives the backend's scheduled work without bringing Redis into a creator-scale stack for one scheduled job.

You picked Ghost because the editor is fast, the themes are clean, and the Members feature meant you did not have to build email subscriptions yourself. Then your audience grew. They started paying for a custom thing your Ghost site cannot do alone: a chess + arena game with weekly payouts, or a coaching cohort that needs reminders, or a generator backend that talks to OpenAI on a schedule. Ghost runs the public site beautifully. The clock for the rest is on you. This post walks through the pattern most creators converge on: keep Ghost for content, run a small custom backend, and let an external scheduler hold the schedule.

If you want the short version: keep Ghost for the public site and the Members primitives. Stand up a small Node, Rails, or Python backend for the work Ghost cannot do. Expose one HTTPS route per scheduled job, check a bearer token in the handler, and point Crontap at it. You get every-minute cadence on Pro, per-IANA timezones, per-job failure alerts, and one dashboard across every cron the creator stack runs.

What Ghost ships out of the box

Ghost is, in the team's own words, "a publishing platform". The product surface is the editor, the themes, Members, Tiers, Newsletters, Tags, and the Content and Admin APIs. It is excellent at the thing it does and intentionally narrow about everything else.

What Ghost does not ship is a job scheduler. There is no Ghost Cron tab, no equivalent of WordPress Cron, no per-tier scheduled hook. The platform fires events (a new member signs up, a post publishes, a payment lands), and you can listen to those events through webhooks (documented in the Ghost Webhooks docs). What you cannot do is say "every Tuesday at 14:00 Europe/Berlin, run this code". That part is yours to build.
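To make the event side concrete, here is a hedged sketch of a receiver for Ghost's member.added webhook. Ghost nests the record under member.current; the handler name, the trimmed payload type, and the returned shape are illustrative, not a Ghost API.

```typescript
// Minimal shape of a Ghost member webhook payload (trimmed to the
// fields this sketch uses; the real payload carries more).
type GhostMemberWebhook = {
  member: { current: { id: string; email: string; status: string } };
};

// Hypothetical handler the backend runs when Ghost fires member.added,
// e.g. to mirror the member into the game backend's own user table.
export function onMemberAdded(payload: GhostMemberWebhook) {
  const m = payload.member.current;
  return { memberId: m.id, email: m.email, paid: m.status === "paid" };
}
```

The point is that the webhook side needs no scheduler at all: Ghost pushes, the backend reacts.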

For most creators, that is the right design. The work that lives "on a clock" is not Ghost's domain. It is the creator's custom backend, the third-party API the creator polls, the digest the creator wants to send next to the Newsletter cadence. Ghost holds the content. Something else holds the schedule.

The common creator pattern: Ghost plus a custom backend

Past a certain creator size, the stack tends to look like this:

  1. Ghost. The public blog, the paid Members tier, the Newsletter, the editor. Ghost runs on Ghost(Pro), a Digital Ocean droplet, a Cloudron install, or wherever the creator deploys it.
  2. Custom backend. A Node, Rails, or Python service that runs the work Ghost cannot represent. A chess matchmaking endpoint, a leaderboard recalculation, an arena payout job, an OpenAI summarization, a Stripe reconciliation, a member-segment build that feeds the next newsletter.
  3. A glue layer. Ghost webhooks fire into the backend on member events. The backend reads from Ghost via the Admin API when it needs the canonical view of who is paying.

The two halves complement each other. Ghost handles the content and the subscription mechanics. The backend handles everything else. The seam between them is HTTPS in both directions: webhooks one way, API reads the other.
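For the API-read direction of the seam, the Admin API authenticates with a short-lived JWT built from the integration's Admin API key (the "id:secret" string from Ghost's integration settings). A minimal sketch of that token, following the format in Ghost's Admin API docs (HS256, kid set to the key id, aud "/admin/"):

```typescript
import crypto from "node:crypto";

// Build a short-lived JWT for the Ghost Admin API from an Admin API key
// of the form "id:secret". Mirrors Ghost's documented token format:
// HS256, header kid = key id, audience "/admin/", 5-minute lifetime.
export function ghostAdminToken(adminApiKey: string): string {
  const [id, secret] = adminApiKey.split(":");
  const b64url = (obj: object) =>
    Buffer.from(JSON.stringify(obj)).toString("base64url");
  const now = Math.floor(Date.now() / 1000);
  const header = b64url({ alg: "HS256", typ: "JWT", kid: id });
  const payload = b64url({ iat: now, exp: now + 300, aud: "/admin/" });
  const signature = crypto
    .createHmac("sha256", Buffer.from(secret, "hex")) // secret is hex-encoded
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${signature}`;
}
```

The backend then reads with `Authorization: Ghost <token>` against routes like `/ghost/api/admin/members/`; the official @tryghost/admin-api package wraps the same thing if you would rather not hand-roll it.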

What the seam does not handle is "fire this work at 04:00 every Sunday in Europe/Berlin". That is where the question of "do I add a scheduler?" comes in.

Why creators should not add Redis for one scheduled job

The first instinct, especially if the backend is Node, is to reach for a queue. BullMQ is the obvious one. The setup walkthrough takes you 20 minutes, the API is clean, and the cron syntax is built in. You install Redis, you add the BullMQ scheduler, you write the job, and now you have a clock.

This is fine if the creator already has a queue for other reasons (background jobs, image processing, retry policies, fan-out workloads). It's heavy if the creator does not.

What Redis costs in this shape:

  • Another moving part. A Redis instance to run, monitor, back up, version-bump, and pay for. The smallest managed Redis on Render or Railway is in the $10-15/mo range; a self-hosted one is 30 minutes a quarter for security updates you keep meaning to apply.
  • Another deploy lifecycle. The cron expressions live in your backend code. To change a cadence, you redeploy. The same shape Vercel Cron has, except now you also pay the Redis bill for the privilege.
  • An entire queue concept for the schedule alone. If your only async work is "fire this at 04:00 Sunday", BullMQ is bringing a job runner, a retry layer, a delayed-job table, and a worker process so the schedule has a place to live.
  • A second alerting story. When BullMQ misses the Sunday job because Redis went down between Saturday at 22:00 and Sunday at 06:00, the silence is loud. You add Healthchecks.io as a dead-man pulse, which is its own service.

For a creator with 1-3 scheduled jobs, the cost-to-value ratio of a Redis-backed queue is rough. The lighter answer is to keep the backend stateless on the schedule axis and put the clock somewhere else.

External cron hitting the custom backend API

The shape is the same one teams use for Render, Railway, Heroku, and any other long-running container backend. The schedule lives outside the deployment. The backend exposes one HTTPS route per scheduled job, the route checks a bearer token, and an external scheduler fires the route on cadence.

Crontap (cron)  →  HTTPS POST  →  https://api.your-creator-backend.com/crons/<job>  →  the actual work

That's it. Ghost stays Ghost. The custom backend stays a normal HTTP service. The schedule is one line in a separate dashboard.

Step 1: Add the route to the creator backend

Your backend already exposes routes. Add one more for each scheduled job. In a Node + Express service, that looks like:

import { type Request, type Response } from "express";
import { runWeeklyPayouts } from "./payouts"; // your actual job logic

export async function weeklyPayoutsCron(req: Request, res: Response) {
  if (req.headers.authorization !== `Bearer ${process.env.CRON_SECRET}`) {
    return res.status(401).json({ error: "Unauthorized" });
  }

  await runWeeklyPayouts();
  return res.status(200).json({ ok: true });
}

Three things to note:

  1. The route reads the Authorization header and refuses anonymous traffic. Without this, your cron URL is a public POST endpoint anyone can fire, which for a payouts route is roughly the worst case.
  2. The handler returns 200 quickly. If the actual work takes more than a few seconds, push it onto a background task and return 200 from the route. You do not need a queue to do this; a simple setImmediate or an in-process job spawn is enough for a creator-scale workload.
  3. The endpoint is independent of Ghost. Ghost is not in the request path. Your backend hits the Ghost Admin API only when the job needs the canonical member list, the post payload, or the tier metadata.
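Point 2 can be sketched as a tiny helper. fireAndForget is illustrative, not a library API; the mechanics are plain Node.

```typescript
// Ack fast, work later: detach the job from the request lifecycle so the
// route can return 200 before the work finishes. Illustrative helper.
export function fireAndForget(
  job: () => Promise<void>,
  onError: (err: unknown) => void
): void {
  setImmediate(() => {
    // A rejected job must never crash the process; route it to the
    // error handler (a logger, a Discord webhook, etc.).
    job().catch(onError);
  });
}
```

In the handler that becomes `fireAndForget(runWeeklyPayouts, console.error)` followed by an immediate `res.status(200).json({ ok: true })`.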

Step 2: Generate and store the bearer token

Generate a long random string locally:

openssl rand -base64 32

Store it as CRON_SECRET in your backend's env: a .env file in development, your hosting provider's environment-variable settings in production, or a secrets manager (Doppler, 1Password CLI). Redeploy once so the variable is live. The Ghost site does not need this secret; only the custom backend reads it.
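One small hardening step worth the five lines: validate the secret at boot, so a missing env var surfaces on deploy instead of as 401s on Sunday at 04:00. requireCronSecret is an illustrative helper; the 32-character floor assumes the openssl command above (which emits 44 base64 characters).

```typescript
// Fail fast at startup if CRON_SECRET is absent or suspiciously short.
// Illustrative helper; call it once when the server boots.
export function requireCronSecret(
  env: Record<string, string | undefined> = process.env
): string {
  const secret = env.CRON_SECRET;
  if (!secret || secret.length < 32) {
    throw new Error("CRON_SECRET is missing or too short");
  }
  return secret;
}
```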

Step 3: Point Crontap at the backend URL

Head to Crontap and create a new schedule.

  1. URL. https://api.your-creator-backend.com/crons/weekly-payouts. Custom domain works the same.
  2. Method. POST.
  3. Headers. Add Authorization: Bearer <your CRON_SECRET>. Crontap stores the value encrypted; you do not see it again after saving.
  4. Cadence. Type plain English ("every Sunday at 04:00", "every minute", "every 15 minutes weekdays") or paste a cron expression. Crontap previews the next 5 fires inline so you can sanity-check before saving.
  5. Timezone. Pick the IANA zone the schedule actually runs in. The weekly payout at 04:00 in your members' timezone is one schedule with that timezone field set, no UTC math, no DST drift twice a year.
  6. Failure alerts. Add an integration: email / webhook (Slack / Discord / Telegram). Crontap fires on 4xx and 5xx with the response body and timing in the payload, so a Discord alert lands the moment the route returns 401 or 500.

Press Perform test to fire a real request before you trust the cadence. If the route returns 200, you are done. If you see 401, your bearer is mismatched. If you see 5xx, the route itself failed and the alerting just proved itself.

Fix this in 60 seconds with Crontap. Free tier available. No credit card. Schedule your first job →

Worked example: weekly payouts and per-minute chat updates

Here is a pattern from a creator running a chess + arena game backend alongside their Ghost site. The backend handles matchmaking, scoring, and a small chat layer. Ghost handles the public blog, the paid Members tier, and the email newsletter. The schedule needs:

  • A /crons/weekly-payouts job every Sunday at 04:00 in the creator's timezone, which calculates the previous week's arena results and triggers payouts.
  • A /crons/chat-tick job every minute, which prunes idle chat sessions, recomputes the public lobby count, and pushes a heartbeat to the front-end (the front-end uses WebSockets for real-time updates; the heartbeat is the side-channel for "is the game backend alive").
  • A /crons/leaderboard-reset job every hour at minute 0, which recomputes the public leaderboard from the previous hour's matches.
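To show how small these handlers stay, here is a hedged sketch of what the /crons/chat-tick body might do. The Session shape, pruneIdleSessions, and the five-minute idle window are all illustrative; the point is that each tick is a plain, idempotent function that tests cleanly.

```typescript
type Session = { id: string; lastSeen: number }; // lastSeen in epoch ms

// One chat tick: drop sessions idle past the window, then recompute the
// public lobby count from what survives. Hypothetical helper.
export function pruneIdleSessions(
  sessions: Session[],
  now: number,
  idleMs = 5 * 60_000
) {
  const live = sessions.filter((s) => now - s.lastSeen < idleMs);
  return { live, lobbyCount: live.length };
}
```

Because the function is idempotent, a duplicate fire or a manual re-run from the dashboard is harmless.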

Old shape (Redis + BullMQ): three jobs registered in the BullMQ scheduler, a Redis instance running 24/7, a worker process running 24/7 to consume the queue, and a Healthchecks.io dead-man pulse to cover the case where Redis goes down between Saturday and Sunday. Total moving parts: 3 (Redis, worker, dead-man).

New shape (external cron): three Crontap schedules pointing at three routes on the existing backend. No Redis. No worker process. No dead-man because Crontap's own failure alert is the dead-man.

| Schedule | URL | Cadence | Timezone |
|---|---|---|---|
| weekly-payouts | https://api.creator.com/crons/weekly-payouts | 0 4 * * 0 | Europe/Berlin |
| chat-tick | https://api.creator.com/crons/chat-tick | * * * * * | UTC |
| leaderboard-reset | https://api.creator.com/crons/leaderboard-reset | 0 * * * * | Europe/Berlin |

Three rows in one dashboard. The three handlers in the backend are normal HTTP routes that test cleanly with curl and supertest. The chat-tick fires every minute, which is the cadence floor on Crontap Pro. If the creator later wants every 30 seconds for a bigger arena event, that is a different conversation; for the everyday cadence, every minute is plenty for chat-prune semantics.

The weekly-payouts run is the one that matters for the creator's revenue. When it fails, the Discord alert lands within seconds and the creator can ssh into the backend, inspect the route, and re-fire from Crontap with one click. No "let me figure out which BullMQ job failed and why".

When the creator backend is mature enough to add a queue

External cron is the small answer. There comes a moment when it is not enough. The honest signal: you have more than ten async tasks, several of them have retries with backoff, several of them depend on each other, and "scheduled" is only one of the patterns you need.

At that point, BullMQ (Node), Sidekiq (Ruby), Celery (Python), or Ghost's own internal job framework are the right next step. The schedule moves into the queue. Crontap can stay as the trigger ("every minute, fire the queue's poll endpoint"), or you can fold the schedule into the queue's own cron support and pay the Redis bill because by now you wanted Redis for other reasons anyway.

The migration is not painful. Each Crontap schedule maps cleanly to a BullMQ repeating job. The HTTP route stays as a manual-fire path for testing. You add the queue when the queue earns its keep, not before.

For most creators, "before" never happens. Three to seven schedules covering the entire backend is normal. External cron handles all of them, the dashboard is one tab, and the cost is $0 on the free tier or $3.25/mo on Crontap Pro billed annually for unlimited schedules at every-minute cadence.

FAQ

Does Crontap call into Ghost at all, or just my backend?

Just your backend. Ghost itself does not need a scheduler in this pattern. The custom backend is the one with scheduled work; Ghost provides webhooks (member events, post events) and the Admin API for reads. If you do want Crontap to call a Ghost endpoint directly (a custom theme route, a Ghost integration webhook), the same Authorization-header pattern works.

Can I use this if my Ghost site is on Ghost(Pro) instead of self-hosted?

Yes. Ghost(Pro) does not change the pattern. The custom backend is the one Crontap calls; Ghost(Pro) hosts the content site. The webhook integration into your backend is the same on Ghost(Pro) as on self-hosted Ghost.

My creator backend is small. Do I really need a separate scheduler service?

For one scheduled job, you can run a single setInterval in your Node process or cron in your container's entrypoint. For two or more, the cost of "where is the cadence list, did Tuesday's run fire, and how do I know" makes the central dashboard cheaper than the script. The break-even is somewhere around the second cron.
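The single-job shape from the answer above fits in a few lines. recomputeLeaderboard is a stub so the sketch runs standalone:

```typescript
// Hypothetical hourly job; stubbed out so this sketch is self-contained.
async function recomputeLeaderboard(): Promise<void> {
  // read the hour's matches, write the leaderboard
}

const HOUR_MS = 60 * 60 * 1000;
const timer = setInterval(() => {
  recomputeLeaderboard().catch(console.error);
}, HOUR_MS);
timer.unref(); // don't keep the process alive just for the timer
```

The trade-offs are exactly the ones that make the central dashboard win at two-plus jobs: the cadence is invisible outside the process, a crash silently stops the clock, and there is no run history or alerting.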

Will Crontap survive my Ghost host going down?

Crontap fires the URL regardless of who hosts what. If the backend is down, Crontap's request returns a connection error or a 5xx, and the failure alert lands in the channel of your choice. Crontap's reliability is independent of Ghost's reliability and the backend's reliability; that's the wedge. The dead-man behaviour you'd otherwise wire through Healthchecks.io is built into the alert path.

Can I trigger a Ghost newsletter send from cron?

Ghost's Newsletter feature runs on its own send schedule when a post publishes. If you want a recurring "weekly digest" that is not tied to a single post (a custom roundup, a member-segment-only send, a programmatic compilation), the pattern is: Crontap fires your backend, your backend builds the digest from Ghost data via the Admin API, and the backend either delivers via email itself or schedules a post in Ghost via the Admin API to use Ghost's send pipeline.
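A hedged sketch of the "schedule a post in Ghost" branch. The endpoint shape (POST /ghost/api/admin/posts/ with ?source=html, status "scheduled", and published_at) follows Ghost's Admin API docs; buildDigestPost and the token helper are assumptions, and actually emailing the post additionally requires the `newsletter` query parameter naming your newsletter's slug.

```typescript
// Build the Admin API payload for a scheduled digest post. Illustrative
// helper; the fields follow Ghost's Admin API posts resource.
export function buildDigestPost(html: string, publishAt: Date) {
  return {
    posts: [
      {
        title: `Weekly digest ${publishAt.toISOString().slice(0, 10)}`,
        html,
        status: "scheduled",
        published_at: publishAt.toISOString(),
      },
    ],
  };
}

// Push the digest into Ghost so its own send pipeline delivers it.
// `token` is an Admin API JWT, not the cron bearer secret.
export async function scheduleDigest(ghostUrl: string, token: string, html: string) {
  const publishAt = new Date(Date.now() + 10 * 60 * 1000); // ~10 min out
  const res = await fetch(`${ghostUrl}/ghost/api/admin/posts/?source=html`, {
    method: "POST",
    headers: {
      Authorization: `Ghost ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildDigestPost(html, publishAt)),
  });
  if (!res.ok) throw new Error(`Ghost Admin API returned ${res.status}`);
  return res.json();
}
```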

Does this work with Ghost integrations like Zapier or Pipedream?

Yes. Crontap can fire a Zapier Catch Hook or a Pipedream HTTP source on cadence, and the integration then calls Ghost or your backend. For most creator stacks the direct path (Crontap to your backend) is shorter, but the chained path is sometimes useful when the integration platform is already doing the data shaping.
