
Monitoring · May 9, 2026

Cron heartbeats vs uptime monitors: which one do you actually need?

Heartbeats catch silent failures (the cron didn't fire); uptime monitors catch loud ones (the URL responded 500). They sound alike but catch entirely different failure modes. What follows is a four-criteria decision framework for picking one, both, or neither, when to pair them, and how to wire each in Crontap.

A customer pinged us last week: "do I need a heartbeat or an uptime monitor on my nightly invoice run?" The answer is "yes" — they catch different failure modes — but it's worth slowing down to spell out which failures, because picking the wrong one of the two lets the actual bad outcome slip through silently. This is the framework.

If you just want the short version: an uptime monitor watches whether a URL responds correctly. A heartbeat (dead-man check) watches whether a URL got pinged at all. The first catches loud failures (your app is returning 500s); the second catches the silent ones (the cron didn't fire, the worker is dead, the deploy turned the schedule off). For anything serious you usually want both, but on different things.

Definitions, used precisely

Uptime monitor

An external service polls your URL on a cadence (typically 1 to 5 minutes). It expects a 2xx response within a timeout. If a probe fails, after a small confirmation window the monitor flips to down and pages you. Recovery is automatic when the next probe succeeds. Examples: UptimeRobot, Better Stack, Pingdom, and Crontap's built-in uptime monitoring.

The check is active: somebody is actively poking the URL. The failure mode it catches is "the URL stopped responding correctly".
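The probe-then-confirm behaviour described above can be sketched as a tiny state machine. This is illustrative only (the `Monitor` class and `CONFIRM_FAILURES` threshold are made-up names, not Crontap's API): flip to down only after a short run of consecutive failed probes, and recover automatically on the first success.

```python
CONFIRM_FAILURES = 2  # confirmation window: require 2 failed probes in a row

class Monitor:
    def __init__(self):
        self.failures = 0
        self.state = "up"

    def record_probe(self, ok: bool) -> str:
        """Feed one probe result (True = 2xx within timeout); return state."""
        if ok:
            self.failures = 0
            self.state = "up"        # recovery is automatic on the next good probe
        else:
            self.failures += 1
            if self.failures >= CONFIRM_FAILURES:
                self.state = "down"  # page only after the confirmation window
        return self.state

m = Monitor()
assert m.record_probe(False) == "up"    # one blip: not paged yet
assert m.record_probe(False) == "down"  # confirmed: page
assert m.record_probe(True) == "up"     # next good probe: auto-recover
```

The confirmation window is why a single dropped packet doesn't page you at 3am.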

Heartbeat (dead-man check)

A worker — your own process, a cron job, a scheduled function — pings a unique URL on a cadence. If the heartbeat URL hasn't been pinged within the expected window, the heartbeat service pages you. Examples: Healthchecks.io, Cronitor, and the heartbeat features baked into UptimeRobot / Better Stack.

The check is passive: it waits for the work to phone home. The failure mode it catches is "the work didn't run at all". The absence of a ping is the signal.

The two patterns sound similar because both involve URLs and timeouts. The difference is which side initiates. An uptime monitor initiates from outside your stack. A heartbeat is initiated from inside.

Four decision criteria

When you're picking between them (or deciding to use both), four criteria do most of the work.

1. Is the failure loud or silent?

A loud failure is an HTTP 500 your endpoint actively returns. An uptime monitor sees it on the next probe. A silent failure is "the worker process crashed and isn't doing anything any more, but no request hit anything". An uptime probe doesn't see this — there's nothing to probe against. Heartbeats are the right tool: when the worker stops sending the periodic ping, the heartbeat service notices.

If your app's failure mode is "throws errors during requests", uptime is the answer. If your app's failure mode is "stops doing background work without making any noise", heartbeats are the answer.

2. Is the URL public-facing or internal?

Uptime monitors poll from outside your network. They work best on public URLs. Internal endpoints (behind a VPN, behind auth, on a private subnet) need either a public proxy or a different pattern. Heartbeats invert this: the worker pushes a ping outwards, so the work it's reporting on can be entirely private.

If you want to monitor an internal-only sync job, heartbeats are easier than punching a public URL through your VPN just so an uptime tool can poll it.

3. How frequently does the work run?

Uptime monitors are great at fast cadences (1-minute checks are typical). Heartbeats also work at fast cadences, but you lose information density: if a process pings every 30 seconds and then goes quiet, all you know is that the pings stopped. A 1-minute uptime probe continuously tells you how the URL is responding right now.

For sparse cadences (a daily / weekly / monthly job), heartbeats are usually clearer. The uptime equivalent — "did this URL respond once in the last 24 hours?" — is something most uptime tools don't surface natively, while heartbeats are designed exactly for this question.

4. Are you trying to catch "the request returns wrong" or "the request never happened"?

Restating criterion 1 in different words because it's easy to mix up. An uptime monitor catches outcomes that occur (the URL responded with the wrong thing). A heartbeat catches outcomes that don't occur (a request that was supposed to happen, didn't).

Most production systems have both failure modes. Most SaaS apps with both background jobs and a customer-facing API need both monitoring patterns.

Use heartbeats when…

  • You have a periodic background job that should fire on a calendar (daily reports, weekly invoices, hourly imports). The fear is the job silently stops firing.
  • The worker is in a place an external probe can't reach (private network, behind auth, ephemeral container). You need the worker to phone home rather than asking an external service to find it.
  • You care about whether the job ran, not what it returned. The job either ran or it didn't.
  • You want to tie the heartbeat to the actual work (ping only on success), so an alert fires exactly when the work stops succeeding.

The classic pattern: a Crontap schedule fires the work, and a second Crontap schedule (or the work itself, on success) hits a Healthchecks ping URL. Healthchecks knows the cadence; the silence of a missed ping is the alert. We wrote the external scheduler + dead-man check pattern up in detail.
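The "ping on success" half of that pattern is a few lines of wrapper code. A minimal sketch, assuming a Healthchecks-style ping URL (the URL and function names here are placeholders, not anything Crontap or Healthchecks prescribes):

```python
import urllib.request

# Placeholder: substitute your real Healthchecks ping URL.
PING_URL = "https://hc-ping.com/your-uuid-here"

def run_with_heartbeat(work, ping=None):
    """Run the job; ping the dead-man URL only if it succeeded.

    If `work` raises, no ping is sent, the expected window lapses,
    and the heartbeat service pages you. Silence is the signal.
    """
    if ping is None:
        ping = lambda: urllib.request.urlopen(PING_URL, timeout=10)
    work()   # raises on failure, so the ping below never happens
    ping()   # reached only on success
```

The ordering is the whole trick: the ping comes after the work, so a crash anywhere in the job means no ping, and no ping means an alert.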

Use uptime monitors when…

  • You have a public URL that should respond 200 within a timeout, on every probe.
  • The fear is that the URL starts returning 500 / timing out / serving the wrong content for everyone, not just for one job.
  • You want a continuous record of latency and error rate, not a single signal of "did it run?"
  • You want a chart of green and red bars over the last 90 days that you can show a customer support person.

The classic pattern: stick an uptime monitor on every public URL that customer-facing traffic reaches: https://yoursaas.com/, https://api.yoursaas.com/health, https://app.yoursaas.com/login. Crontap's built-in uptime feature is designed for this case: paste a URL, pick an interval, get a Resend email when it goes red.

Use both when…

The simplest test: do you have a customer-facing URL and a background job? Then yes, both.

The two patterns aren't competitive. They cover opposite halves of the failure space. Most production stacks have at least:

  • Uptime monitors on the customer-facing URLs (homepage, login, key APIs).
  • Heartbeats on the periodic backend work (cron jobs, scheduled imports, monthly billing reconciliation).

A useful test for whether your monitoring stack is balanced: imagine your worker process crashes silently. Will any of your monitors notice within an hour? If the answer is "no, our uptime monitors only catch URL-level failures", you're missing the heartbeat half.
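The rule a heartbeat service applies to catch that silent crash is simple: alert when the expected ping window has lapsed. A sketch of that dead-man check (the function name and grace values are illustrative, not any vendor's defaults):

```python
from datetime import datetime, timedelta

def is_overdue(last_ping: datetime, period: timedelta,
               grace: timedelta, now: datetime) -> bool:
    """The dead-man rule: page when the expected ping window has lapsed.

    `period` is the cadence the job should run on; `grace` absorbs
    normal jitter (slow runs, clock skew) before anyone gets paged.
    """
    return now - last_ping > period + grace

last = datetime(2026, 5, 9, 3, 0)   # nightly job last pinged at 03:00
daily, grace = timedelta(days=1), timedelta(hours=1)
assert not is_overdue(last, daily, grace, datetime(2026, 5, 10, 3, 30))  # within grace
assert is_overdue(last, daily, grace, datetime(2026, 5, 10, 4, 30))      # window lapsed
```

Note what's absent: no probe, no HTTP status, no target URL. The only input is the timestamp of the last ping, which is exactly why this check still works when the worker is unreachable or gone.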

The Crontap shape, both halves

Crontap supports both patterns natively, in the same dashboard:

  • Uptime monitoring lives at /uptime. Paste a URL, pick an interval (1 minute on Pro, 1 day on Free), get a 90-day chart and Resend alerts on the up→down transition.
  • Heartbeat scheduling lives at / (the schedule list). Use a Crontap schedule to ping your Healthchecks / dead-man URL on the cadence you expect the underlying work to run, or have your job hit the ping URL on success.

The single dashboard is the wedge: uptime and heartbeat live next to each other, alerts route through the same Resend pipeline, the same Pro tier unlocks both. If you're already using Crontap for one half, the other half is a one-click setup.

Common confusions

"My uptime monitor catches missed cron jobs." Only if the missed cron job is itself the URL being probed. The classic gotcha: you have a /run-daily-report URL that the cron hits. An uptime monitor on that URL says "the URL responds 200 to GET", which is independent of whether the cron actually fires the URL on schedule. The URL stays up; the cron silently stops; you never get paged.

"My heartbeat catches degraded performance." It doesn't. A heartbeat fires from your code. If your code is slow but still firing, the heartbeat looks healthy. Uptime monitors record duration on every probe, so they're the right place to watch for "the API is up but is now serving 8-second responses".
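Recording duration on every probe is what makes "up but slow" visible. A minimal sketch of one timed probe (`timed_probe` and the `fetch` callable are hypothetical names for illustration; a real `fetch` would wrap something like `urllib.request.urlopen` and return True for a 2xx within the timeout):

```python
import time

def timed_probe(fetch):
    """Run one probe via `fetch` and return (ok, seconds).

    A heartbeat can't see "up but slow": if the code still fires,
    the heartbeat looks healthy. Timing every probe is how an uptime
    monitor surfaces "the API is up but now takes 8 seconds".
    """
    start = time.monotonic()
    try:
        ok = bool(fetch())
    except Exception:
        ok = False          # a raised timeout counts as a failed probe
    return ok, time.monotonic() - start
```

Plotted over time, those per-probe durations are the latency chart; the heartbeat equivalent is a single bit per window.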

"I only need one of these." Genuinely possible if your app has only one of the two failure modes. A static site has no background jobs — uptime alone is enough. A pure batch system with no public surface area has no URLs — heartbeats alone are enough. Most SaaS sits between those two extremes.

Decision tree

A short one:

  1. Is there a public URL that customers hit? → Add an uptime monitor.
  2. Is there a background job that should fire on a calendar? → Add a heartbeat.
  3. Both? → Both.
  4. Neither? → Probably nothing to monitor.

For uptime, see /uptime for the feature page. For heartbeats, see the external dead-man pattern we walked through earlier. If you want the head-to-head against the dedicated uptime tools, the UptimeRobot alternative post covers that.
