
Alternatives · Apr 12, 2026

Cloud Run cron without Cloud Scheduler: the IAM-free path

Cloud Scheduler is fine when you need it, but for one HTTPS call on a clock it brings OIDC tokens, IAM bindings per project, and $0.10 per job per month after the first 3 free. Here is the IAM-free pattern Cloud Run teams use to fire their .run.app URLs directly with a bearer token, with one dashboard across every GCP project.

You shipped a Cloud Run service. The next ticket is "fire this URL daily at 15:00 America/Bogota". The default GCP answer is Cloud Scheduler, and Cloud Scheduler is a fine product when you want it, but for one HTTPS call on a clock it brings a lot of friction. OIDC tokens, service accounts, IAM bindings on the receiving service, a separate console per project, plus $0.10 per job per month after the first three free per billing account. If your relationship with Cloud Run is "call this URL on a schedule and alert me when it fails", there is a much shorter path.

If you just want the short version: keep your Cloud Run service, expose the work behind a POST route, check a bearer token in the handler, and point Crontap at the *.run.app URL. You get every 1 minute on Pro, per-IANA timezones, per-job failure alerts, and one dashboard across every GCP project (and every non-GCP target) you own, for $3.25 a month billed annually instead of $0.10 per job.

Why Cloud Scheduler feels heavy for Cloud Run triggers

Cloud Scheduler does the job, but the cost surface is not really the per-job dollar. The cost surface is the setup cost per target, multiplied by the number of projects you work across, multiplied by how often you change anything.

OIDC per target

The first paper cut is auth. Cloud Scheduler hits Cloud Run via an HTTP target with an OIDC token. To configure it you pick (or create) a service account, attach the right roles/run.invoker binding on the receiving service, and tell Scheduler to mint an OIDC token for that service account on each fire. The first time you do this it is twenty minutes of clicking. The fifth time you do it for a different project, it is twenty minutes again because IAM bindings do not cross projects.

If your service is --no-allow-unauthenticated, you cannot skip this. If your service is --allow-unauthenticated, you can technically skip the OIDC step, but then your URL is a public POST endpoint, and you still want a bearer in the handler so a curious crawler does not trigger your daily report by accident. Either way, somewhere in the chain there is a credential that needs rotation, audit, and a runbook.

IAM bindings per project

Cloud Scheduler is project-scoped. The Scheduler job lives inside a GCP project and the service account has to exist (or be impersonatable) inside that project. If you have a single GCP project for everything, this is invisible. If you have a project per environment (dev, staging, prod) or a project per product line, you now run the same OIDC dance once per project, and the Scheduler console you open changes depending on which project you are auditing.

The friction shows up in real ways. You go on call, an alert fires, and the first thing you do is figure out which project the schedule is even in. The dashboard does not exist as a single view; it is one console per project with the project picker in the GCP nav.

$0.10 per job per month after the free 3

This is the easy one to point at. Cloud Scheduler pricing is simple: the first 3 jobs per billing account are free, then $0.10 per job per month. For a small footprint that is genuinely cheap. For a real footprint, the math sneaks up on you.

Take a working Cloud Run team with 7 jobs across 3 projects. Three of those are free. The other 4 are $0.40 per month, plus the time it took to set them up (do not forget the time). Push the same setup to 20 jobs across 3 projects and you pay $1.70 per month in Scheduler fees alone, on top of the engineering cost of all the IAM glue. Push it to 50 jobs and you are at $4.70.
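If you want to check the wedge at any job count, the free-tier arithmetic is a one-liner. A throwaway sketch (working in cents to dodge floating-point rounding), assuming Cloud Scheduler's published pricing of 3 free jobs per billing account and $0.10 per job per month after that:

```typescript
// Monthly Cloud Scheduler cost in cents for a given number of jobs on one
// billing account: the first 3 are free, each one after that is 10 cents.
function schedulerMonthlyCents(jobs: number): number {
  const chargeable = Math.max(0, jobs - 3);
  return chargeable * 10;
}

console.log(schedulerMonthlyCents(7));  // 40  -> $0.40
console.log(schedulerMonthlyCents(20)); // 170 -> $1.70
console.log(schedulerMonthlyCents(50)); // 470 -> $4.70
```

The number of projects does not appear in the formula, because the free tier is per billing account, not per project.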

Crontap is flat-priced at $3.25/mo billed annually for unlimited schedules at every-1-minute cadence. The Cloud Scheduler dollar wedge gets bigger the more jobs you have, and the Crontap dollar number stops moving.

That is the dollar story. The setup story (OIDC + IAM per project) is the bigger wedge for most teams.

The no-IAM path: call .run.app directly with a header

The cleaner shape, when your security policy allows it, is to treat the Cloud Run URL as a normal authenticated HTTP endpoint. Crontap is the clock, Cloud Run is the runtime, and the contract between them is one HTTPS POST with a bearer token.

Crontap (cron)  →  HTTPS POST  →  https://your-service-xxxx.run.app/task/refresh  →  the actual work

No OIDC. No service account. No roles/run.invoker binding to remember. The auth check moves from the GCP IAM layer into your handler, where you can read it on every request without leaving the codebase.

Step 1: Deploy Cloud Run with --allow-unauthenticated

You probably already have a Cloud Run service. If you are wiring this from scratch:

gcloud run deploy your-service \
  --source . \
  --region us-central1 \
  --allow-unauthenticated

--allow-unauthenticated does not mean "no auth". It means "GCP does not enforce auth at the platform layer, your code does". The *.run.app URL is now reachable from the public internet, which is what you want for an external scheduler to hit it. See Cloud Run authentication options for the full menu of choices if you have stricter requirements.

If your security policy mandates --no-allow-unauthenticated, skip to the "When to still use Cloud Scheduler" section below. The IAM-required workload is the case where Cloud Scheduler is genuinely the right answer.

Step 2: Enforce a bearer token in your code

Pick a strong random secret and store it in Secret Manager; the command below generates one and creates the secret in a single pipe:

openssl rand -base64 32 | gcloud secrets create cron-secret --data-file=-

Set it as CRON_SECRET on the service:

gcloud run services update your-service \
  --region us-central1 \
  --update-secrets=CRON_SECRET=cron-secret:latest

(Use Secret Manager for the actual storage, then reference it in the deploy. The secret never lives in source.)

In your handler, refuse anything that is not a valid bearer:

export async function POST(request: Request) {
  // Compare against the exact bearer Crontap is configured to send.
  const auth = request.headers.get("authorization");
  if (auth !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("Unauthorized", { status: 401 });
  }

  await runDailyReport();
  return Response.json({ ok: true });
}

Three things to call out:

  1. The route is POST so a casual GET from a crawler does not trigger anything.
  2. The token rides in the Authorization header, not a query string. The secret never lands in access logs.
  3. The handler returns 200 quickly. If the actual work is long, kick it off as a background task, write to your audit log, and return; do not block the scheduler waiting for a 30-second job to finish.
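Point 3 can look like the sketch below: validate the bearer, start the work without awaiting it, and acknowledge immediately. This is an illustrative variant, assuming the same CRON_SECRET env var; runDailyReport stands in for your real job. One Cloud Run caveat: work that continues after the response is sent needs CPU always allocated (the --no-cpu-throttling flag), otherwise the instance may be throttled before the job finishes.

```typescript
// Placeholder for the real long-running job.
async function runDailyReport(): Promise<void> {
  // ...the actual 30-second work goes here...
}

export async function POST(request: Request) {
  const auth = request.headers.get("authorization");
  if (auth !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("Unauthorized", { status: 401 });
  }

  // Fire and forget: start the work but do not await it.
  // Catch and log failures so they are not silently lost.
  runDailyReport().catch((err) => console.error("daily report failed:", err));

  // 202 Accepted goes back to the scheduler within milliseconds.
  return Response.json({ accepted: true }, { status: 202 });
}
```

Any 2xx counts as a clean run on the Crontap side, so 202 here reads as "accepted" in your dashboard while the report finishes in the background.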

Step 3: Store the token in Crontap

Head to Crontap and create a new schedule.

  1. URL. Paste the production URL of your Cloud Run service, e.g. https://your-service-865985058480.us-central1.run.app/task/refresh.
  2. Method. POST.
  3. Headers. Add Authorization: Bearer <your CRON_SECRET>. Crontap stores the value encrypted; you do not see it again after saving.
  4. Cadence. Type plain English ("every 5 minutes") or paste a cron expression. Crontap previews the next 5 fires inline so you can sanity-check before saving.
  5. Timezone. Pick the IANA zone that matches the schedule's intent. America/Bogota for the daily 15:00 SES report, America/New_York for a market-open ping, UTC for anything that is genuinely UTC.
  6. Failure alerts. Add an integration: email / webhook (Slack / Discord / Telegram). Crontap fires on 4xx and 5xx with the response body and timing in the payload, so the alert is immediately useful.

Press Perform test to fire a real request before you trust the cadence. If the route returns 200, you are done. If you see 401, your bearer is mismatched. If you see 5xx, the route itself failed and the alerting just proved itself.

Fix this in 60 seconds with Crontap. Free tier available. No credit card. Schedule your first job →

Multi-project and multi-region in one dashboard

This is the part that is hard to feel until you have several projects. Cloud Scheduler is regional and project-scoped. Each Scheduler job is bound to a region you pick at create time, and to the project that owns it. The console reflects that: you switch the project picker to see a different set of schedules.

Crontap is global and not bound to GCP at all. We watched a Cloud Run team running daily AWS SES reports and 9-hour domain monitors stand up 7 schedules across 3 GCP projects in one Crontap account, and the daily on-call view became "open one tab and read down the list", instead of "click into the GCP console, switch projects, find the Scheduler tab, repeat".

It also covers non-GCP targets. The same Crontap account schedules the Cloud Run job at 15:00 Bogota, the AWS Lambda Function URL once an hour, and the customer's own Vercel API route every 5 minutes. None of them care about each other, and the failure alerts route to one Slack channel.

Migration is incremental. You do not have to move everything at once. The pattern that works:

  1. List the Cloud Scheduler jobs and their target URLs. Note which run per project and which currently use IAM-bound auth.
  2. For each *.run.app target, pick: keep --no-allow-unauthenticated and call via a thin internal proxy that verifies a shared secret Crontap sends, or enable --allow-unauthenticated and check a bearer header in your handler.
  3. Recreate each schedule in Crontap with the URL plus the Authorization header. Disable the corresponding Cloud Scheduler job once Crontap has run cleanly for one cadence.

You can run both in parallel for a release. Cloud Scheduler will keep firing while Crontap ramps; once you trust the alerts, disable the Scheduler job and keep moving.

When to still use Cloud Scheduler (IAM-required workloads)

External cron is a shape, not a religion. Cloud Scheduler is genuinely the right answer when:

  • Your security policy mandates IAM-bound auth on the Cloud Run target. If --no-allow-unauthenticated is non-negotiable and you cannot front the service with a thin authenticated proxy, the OIDC story is what you need.
  • The trigger is Pub/Sub, not HTTP. Crontap is HTTP-only by design. For Pub/Sub-backed jobs, keep Cloud Scheduler as the trigger source.
  • You have 3 or fewer jobs, ever. The free tier is genuinely free; the wedge does not exist until you cross the threshold.
  • Your audit policy mandates GCP-native, Cloud Logging-only history. Crontap stores its own run history, but the audit trail is in Crontap and not in Cloud Logging.

For everything else (the long tail of "fire this URL on a schedule"), the external pattern reads cleaner.

FAQ

Does Crontap support OIDC?

Not directly. Use a bearer token or shared secret verified in your code, or front the URL with a thin authenticated proxy that exchanges the bearer for an OIDC token before calling the inner service. The bearer-in-handler pattern is the cleaner one for most teams; the proxy pattern is the bridge if your policy mandates OIDC at the receiving service.

Will this work with --no-allow-unauthenticated?

Yes, indirectly. Expose a thin Cloud Run service (or a Cloud Function) with --allow-unauthenticated that verifies the Crontap bearer, mints an OIDC token via the metadata server, and forwards the request to the protected inner service. The proxy does the IAM work once; Crontap stays simple.
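A hypothetical sketch of that proxy, in the same route-handler style as the examples above. CRON_SECRET and INNER_URL are placeholder env vars, not real Crontap or GCP names; the metadata endpoint is GCP's documented identity-token endpoint and is only reachable from inside GCP (e.g. from this proxy running on Cloud Run):

```typescript
// GCP metadata server endpoint that mints an OIDC identity token for the
// service account the proxy runs as. Only resolvable inside GCP.
const METADATA_URL =
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity";

async function mintIdToken(audience: string): Promise<string> {
  const res = await fetch(
    `${METADATA_URL}?audience=${encodeURIComponent(audience)}`,
    { headers: { "Metadata-Flavor": "Google" } }
  );
  if (!res.ok) throw new Error(`metadata server returned ${res.status}`);
  return res.text();
}

export async function POST(request: Request) {
  // 1. Verify the shared secret Crontap sends.
  if (request.headers.get("authorization") !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("Unauthorized", { status: 401 });
  }

  // 2. Exchange it for an OIDC token whose audience is the protected service.
  const inner = process.env.INNER_URL!; // e.g. https://inner-xxxx.run.app/task/refresh
  const idToken = await mintIdToken(new URL(inner).origin);

  // 3. Forward the call; IAM on the inner service sees a valid invoker token,
  //    so the proxy's service account needs roles/run.invoker on it.
  const upstream = await fetch(inner, {
    method: "POST",
    headers: { authorization: `Bearer ${idToken}` },
  });
  return new Response(await upstream.text(), { status: upstream.status });
}
```

The IAM binding now exists in exactly one place (the proxy's service account), and every schedule in Crontap just carries the bearer.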

Can Crontap call private VPC endpoints?

Crontap is HTTPS-only and does not run inside your VPC. For VPC-bound Cloud Run services (Direct VPC egress with internal-only ingress), expose a public proxy that authenticates Crontap and forwards into the VPC. The same trick that handles --no-allow-unauthenticated handles ingress restrictions.

What about Cloud Scheduler for Pub/Sub triggers?

Keep Cloud Scheduler for Pub/Sub. It is the right tool for that target. You can pair both: Cloud Scheduler for Pub/Sub triggers, Crontap for HTTPS targets. The two do not conflict and most teams end up running both.

How does pricing compare at 20 jobs across 3 projects?

20 Cloud Scheduler jobs across 3 GCP projects: 17 chargeable jobs at $0.10 per month equals $1.70 baseline, plus the time spent on OIDC and IAM setup per target. Crontap Pro is $3.25/mo billed annually for unlimited schedules. The wedge is not the dollar; it is the setup time and the dashboard story. Once you have several jobs, the dollar follows.

Can I keep Cloud Logging for the run history?

Yes. Log inside your handler the way you already do, and Cloud Logging keeps working. Crontap stores its own request and response history alongside, so you have two views: the audit trail at the GCP layer, and the cron-side view of "did the schedule fire on time and did the response come back 200".
