
Alternatives · Feb 11, 2026

Firebase scheduled functions without the Blaze plan

Firebase Scheduled Functions sit on Cloud Scheduler and Pub/Sub, both of which require a billing-enabled project. The Spark plan cannot deploy them. Here is the external HTTP cron pattern that keeps your Firebase project on Spark and runs the scheduled work on any HTTPS endpoint your stack already reaches.

You wanted a Firebase Scheduled Function. You wrote functions.pubsub.schedule("every 5 minutes"), ran firebase deploy, and the CLI told you to upgrade to Blaze. The reason is real: Scheduled Functions are built on Cloud Scheduler and Pub/Sub, and both of those are billed Google Cloud products that need a billing-enabled project. The Spark plan does not include them, and there is no toggle that turns them on without leaving Spark.

If you just want the short version: skip Firebase Scheduled Functions, host your scheduled work on any HTTPS endpoint your stack reaches today, and point Crontap at the URL with an Authorization header. The rest of your Firebase setup stays on the Spark plan, and you get one-minute cadence on Pro, per-IANA timezones, email / webhook (Slack / Discord / Telegram) failure alerts, and a predictable billing surface.

Why Scheduled Functions require Blaze

Firebase's Scheduled Functions docs describe the feature as "an easy way to schedule code to run at recurring intervals". Behind the convenience, the architecture is three Google Cloud products glued together:

  1. The function itself, deployed to Cloud Functions.
  2. A Cloud Scheduler job, created by the Firebase CLI on your behalf, that fires on the cadence you wrote.
  3. A Pub/Sub topic, also created by the CLI, that the Scheduler job publishes to and your function subscribes to.

Each of those is a paid Google Cloud service. Cloud Scheduler is free for the first 3 jobs per billing account and $0.10 per job per month after that. Pub/Sub is metered on messages and bandwidth. Cloud Functions itself has a generous free tier, documented on the Firebase pricing page, but the surrounding services are billed regardless.

The Spark plan covers a bounded set of Firebase products: Firestore, Authentication, Hosting, Realtime Database, Cloud Storage, Remote Config, App Check. It explicitly does not include Cloud Scheduler or Pub/Sub. Without those two, Scheduled Functions cannot exist. That is why the CLI refuses to deploy functions.pubsub.schedule(...) on Spark; the deploy would fail at the Scheduler-or-Pub/Sub step regardless of what the CLI did locally.

The Firebase team is honest about this in the docs. The line you usually read is "Scheduled Functions require the Blaze (pay as you go) plan". The translation is "Scheduled Functions sit on top of two billed services, and Blaze is the plan that lets you use billed services".

Why teams resist upgrading to Blaze

Blaze is fine for production teams with a budget and a billing alert. The teams that hit this wall are usually one of three:

  • Indie projects. A solo dev shipping a side project on Firestore + Hosting + Auth, all comfortably inside Spark. The first scheduled function (a daily content moderation pass, a weekly digest, a newsletter sender) is the moment they have to pick: upgrade to a per-usage plan with no hard cap, or find another way.
  • Workshops, classes, demo accounts. A workshop runs 30 student accounts on Spark. Asking each student to attach a credit card to enable Blaze for one scheduled function is a non-starter for most institutions.
  • Cost-sensitive teams in regulated environments. A Firebase team on the Spark plan running a content moderation app at modest volume wants the hard cap that Spark's metered limits provide. Blaze removes that cap; even with a budget alert, an unattended runaway function racking up $400 in egress is the scenario you do not want to be explaining to legal.

For all three, the answer is not "Blaze is bad". The answer is "you do not need Blaze for this specific job". A scheduled HTTP call once an hour is a ten-cents-a-month problem on Cloud Scheduler, free if it is one of your first three jobs. The friction is the plan switch and the billing surface, not the dollar.

The external HTTP pattern

Here is the shape we recommend. It works on any Firebase project, free or paid, and it does not require Cloud Scheduler or Pub/Sub.

Crontap (cron)  ->  HTTPS POST  ->  https://your-endpoint/your-route  ->  the actual work

Firebase still owns the data, the auth, and the frontend hosting. Crontap owns the clock. The runtime sits wherever your scheduled work is comfortable: a small backend you already have, a Cloud Run service in the same Google Cloud project, a Render service, a Railway service, a Cloudflare Worker, your own VM. The contract between Crontap and the runtime is one HTTPS POST per cadence with a bearer token.

The trick is that the runtime does not have to be Firebase Cloud Functions. Firebase is your frontend stack; the scheduled work is one HTTPS endpoint somewhere reachable. The two stay loosely coupled, which is the shape most "I want a scheduled job that touches my Firestore data" stories want anyway.

What goes inside the handler

Whatever code your Scheduled Function would have run. The interesting part is auth and Firebase access from outside Firebase:

export async function POST(request: Request) {
  // Reject any call that does not carry the bearer token Crontap sends.
  const auth = request.headers.get("authorization");
  if (auth !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("Unauthorized", { status: 401 });
  }

  // getFirebaseAdmin() is your own helper that initializes the Admin SDK
  // once per process with a service account and returns it.
  const admin = await getFirebaseAdmin();

  // Posts still pending after 24 hours. Assumes createdAt is stored as epoch
  // milliseconds; use a Firestore Timestamp bound if you store Timestamps.
  const stale = await admin.firestore()
    .collection("posts")
    .where("status", "==", "pending")
    .where("createdAt", "<", Date.now() - 24 * 60 * 60 * 1000)
    .get();

  await Promise.all(stale.docs.map((d) => d.ref.update({ status: "expired" })));

  return Response.json({ ok: true, expired: stale.size });
}

You authenticate the inbound call with a bearer token, then use the Firebase Admin SDK to read and write Firestore exactly as a Scheduled Function would have. The Admin SDK works from any Node, Python, Go, or Java backend; it is documented under the Firebase docs as the way "trusted server environments" interact with Firebase.
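One possible shape for that initialization, sketched with the firebase-admin package's modular API; the environment variable name FIREBASE_SERVICE_ACCOUNT is our own convention (it holds the service account JSON downloaded from the project console), not a Firebase one:

```typescript
// Sketch, not the only way: initialize the Admin SDK once per process and
// reuse it across cron invocations. Assumes FIREBASE_SERVICE_ACCOUNT is set.
import { cert, getApps, initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

export function getFirebaseAdmin() {
  if (getApps().length === 0) {
    initializeApp({
      credential: cert(JSON.parse(process.env.FIREBASE_SERVICE_ACCOUNT!)),
    });
  }
  // Expose the piece the cron handlers need.
  return { firestore: getFirestore };
}
```

With a helper like this in place, the handler's admin.firestore() call resolves to the real Firestore client, and the same service account works from Render, Railway, Cloud Run, or a VM.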

The Spark plan does not block server-side Admin SDK usage. The metered limits on Spark (Firestore reads, writes, and storage) still apply, but they apply to your function calls regardless of where the function is hosted. Spark's caps are about volume; the trigger source does not matter.

Where to host the runtime

A handful of patterns are common, in roughly increasing order of operational weight.

A backend you already have

If you already run a Node or Python backend somewhere, the cheapest move is "add one POST route to it". Render, Railway, Fly.io, your own VM, all fine. The route does the auth check, calls your Firebase Admin SDK, and returns 200. No new infrastructure.

A Cloud Run service

If your team is already a Google Cloud shop and you want the runtime in the same project as Firebase, Cloud Run is the closest peer to Firebase Functions. You can deploy a small Express or FastAPI service with a single POST /cron/dispatch route, set --allow-unauthenticated, gate it with a bearer header, and Crontap calls it directly. Cloud Run has its own free tier and ships in seconds.
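For reference, a single-command deploy of that shape might look like the following; the service name cron-runtime and the region are placeholders, and CRON_SECRET is assumed to already be exported in your shell:

```shell
# Deploy from source; Cloud Run builds the container for you.
# --allow-unauthenticated is fine here because the bearer header is the gate.
gcloud run deploy cron-runtime \
  --source . \
  --region europe-west1 \
  --allow-unauthenticated \
  --set-env-vars "CRON_SECRET=${CRON_SECRET}"
```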

We have written this story up in detail in our Cloud Run cron pattern post; skip it unless you specifically want the Cloud Run flavor.

A Cloudflare Worker

If your scheduled work is short and reads Firestore through the REST API rather than the Admin SDK, a Cloudflare Worker (free tier: 100k requests per day) is a clean home. The Worker handles the bearer check, calls Firestore, and returns. No always-on container.
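A sketch of that Worker shape, under two assumptions we are making up for illustration: an env binding GCP_TOKEN holding an OAuth access token for Firestore (how you mint it is up to you), and a PROJECT_ID binding for the Firebase project:

```typescript
// Builds the Firestore REST "list documents" URL for a collection.
export function firestoreUrl(projectId: string, collection: string): string {
  return `https://firestore.googleapis.com/v1/projects/${projectId}` +
    `/databases/(default)/documents/${collection}`;
}

// Worker entry point: bearer check first, then one REST read, then done.
export default {
  async fetch(request: Request, env: Record<string, string>): Promise<Response> {
    if (request.headers.get("authorization") !== `Bearer ${env.CRON_SECRET}`) {
      return new Response("Unauthorized", { status: 401 });
    }
    const res = await fetch(firestoreUrl(env.PROJECT_ID, "posts"), {
      headers: { Authorization: `Bearer ${env.GCP_TOKEN}` },
    });
    const body: { documents?: unknown[] } = await res.json();
    return Response.json({ ok: res.ok, count: body.documents?.length ?? 0 });
  },
};
```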

A second Firebase project on Blaze, hosting only the function

A hybrid pattern: keep the main project on Spark for everything (data, auth, hosting), spin up a second tiny Firebase project on Blaze that hosts only the Cloud Function, and let the function read the main project's Firestore via the Admin SDK with a service account. Now you pay Blaze pennies on the second project for the function, the main project stays on Spark, and the billing surface is contained.

This is more setup than the "any HTTPS backend" path, but some teams prefer it because the function lives next to the rest of the Firebase tooling and the billable surface is one tiny project they can shut down without touching the main app.

Crontap setup, click by click

Once you have an HTTPS endpoint that does the work, the rest is mechanical.

Step 1: Generate and store a bearer token

On your runtime host, generate a long random string and store it as an environment variable:

openssl rand -base64 32

Save it as CRON_SECRET in your hosting platform's environment variables (Render, Railway, Cloud Run, your VM). Redeploy once so the variable is live. The handler reads it with process.env.CRON_SECRET and refuses any request without a matching Authorization: Bearer ... header.
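If you want to harden that comparison, a constant-time variant is a small change; this is a sketch using Node's crypto module, and the explicit length check is required because timingSafeEqual throws on buffers of unequal length:

```typescript
import { timingSafeEqual } from "node:crypto";

// Constant-time bearer check: avoids leaking how much of the secret matched
// through response timing. Returns false for a missing or wrong-length header.
export function isAuthorized(header: string | null, secret: string): boolean {
  const expected = Buffer.from(`Bearer ${secret}`);
  const received = Buffer.from(header ?? "");
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```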

Step 2: Point Crontap at the endpoint

Head to Crontap and create a new schedule.

  1. URL. Paste the HTTPS URL of your endpoint, e.g. https://your-app.fly.dev/cron/expire-stale-posts or https://your-service.run.app/cron/dispatch.
  2. Method. POST.
  3. Headers. Add Authorization: Bearer <your CRON_SECRET>. Crontap stores the value encrypted; you do not see it again after saving.
  4. Cadence. Type plain English ("every 30 minutes") or paste a cron expression. Crontap previews the next 5 fires inline so you can sanity-check before saving.
  5. Timezone. Pick the IANA zone the schedule should run in. Cloud Scheduler jobs default to UTC unless you set a timezone on each one; Crontap's timezone field is per-schedule and DST-aware.
  6. Failure alerts. Add an integration: email / webhook (Slack / Discord / Telegram). Crontap fires on 4xx and 5xx with the response body and timing in the payload.

Press Perform test to fire a real request before you trust the cadence. If the route returns 200, you are done. If you see 401, your bearer is mismatched. If you see 5xx, the route itself failed and the alerting just proved itself.
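You can fire the same request yourself from a terminal before trusting any scheduler; the URL below is a placeholder and CRON_SECRET is assumed exported locally:

```shell
# Simulate Crontap's call; -i prints the status line so 200 vs 401 is obvious.
curl -i -X POST "https://your-app.fly.dev/cron/expire-stale-posts" \
  -H "Authorization: Bearer ${CRON_SECRET}"
```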

Fix this in 60 seconds with Crontap. Free tier available. No credit card. Schedule your first job.

Worked example: daily content moderation on Spark

Take a Firebase team on the Spark plan running a content moderation app. Firestore stores user-submitted posts; Authentication gates the frontend; Hosting serves the SPA. The team wants a daily 02:00 Europe/Berlin job that flags posts older than 24 hours and still in pending status, plus an hourly job that pings the OpenAI API to re-check borderline content.

On Blaze, that is two functions.pubsub.schedule() declarations and a deploy. On Spark, it is the same two functions hosted somewhere else. The team picked a small Render service ($7 a month for the always-on instance, or $0 on Render's free tier with cold-start tradeoffs). The service exposes two POST routes: /cron/expire-stale-posts and /cron/recheck-borderline. Each route checks the bearer header, reads Firestore via the Admin SDK with a service account JSON loaded from Render's secret store, and returns 200.

In Crontap, the team has two schedules. The first runs at 0 2 * * * in Europe/Berlin against /cron/expire-stale-posts. The second runs at 0 * * * * in UTC against /cron/recheck-borderline. Both have Slack failure alerts pointing at the team's #alerts channel. Total monthly cost: Render's bill, Crontap's $3.25/mo billed annually, the Firebase project still on Spark.

When the team eventually outgrows the Render-and-Spark shape (more functions, longer runtimes, the need for Pub/Sub-driven event handlers), they upgrade to Blaze without losing any of this setup. The HTTPS endpoints stay; only the trigger source might change for some of them. Most teams never make that switch because the external pattern keeps working.

When to upgrade to Blaze anyway

External cron is a shape, not a religion. Blaze is the right answer when your Firebase project genuinely uses Pub/Sub, BigQuery export, or Extensions that require a billing-enabled project. It is also the right answer when your scheduled work is so small and so Firebase-coupled (one tiny onSchedule that touches one collection once a day) that "deploy a separate runtime" is more operational weight than the function itself. In that case, Blaze with a tight budget alert is the simpler path.

For everything else, the Spark-plus-external-cron shape keeps your billing surface predictable and your scheduled work portable.

FAQ

Can I use the Firebase Admin SDK from a non-Firebase host?

Yes. The Admin SDK is documented to work from any "trusted server environment". You initialize it with a service account JSON downloaded from your Firebase project console, and from then on it can read and write Firestore, manage Auth users, and call any other Admin SDK API. Render, Railway, Fly.io, Cloud Run, your own VM all qualify.

Will my Spark plan quotas still apply?

Yes. Firestore reads and writes, Storage egress, and Auth operations all count against the Spark caps documented on the Firebase pricing page regardless of where the trigger comes from. Crontap firing your endpoint does not bypass the limits; it only changes who fires the endpoint.

Does this work with Firebase Authentication?

Yes. Your scheduled job runs server-side with Admin SDK credentials, which can read any user, send password reset emails, mint custom tokens, and so on. Authentication is unchanged by where the scheduler lives.

What if my function is more than a few seconds of work?

Host it on a runtime that does not cap at 30 seconds. Render, Railway, Cloud Run with a longer timeout, or a dedicated VM all let you run jobs that take a few minutes. The Crontap call does not care how long the route takes; it logs the response when it comes back.

Can I keep one Scheduled Function on Blaze and add Crontap for the rest?

Yes. They run independently. Some teams keep a single Pub/Sub-coupled function on Blaze and add Crontap for everything that is just an HTTPS trigger. The two do not conflict.


Related on Crontap


Alternatives

Vercel Cron every minute: beating the Hobby hourly limit

Vercel Cron caps Hobby at hourly cadence and 5 jobs, and ties every change to a redeploy. Here is the external cron pattern teams use to ship per-minute schedules, per-IANA timezones, and one dashboard across projects without paying $20/mo per user for Pro.

Alternatives

Cloud Run cron without Cloud Scheduler

Cloud Scheduler costs $0.10 per job per month after the first 3 and asks for OIDC plus IAM bindings on every target. Here is the IAM-free pattern Cloud Run teams use to fire their .run.app URLs on a clock with one bearer token and one dashboard across every GCP project.

Alternatives

Heroku Scheduler alternative: any cron expression without the add-on

Heroku Scheduler caps you at three cadences and account-wide UTC, and spins a one-off dyno per run. Here is the external cron pattern that gives you any cron expression, per-schedule timezones, and zero per-execution dyno spin-up cost.

Guides

Running an OpenAI sentiment pipeline on a real scheduler

OpenAI batch work needs a clock, not a user session. Here is the scheduled HTTP-route pattern teams use to drain LLM batches at a sustainable rate inside OpenAI's rate limits, with per-task failure alerts.

Reference

Cron syntax cheat sheet with real-world examples

Cron syntax without the math. Every pattern you're likely to reach for (every 5 minutes, weekdays, business hours, first of the month), with a practical example and a link to a free debugger.