Comparisons · Jan 12, 2026

Airtable Automations vs external cron: when you need to leave the no-code scheduler

Airtable Automations are great for event-driven work, but the scheduled trigger is plan-gated and cadence-capped. Here is the external cron pattern teams use to call Airtable's REST API on any cadence, from any plan including Free, with a small backend endpoint and a Personal Access Token.

You opened Airtable Automations expecting "I want this to fire every 15 minutes and call my API". You found a beautiful tool for the wrong job. Airtable Automations are event-driven: a record gets created, a button is tapped, a form is submitted, a date field comes due, and the automation runs. The native scheduled trigger exists but is plan-gated, capped on cadence, and locked to per-base scope. The clean alternative is to leave Automations alone for the event-driven cases and call Airtable's REST API directly from your own backend on whatever cadence you need. Here is the pattern.

If you want the short version: generate an Airtable Personal Access Token (PAT), expose a thin endpoint on your backend that POSTs into https://api.airtable.com/v0/{baseId}/{tableId} with the PAT, and let Crontap fire that endpoint on whatever cadence you actually want. Every 1 minute on Pro, per IANA timezone, no plan-gated cadence cliffs, no surprise upgrade prompt the first time you ask for a sub-hour cadence.

What Airtable Automations do

Airtable Automations are the event-driven half of an Airtable base. A typical Automation has the shape:

  1. A trigger ("when a record is created", "when a record matches conditions", "when a form is submitted", "when a button is clicked", "when a record enters a view", "at a scheduled time").
  2. One or more action steps (create or update a record, send an email, run a script, call a webhook, post to Slack).
  3. A run history that shows every fire with the trigger payload and any error.

For event-driven work, this is great. The "Send a Slack message when a record is approved" pattern fits Automations cleanly. So does "Update the Notes column when a date passes" and "Email the requester when their submission is processed". You get a no-code editor, deep integration with the base, and the scripting step for anything Automation actions cannot do natively.

The shape that does not fit is "fire on the wall clock at any cadence I want, and run something that may or may not have anything to do with this base". That is what an external cron is for.

The scheduled-trigger plan gate

Airtable does have a scheduled trigger ("At a scheduled time") inside Automations. Two things shape how usable it is:

  • Plan gating. The free Airtable plan (Free) gives you a small number of automation runs per month and a limited cadence. The Team and Business plans (formerly Plus / Pro) lift the run quotas and unlock fuller scheduling control. Specifics evolve; the live source of truth is the Airtable plans and pricing page.
  • Cadence cliffs. Even on paid plans, the smallest meaningful cadence Automations support is hourly for most use cases. Sub-hour scheduling, especially every-minute or every-5-minute work, is not the path Automations are designed for. The product nudges you toward upgrading, batching, or switching to a different trigger.

For the use case "I want this to fire every 15 minutes and pull data from a third-party API into a Records table", the scheduled Automation hits two walls at once: you may need to upgrade to get a usable cadence, and even then you are paying per run against the plan's automation quota.

The external cron + Airtable REST pattern

Here is the shape we recommend for any "fire on a clock and write rows into Airtable" job. It works on every Airtable plan, including the free one.

Crontap (cron)  →  HTTPS POST  →  Your /jobs/sync-airtable URL  →  Airtable REST API

Airtable still owns the data. Crontap owns the clock. Your backend owns the work. Airtable Automations are still available for the event-driven cases that fit them; they just stop being responsible for "every 15 minutes".

Step 1: Generate an Airtable Personal Access Token

In Airtable, click your profile icon and go to Developer hub > Personal access tokens. Create a new token with the scopes you actually need (typically data.records:read, data.records:write, and schema.bases:read) and the bases you want to write into.

Copy the token. Airtable only shows it once. Store it as AIRTABLE_PAT in your backend environment.

Step 2: Find the base ID and table ID

Open your base in Airtable and look at the URL. It looks like https://airtable.com/appAbCdEfGhIjK1L2/tblXyz123/.... The appXXX segment is your base ID; the tblXXX segment is the table ID. You can also find both in Airtable's auto-generated API reference for the base, which appears the moment you open the API docs page.
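If you prefer to pull the IDs out programmatically, a small helper can parse them straight from the base URL. The helper name and types below are ours, not Airtable's; it just relies on the documented prefixes (app for bases, tbl for tables):

```typescript
// Hypothetical helper: extract the base ID and table ID from an Airtable URL.
type AirtableIds = { baseId: string; tableId: string };

function extractAirtableIds(url: string): AirtableIds | null {
  // Base IDs start with "app", table IDs with "tbl", each followed by
  // alphanumeric characters.
  const match = url.match(/airtable\.com\/(app[A-Za-z0-9]+)\/(tbl[A-Za-z0-9]+)/);
  if (!match) return null;
  return { baseId: match[1], tableId: match[2] };
}
```

Feeding it https://airtable.com/appAbCdEfGhIjK1L2/tblXyz123/viw... yields the appAbCdEfGhIjK1L2 / tblXyz123 pair you will store as environment variables in the next step.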

Step 3: Build the job endpoint

The endpoint is a thin wrapper around your work. Two rules:

  1. Refuse anonymous traffic. Read an Authorization header and reject if it does not match your CRON_SECRET.
  2. Keep the work short, or kick it off as a background task and return 200 quickly. Crontap will time out a long-hanging request like any HTTP client; you do not want your scheduler to be your queue.

Here is a minimal handler, shown in the Next.js route-handler shape (any framework with an HTTPS endpoint works the same way):
export async function POST(request: Request) {
  // Refuse anonymous traffic: Crontap sends this header on every fire.
  const auth = request.headers.get("authorization");
  if (auth !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("Unauthorized", { status: 401 });
  }

  // Your own fetch + transform step; returns an array of field objects.
  const records = await fetchUpstreamRecords();

  // Create the records via Airtable's REST API.
  const res = await fetch(
    `https://api.airtable.com/v0/${process.env.AIRTABLE_BASE_ID}/${process.env.AIRTABLE_TABLE_ID}`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.AIRTABLE_PAT}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        records: records.map((r) => ({ fields: r })),
      }),
    },
  );

  // Surface Airtable's error body so failure alerts carry the real cause.
  if (!res.ok) {
    return new Response(await res.text(), { status: 500 });
  }
  return Response.json({ ok: true });
}
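fetchUpstreamRecords is your own fetch + transform step. A hypothetical version for the tickets use case might look like this; the upstream URL, response shape, and column names are placeholders for your own API and table schema:

```typescript
type Ticket = { id: string; subject: string; status: string };
type AirtableFields = Record<string, string>;

// Pure transform: map upstream ticket fields onto your Airtable column names.
function toAirtableFields(tickets: Ticket[]): AirtableFields[] {
  return tickets.map((t) => ({
    "Ticket ID": t.id,
    "Subject": t.subject,
    "Status": t.status,
  }));
}

// Hypothetical upstream fetch; swap in your real internal API.
async function fetchUpstreamRecords(): Promise<AirtableFields[]> {
  const res = await fetch("https://support.internal.example.com/api/tickets?status=new");
  if (!res.ok) throw new Error(`upstream returned ${res.status}`);
  return toAirtableFields(await res.json());
}
```

Keeping the transform pure makes it trivial to unit-test without touching the network.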

Two things to note:

  • The endpoint refuses anonymous calls. Crontap passes the Authorization header on every fire.
  • Airtable's REST API accepts at most 10 records per POST. If you have more, split them into batches of 10 and stay inside Airtable's rate limit of 5 requests per second per base.
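A sketch of that batching, assuming the same environment variable names as the handler above (the chunk size of 10 is Airtable's documented per-request cap; the 250 ms pause is our own choice to stay under 5 requests/second):

```typescript
// Split an array into fixed-size batches.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// POST records to Airtable 10 at a time, pausing between batches.
async function createInBatches(allFields: Record<string, string>[]): Promise<void> {
  for (const batch of chunk(allFields, 10)) {
    const res = await fetch(
      `https://api.airtable.com/v0/${process.env.AIRTABLE_BASE_ID}/${process.env.AIRTABLE_TABLE_ID}`,
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.AIRTABLE_PAT}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ records: batch.map((fields) => ({ fields })) }),
      },
    );
    if (!res.ok) throw new Error(await res.text());
    // ~250 ms between batches keeps well under 5 requests/second per base.
    await new Promise((resolve) => setTimeout(resolve, 250));
  }
}
```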

Step 4: Generate the cron secret and store it on both sides

Generate a long random string locally:

openssl rand -base64 32

Add it as CRON_SECRET to your backend environment.
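If you do not have openssl handy, Node's built-in crypto module produces an equivalent secret:

```typescript
// Node alternative to `openssl rand -base64 32`:
// 32 random bytes, base64-encoded (44 characters), printed to stdout.
import { randomBytes } from "node:crypto";

const secret = randomBytes(32).toString("base64");
console.log(secret);
```

Either way, the same string goes into your backend environment and into the Crontap header in the next step.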

Step 5: Point Crontap at the endpoint

Head to Crontap and create a new schedule.

  1. URL. Paste your endpoint URL: https://your-app.example.com/jobs/sync-airtable.
  2. Method. POST.
  3. Headers. Add Authorization: Bearer <your CRON_SECRET>. Crontap stores the value encrypted; you do not see it again after saving.
  4. Cadence. Type plain English ("every 15 minutes") or paste a cron expression. Crontap previews the next 5 fires inline so you can sanity-check before saving.
  5. Timezone. Pick the IANA zone the schedule actually runs in. Airtable's scheduled triggers run on the base's account default; Crontap is per-schedule.
  6. Failure alerts. Add an integration: email / webhook (Slack / Discord / Telegram). Crontap fires on 4xx and 5xx with the response body and timing in the payload, so a Slack alert lands the moment Airtable returns 422 (validation failure) or 429 (rate-limited).

Press Perform test to fire a real request before you trust the cadence. If your endpoint returns 200 and Airtable shows the records in the table, you are done. If you see 401, the bearer is mismatched. If Airtable returns 422, the payload shape is off; check the field names against the table schema.

Fix this in 60 seconds with Crontap. Free tier available. No credit card. Schedule your first job.

Worked example: every 15 minutes, sync a third-party API into a Records table

A common shape: an Airtable user has a base called OpsTracker with a table called Tickets. They want to pull new tickets from an internal support API every 15 minutes and append them to Tickets so the team's Airtable views stay current.

Old shape (Airtable Automation, scheduled trigger).

  • Set the scheduled trigger to "every hour" (the realistic floor on most plans). Accept up to 60 minutes of staleness.
  • Add a script step that fetches the upstream API, transforms the response, and creates the records via Airtable's scripting context.
  • Pay for the plan that gives you enough automation runs per month and a fast enough cadence.

New shape (Crontap + Airtable REST).

  • Build a small endpoint on your backend (/jobs/sync-tickets) that does the upstream fetch and POSTs to https://api.airtable.com/v0/appAbCdEf/tblTickets with the PAT.
  • One Crontap schedule, every 15 minutes, in your team's local timezone.
  • Total cost: $3.25/mo on Crontap Pro (or the free tier, if its limits cover the job). Your Airtable plan stays whatever it was; you no longer need to upgrade for the scheduler.

The behaviour the user wanted ("every 15 minutes, fresh tickets") is exactly what they get. Airtable is back to being the database; the scheduling decision lives where scheduling decisions belong.

When to keep Airtable Automations

External cron is a shape, not a religion. There are cases where Airtable Automations are exactly the right primitive and you should not touch them:

  • Event-driven triggers. "When a record is created in Submissions, send the requester a confirmation email." Automations are perfect; an external scheduler cannot tell you a record was created.
  • Button taps and form submissions. "When the user clicks the Approve button, run the script that updates linked records." Native Automation, no question.
  • Match-conditions triggers. "When a record's status moves to Archived, copy it into the historical table." Same shape: the trigger is internal Airtable state.
  • Anything where the source of truth is Airtable. If both the read and the write happen inside Airtable and nothing else cares, Automations keep the logic close to the data.

The wedge is the trigger. If the trigger is "the wall clock said it is time", reach for Crontap and let it call into Airtable's REST API. If the trigger is "something happened in Airtable", let Automations do their job.

How this relates to the existing Zapier + Airtable how-to

If your starting point is Zapier rather than your own backend, we already have a click-by-click guide for that flow: Integrate Crontap with Zapier and Airtable webhooks. That post covers the Zapier-as-glue path, which is great if you are not a developer or you want a no-code seam between the cron and the Airtable write. This post is for the case where you have your own backend and prefer to skip Zapier entirely.

FAQ

Does this work on Airtable's free plan?

Yes. The Airtable REST API is available on every plan, including Free, as long as you generate a Personal Access Token with the right scopes. Crontap fires HTTP requests at your backend; your backend hits the Airtable API; Airtable does not see the cron at all. There is no plan upgrade required to schedule arbitrary work on a wall clock.

Can Crontap call api.airtable.com directly and skip my backend?

For very simple jobs ("write one fixed record every hour"), yes. You would store the Airtable PAT in Crontap's headers and POST directly against https://api.airtable.com/v0/{baseId}/{tableId} with a fixed JSON body. Most real jobs need a fetch + transform + write loop, which is more comfortable to express in a small backend endpoint than in a static request body. Both shapes are valid; the trade-off is "one less moving part" vs "more dynamic data".

What about Airtable's rate limits?

Airtable enforces 5 requests per second per base, with 30-second penalty windows on overflow (see Airtable API rate limits). If your job writes a lot of records at once, batch in groups of 10 and add a small delay between batches. For most every-15-minute syncs this is a non-issue.

Will Automations and Crontap conflict if I run both?

No. They run independently. Most teams keep Airtable Automations for event-driven work and add Crontap for everything that needs a real schedule, sub-hour cadence, or a target outside Airtable. Failure alerts and timezones live with the scheduler that is firing the work.

Can I run a cadence below 1 minute?

Crontap's minimum cadence is every 1 minute on Pro. Airtable's API rate limit caps you well above that anyway (5 requests per second per base), so for any practical Airtable sync, every 1 minute is more than fast enough.

Where should I put the backend endpoint?

Anywhere that accepts HTTPS. Common choices: Vercel API routes, Cloudflare Workers, Railway services, your own backend on Cloud Run, an existing Node or Python app. The endpoint is small (parse auth, do the work, return 200) so it fits any of those. Crontap does not care what is on the other end of the URL.
