
Guides · Mar 28, 2026

Shopify Admin API: recurring checkout sync via external HTTP cron

Shopify Flow is great at event-driven work, but it cannot fire on a clock or call the Admin API on a custom cadence per shop. Here is the windowed GET pattern multi-shop operators use to poll checkouts, orders and customers in shop-local time, scheduled in 60 seconds per shop with Crontap.

You run a Shopify store, or you build for merchants who do, and you've already noticed the gap. Shopify Flow fires on events: order created, customer tagged, fulfilment status changed. It does not fire on a clock. So when the work is "every 10 minutes, fetch checkouts created since the last run, sync them to our CRM and dunning system", Flow is the wrong tool. The right tool is a recurring HTTP GET against the Admin API on a per-shop cadence, scheduled by Crontap.

The short version: point Crontap directly at https://{shop}.myshopify.com/admin/api/2024-01/checkouts.json?created_at_min=... with the shop's access token in an X-Shopify-Access-Token header, fire it every 10 minutes in the shop's local timezone, and read the response from the Crontap log or from a "call on success" webhook.

Why you would poll Shopify Admin API on a schedule

Shopify pushes a lot through webhooks, and webhooks are the right answer for most event-driven work. But there are jobs where polling is the cleaner shape:

  • Reconciling missed webhooks. Shopify retries failed webhook deliveries, but delivery is not guaranteed, and subscriptions can be removed after sustained failures. A periodic backstop GET against checkouts.json (or orders.json, customers.json) catches anything that didn't make it through.
  • Abandoned checkouts and dunning sweeps. A "send a recovery email at T+30 minutes after abandonment" workflow needs to read the checkouts list on a cadence, not on an event.
  • Multi-shop operators. A multi-shop Shopify operator running 22 stores wants one dashboard for "are we caught up on every shop?", not 22 webhook handlers.
  • Cross-system sync. Pulling from Shopify on a schedule, transforming, then pushing to a CRM or ERP is simpler than wiring webhook to queue to worker to destination.

In every case the question is the same: who holds the clock that fires the GET? Shopify Flow does not, the Admin UI does not, the platform itself does not. You need an external scheduler.

The created_at_min windowed GET pattern

The shape is one parameterised GET per shop, fired on a cadence.

GET https://{shop}.myshopify.com/admin/api/2024-01/checkouts.json?created_at_min=2026-04-27T08:00:00Z&limit=250
Headers:
  X-Shopify-Access-Token: shpat_xxxxxxxxxxxxxxxxxxxxxxxxx

Three pieces matter here:

  1. {shop}.myshopify.com is the canonical Shopify domain for the shop, even if the shop has a custom domain in front. The Admin API responds at *.myshopify.com.
  2. /admin/api/2024-01/checkouts.json is the abandoned checkouts endpoint. For orders use /orders.json, for customers /customers.json; the windowed-GET pattern is the same shape across the API.
  3. created_at_min is an ISO-8601 timestamp. You set it to "the start of the window I'm asking for" and Shopify returns objects created at or after that timestamp.

The window logic is the part teams over-engineer. The simple version:

  • Cadence is every 10 minutes; pick a small overlap (each fire asks for the last 15 minutes) so you never miss a record on the boundary.
  • Build the URL with created_at_min of now - 15 minutes, ISO-formatted in UTC. Shopify honours the offset in the timestamp itself, so a UTC value works regardless of the shop's configured timezone.
  • On the consumer side, dedupe by checkout ID. The windowed overlap means you'll see a few records twice; deduping is one set lookup.

Why 15 minutes for a 10-minute cadence? Clock skew can shift by a few seconds each direction, the request itself takes time, and 5 minutes of overlap is cheap to dedupe. If you want strict no-overlap, store the last successful run's timestamp in Redis or a single-row Postgres table and use that as created_at_min on the next run.
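The window-plus-dedupe logic fits in a few lines. A minimal Python sketch, assuming a hypothetical SHOP name and an in-memory seen-ID set (swap in Redis or Postgres for anything that must survive restarts):

```python
from datetime import datetime, timedelta, timezone

SHOP = "acme-store"              # hypothetical shop subdomain
WINDOW = timedelta(minutes=15)   # 15-minute window for a 10-minute cadence

def build_poll_url(now=None):
    """Build the windowed GET URL with created_at_min = now - 15 minutes, in UTC."""
    now = now or datetime.now(timezone.utc)
    created_at_min = (now - WINDOW).strftime("%Y-%m-%dT%H:%M:%SZ")
    return (f"https://{SHOP}.myshopify.com/admin/api/2024-01/checkouts.json"
            f"?created_at_min={created_at_min}&limit=250")

seen_ids = set()  # consumer-side dedupe across overlapping windows

def dedupe(checkouts):
    """Yield only checkouts not yet processed; the overlap makes repeats normal."""
    for checkout in checkouts:
        if checkout["id"] not in seen_ids:
            seen_ids.add(checkout["id"])
            yield checkout
```

The set lookup is the entire cost of the overlap; everything else is string formatting.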

Why Shopify Flow cannot replace this

Shopify Flow is excellent at event-driven work. It fires on triggers like "order created", "customer tagged", "product variant out of stock", and runs through a chain of conditions and actions. The Flow trigger docs list every supported event.

What Flow does not do:

  • Fire on a clock. There is no "every 10 minutes" trigger. Triggers are events in your shop's data, not intervals on a schedule.
  • Call arbitrary REST endpoints on a custom cadence. Flow has a "Send HTTP request" action, but it runs as a step inside an event-triggered workflow. There is no Flow workflow whose root node is "every 10 minutes".
  • Run cross-shop. Flow is per-shop. 22 stores means 22 Flow installations. None coordinate.

So if your job is "every 10 minutes, GET checkouts.json for each of my 22 shops, write the results to our CRM", Flow is structurally the wrong primitive. Flow stays in the toolbox for the event-driven half of the work; the scheduled half lives somewhere else.

Per-shop timezone cadence

Shopify shops have a configured timezone in the admin (Settings, General, Store details). It defaults to whatever the merchant picked at signup: Europe/Berlin, Australia/Sydney, US/Eastern, Europe/Paris.

Timezone matters for scheduled polling in two cases:

  • Business-hours-only cadences. */10 9-18 * * 1-5 for support inboxes or dunning sweeps needs to be evaluated in shop-local time, not UTC, or the window slides off business hours twice a year at DST transitions.
  • Daily reports and digests. A "09:00 local checkout summary" is one schedule per shop, each pinned to the shop's IANA zone, not a single UTC schedule that lands at a different local hour for every shop.
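The DST point is easy to verify with the standard library. A quick sketch showing why a fixed UTC schedule drifts: 09:00 in Europe/Berlin is UTC+1 in winter and UTC+2 in summer, so no single UTC cron time can hit 09:00 local all year:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")

# 09:00 local on a winter date (CET, UTC+1) and a summer date (CEST, UTC+2)
winter = datetime(2026, 1, 15, 9, 0, tzinfo=berlin)
summer = datetime(2026, 7, 15, 9, 0, tzinfo=berlin)

print(winter.utcoffset())  # 1:00:00 -> 09:00 local is 08:00 UTC in January
print(summer.utcoffset())  # 2:00:00 -> 09:00 local is 07:00 UTC in July
```

A scheduler that evaluates the cron expression in the assigned zone absorbs that shift for you; a UTC-only scheduler pushes it onto you twice a year.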

Crontap stores timezone per schedule. Duplicate the schedule, change the timezone field, save. No DST math; the cron expression is interpreted in the assigned zone and DST transitions are applied per zone.

The Crontap setup

The Crontap setup is three steps: generate the access token in Shopify, store it as a header in Crontap, set the cadence and timezone. The total time is about 60 seconds per shop once you've done one.

Step 1: Generate a custom app access token in Shopify

Inside the shop admin, custom apps are the way to get a long-lived Admin API token without going through OAuth.

  1. Open the shop admin and go to Settings -> Apps and sales channels -> Develop apps. (You may need to enable custom app development at the org level first; the link is on the same page.)
  2. Click Create an app and give it a name like "Crontap polling".
  3. Open the Configuration tab and click Configure under Admin API integration.
  4. Grant the scopes the endpoint needs. For abandoned checkouts you need read_checkouts. For orders you'd add read_orders. Grant the minimum that covers your endpoints, no more.
  5. Click Save, then go to the API credentials tab and click Install app. Shopify reveals an Admin API access token that starts with shpat_. Copy it now; you can only view it once after install.

The token is per-shop. If you operate 22 shops, you do this once per shop. The token does not expire as long as the custom app stays installed.

Step 2: Store the access token as X-Shopify-Access-Token header in Crontap

Head to Crontap and create a new schedule.

  1. URL. https://{shop}.myshopify.com/admin/api/2024-01/checkouts.json?created_at_min={{nowMinus15Minutes}}&limit=250. Replace {shop} with your shop's myshopify subdomain. Crontap does not template {{nowMinus15Minutes}} today; the simpler version is to compute the window on the consumer side and use a fixed URL, or to use Crontap's built-in support for relative timestamps if your plan includes it.
  2. Method. GET. The Admin API uses GET for read endpoints.
  3. Headers.
    • X-Shopify-Access-Token: shpat_xxxxxxxxxxxxxxxxxxxxxxxxx (the token from step 1).
    • Accept: application/json.

Crontap stores the access token encrypted; you don't see it again after saving. If you operate multiple shops, you create one schedule per shop, each with its own token in its own header.
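If you want to sanity-check the header shape outside Crontap first, here is a sketch that builds (but does not send) the same request with stdlib urllib; the shop name and token are placeholders:

```python
import urllib.request

def build_request(shop, token):
    """Assemble the windowed GET with Shopify's auth header, without sending it."""
    url = f"https://{shop}.myshopify.com/admin/api/2024-01/checkouts.json?limit=250"
    return urllib.request.Request(url, method="GET", headers={
        "X-Shopify-Access-Token": token,  # the shpat_ token from step 1
        "Accept": "application/json",
    })

req = build_request("acme-store", "shpat_xxx")
```

Passing the Request to urllib.request.urlopen(req) fires it for real; inspecting req.header_items() first is a cheap way to confirm the token header survived your config plumbing.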

Step 3: Schedule the GET with your cadence

In the same form:

  1. Cadence. Type plain English ("every 10 minutes") or paste a cron expression like */10 * * * *. For business-hours-only polling, use */10 9-18 * * 1-5. Crontap previews the next 5 fires inline so you can sanity-check.
  2. Timezone. Pick the IANA zone the shop is configured for. Europe/Berlin, Australia/Sydney, Europe/Paris, or US/Eastern are all common; the dropdown is the full IANA list.
  3. Failure alerts. Add an integration: email / webhook (Slack / Discord / Telegram). For Admin API polling, alerting on 4xx (auth or scope problem) and 5xx (Shopify is having a moment) is the cheap monitor that pays for itself the first time the token rotates.
  4. Body. Leave empty; this is a GET.

Press Perform test to fire a real request before you trust the cadence. If the response is 200 with a checkouts array (possibly empty), you're done. If you see 401 or 403, the token is wrong or the scope is missing. If you see 429, jump to the rate-limit section below.

Step 4: Handle rate limits (429 retry)

Shopify rate-limits the Admin API per shop. The defaults from the rate limits docs are:

  • REST Admin API: 2 requests per second on standard plans, 4 on Shopify Plus, with a leaky-bucket model that allows short bursts above the steady-state rate.
  • Storefront API and GraphQL Admin API have their own quotas; this post is about REST, but the windowed-GET pattern translates directly.

For polling at every 10 minutes, this is comfortably under budget. Six requests per hour per shop, against a 2-per-second limit, leaves plenty of headroom. The pattern starts to matter when:

  • You poll multiple endpoints per cadence (checkouts, orders, customers), each as a separate request.
  • You scale to many shops without spreading the requests in time. If 22 shops all fire at minute 0, that's 22 requests in the same second.

Shopify's bucket is per shop, so 22 shops firing in the same second won't trip Shopify's limit, but they will land 22 responses on your downstream consumer at once. The simplest fix is a small stagger: offset the schedules across the minute so the fires spread out instead of stacking on the same wall-clock second. Crontap lets you set the cadence per schedule; offsetting the start minute is a property of the cron expression itself (*/10 * * * * vs 1-59/10 * * * * vs 2-59/10 * * * *).
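Generating the staggered expressions is mechanical. A sketch, assuming you simply offset each shop by its index modulo the period:

```python
def staggered_cron(index, period=10):
    """Cron expression for a `period`-minute cadence, offset by shop index."""
    offset = index % period
    if offset == 0:
        return f"*/{period} * * * *"
    # e.g. offset 3 fires at minutes 3, 13, 23, 33, 43, 53
    return f"{offset}-59/{period} * * * *"

# One expression per shop; hypothetical shop names for illustration.
shops = [f"shop-{i:02d}" for i in range(22)]
schedules = {shop: staggered_cron(i) for i, shop in enumerate(shops)}
```

With period=10 the 22 shops wrap around the ten offsets, so at most three shops share any given minute instead of all 22 sharing minute zero.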

For 429 responses, the retry behaviour is built into how Crontap fires:

  • A 429 response is logged with the response body and headers (Shopify returns Retry-After).
  • Failure alerts fire on the 429 the same way they fire on any other 4xx.
  • Crontap will not automatically replay the request inside the same cadence window; the next fire happens on the next scheduled tick. If you want strict back-pressure-aware retries, the cleanest pattern is to absorb the 429 on a small consumer that you point Crontap at, rather than at Shopify directly. See "Worked example" below.

Fix this in 60 seconds with Crontap. Free tier available. No credit card. Schedule your first job →

Worked example: every 10 minutes in Europe/Berlin

Here's the pattern in action for a multi-shop Shopify operator running 22 stores, polling abandoned checkouts every 10 minutes, with each store's polling schedule pinned to the store's configured timezone.

For one shop in Europe/Berlin:

  • URL. https://acme-store.myshopify.com/admin/api/2024-01/checkouts.json?created_at_min=2026-04-27T08:00:00Z&limit=250
  • Method. GET
  • Headers. X-Shopify-Access-Token: shpat_*** and Accept: application/json
  • Cadence. */10 * * * *
  • Timezone. Europe/Berlin
  • Failure alert. Slack webhook into #shopify-ops, fire on 4xx and 5xx

Crontap fires the GET every 10 minutes in Berlin local time. The response lands in the Crontap log with status code, headers, response body, and timing. The downstream consumer reads from the log via Crontap's API, or you wire a "call on success" webhook that forwards each response to your CRM endpoint.

For the next shop in Australia/Sydney, the steps are identical with a different myshopify subdomain, a different access token, and Australia/Sydney as the timezone. Across 22 stores the schedule footprint is 22 cron entries, each parameterised by URL, token, and timezone, with one dashboard view that shows the latest fire status per shop.

If you want a queue between Crontap and Shopify (to absorb 429s, batch responses, retry with backoff), expose /poll-shop/{shop} on your own backend, have the consumer make the actual Shopify call with full retry control, and point Crontap at your consumer instead of at Shopify. The cadence stays the same; the rate-limit management moves from Crontap-level config to consumer-level code.
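The consumer's retry loop is short. A sketch of back-pressure-aware retrying that honours Retry-After, with the request and sleep functions injectable so the logic is testable; this is plain Python, not Crontap or Shopify API code:

```python
import time

def fetch_with_backoff(do_request, max_attempts=3, sleep=time.sleep):
    """Call do_request(); on a 429, wait out Retry-After before retrying.

    do_request returns (status, headers, body). sleep is injectable for tests.
    """
    for attempt in range(max_attempts):
        status, headers, body = do_request()
        if status != 429:
            return status, body
        # Shopify's REST 429 responses carry Retry-After in seconds.
        delay = float(headers.get("Retry-After", 2.0))
        if attempt < max_attempts - 1:
            sleep(delay)
    return status, body  # still 429 after max_attempts; surface it
```

Your /poll-shop/{shop} handler wraps the real Shopify call in do_request and returns whatever fetch_with_backoff settles on; Crontap only ever sees the consumer's final status.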

FAQ

Can Shopify Flow do this?

Not on its own. Flow fires on events like "order created", not on a clock. There is no "every 10 minutes" trigger. The Flow trigger reference is explicit. For scheduled REST polling against the Admin API, you need an external scheduler.

Do I need a public app, or is a custom app enough?

A custom app is enough for one shop's own polling. Custom apps are per-shop and cannot be distributed; they're meant for the merchant who owns the shop. If you're building for many merchants you don't operate, you'd build a public app with OAuth, store the per-shop token your app receives, and point Crontap at one schedule per installed shop, each with that shop's token.

What about webhooks?

Webhooks and polling complement each other. Use webhooks for the real-time path; the order-created handler should fire within seconds, not minutes. Use scheduled polling as the reconciliation backstop and for jobs that are inherently periodic (digest emails, abandoned-checkout sweeps, cross-system sync).

Can I poll GraphQL instead of REST?

Yes. The GraphQL Admin API has its own cost-based rate-limit budget and otherwise the same shape: pick the query, parameterise the window, fire on a cadence. The Crontap setup is identical except the URL is https://{shop}.myshopify.com/admin/api/2024-01/graphql.json, the method is POST, and the body carries the GraphQL query with Content-Type: application/json.
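For reference, the POST payload is just JSON with a query key. A sketch; the abandonedCheckouts field and its sub-fields here are illustrative, so check the GraphQL Admin API schema for your API version before relying on them:

```python
import json

# Illustrative query; verify field names against your API version's schema.
query = """
{
  abandonedCheckouts(first: 50) {
    edges { node { id createdAt } }
  }
}
"""

body = json.dumps({"query": query})
headers = {
    "X-Shopify-Access-Token": "shpat_xxx",  # placeholder token
    "Content-Type": "application/json",
}
```

In Crontap terms that means method POST, the same token header, Content-Type: application/json, and this JSON string as the request body.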

What if the response is paginated?

For polling, set limit=250 (the maximum) and accept that any window with more than 250 records will be incomplete on the first fire; the next fire's overlap window picks them up. If your shop genuinely has 250+ records every 15 minutes, switch to GraphQL with cursor pagination, or run a consumer that pages through on your behalf.
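If you do run a consumer that pages through, Shopify's REST pagination arrives as a Link response header carrying page_info cursors. A small parser sketch for pulling the rel="next" URL out of that header:

```python
import re

def next_page_url(link_header):
    """Extract the rel="next" URL from a Shopify REST Link header, if any."""
    if not link_header:
        return None
    # Header may contain several comma-separated <url>; rel="..." entries.
    for part in link_header.split(","):
        match = re.search(r'<([^>]+)>;\s*rel="next"', part)
        if match:
            return match.group(1)
    return None
```

The consumer loops: fetch, process, feed next_page_url back in, stop on None. Note that page_info URLs supersede created_at_min while paging; the window only applies to the first request.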

Will polling burn through my rate-limit budget?

At every 10 minutes, no. Six requests per hour against a 2-per-second budget is comfortable. The shape that gets close is many shops polling on the same wall-clock minute. Stagger the schedules across the minute boundary if you operate at scale.

Does Crontap retry 429s automatically?

Crontap logs the 429 with the Retry-After header and fires the failure alert. It does not auto-replay inside the same window. For strict back-pressure-aware retries, point Crontap at a consumer endpoint you control and absorb the 429 there with standard Retry-After semantics.
