Batch Screenshot API: Capture Multiple URLs Programmatically

February 21, 2026 · 8 min read

You need screenshots of 50 product pages. Or 200 competitor landing pages. Or every page on your own site for a visual QA audit. Doing it manually is out of the question. You need a batch screenshot API -- a way to feed in a list of URLs and get back images, fast.

This guide covers how to capture multiple website screenshots programmatically using GrabShot's screenshot API, with working code in curl, Node.js, and Python. We will also cover concurrency patterns, error handling, and real-world use cases.

Why Batch Screenshots?

Single screenshot APIs are straightforward: one URL in, one image out. But most real workflows involve multiple URLs:

- Competitor monitoring: capture dozens of landing pages on a schedule
- Visual QA: screenshot your own site's pages after every deployment
- Client reporting: snapshot each client site for monthly reports
- SEO audits: track visual changes across your target landing pages

The trick is doing this efficiently without hammering the API or losing track of failures.

Single Screenshot Basics

Before we batch, here is the single-URL call:

curl "https://grabshot.dev/api/screenshot?url=https://example.com&width=1280&height=800&format=png" \
  -H "X-API-Key: YOUR_API_KEY" \
  --output screenshot.png

That returns a PNG of the target page. The same endpoint supports format=jpeg, custom viewport sizes, full-page capture, and more parameters.
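
In Node (18+, which ships fetch and URLSearchParams), the same request URL can be assembled programmatically. This sketch only builds the URL; format=jpeg is the variation mentioned above, and full_page=true is covered later in this guide:

```javascript
// Assemble the single-capture URL. format=jpeg and full_page=true are the
// documented variations; all other values match the curl example above.
const params = new URLSearchParams({
  url: 'https://example.com',
  width: '1280',
  height: '800',
  format: 'jpeg',
  full_page: 'true',
});
const requestUrl = `https://grabshot.dev/api/screenshot?${params}`;
console.log(requestUrl);
```

URLSearchParams also percent-encodes the target URL for you, which matters once your list contains URLs with their own query strings.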

Batch Screenshots with a Shell Script

The simplest approach: loop through a file of URLs. Create urls.txt with one URL per line:

https://stripe.com
https://github.com
https://vercel.com
https://linear.app
https://notion.so

Then run them in parallel with xargs:

mkdir -p screenshots
cat urls.txt | xargs -P 5 -I {} sh -c '
  SLUG=$(echo "{}" | sed "s|https\?://||;s|/|_|g")
  curl -sG "https://grabshot.dev/api/screenshot" \
    --data-urlencode "url={}" -d "width=1280" -d "format=png" \
    -H "X-API-Key: YOUR_API_KEY" \
    --output "screenshots/${SLUG}.png"
  echo "Done: {}"
'

The -P 5 flag runs 5 requests concurrently. Adjust based on your plan's rate limit.

Batch Screenshots in Node.js

For production workflows, Node.js (18 or newer, for the built-in fetch) gives you better control over concurrency, retries, and error handling:

const fs = require('fs');

const API_KEY = process.env.GRABSHOT_API_KEY;
const BASE = 'https://grabshot.dev/api/screenshot';
const CONCURRENCY = 5;

const urls = [
  'https://stripe.com',
  'https://github.com',
  'https://vercel.com',
  'https://linear.app',
  'https://notion.so',
  'https://figma.com',
  'https://tailwindcss.com',
  'https://nextjs.org',
];

async function captureOne(url, retries = 2) {
  const params = new URLSearchParams({
    url,
    width: '1280',
    height: '800',
    format: 'png',
  });

  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const res = await fetch(`${BASE}?${params}`, {
        headers: { 'X-API-Key': API_KEY },
      });

      if (!res.ok) throw new Error(`HTTP ${res.status}`);

      const buffer = Buffer.from(await res.arrayBuffer());
      const slug = url.replace(/https?:\/\//, '').replace(/\//g, '_');
      fs.writeFileSync(`screenshots/${slug}.png`, buffer);

      console.log(`OK: ${url}`);
      return { url, status: 'ok' };
    } catch (err) {
      if (attempt === retries) {
        console.error(`FAIL: ${url} - ${err.message}`);
        return { url, status: 'error', error: err.message };
      }
      await new Promise(r => setTimeout(r, 1000 * (attempt + 1)));
    }
  }
}

async function batchCapture(urls, concurrency) {
  const results = [];
  const queue = [...urls];

  async function worker() {
    while (queue.length) {
      const url = queue.shift();
      results.push(await captureOne(url));
    }
  }

  fs.mkdirSync('screenshots', { recursive: true });
  await Promise.all(Array(concurrency).fill(null).map(worker));
  return results;
}

batchCapture(urls, CONCURRENCY).then(results => {
  const ok = results.filter(r => r.status === 'ok').length;
  console.log(`\nDone: ${ok}/${results.length} succeeded`);
});

Key patterns here:

- A simple worker pool caps in-flight requests at CONCURRENCY, with no extra dependencies
- Failed requests retry with a growing delay (1s, then 2s) before being marked as errors
- Every URL produces a result object, so failures are counted instead of silently dropped

Batch Screenshots in Python

Python with asyncio and aiohttp handles bulk captures efficiently:

import asyncio
import aiohttp
import os
from urllib.parse import urlparse

API_KEY = os.environ['GRABSHOT_API_KEY']
BASE = 'https://grabshot.dev/api/screenshot'
CONCURRENCY = 5

urls = [
    'https://stripe.com',
    'https://github.com',
    'https://vercel.com',
    'https://linear.app',
    'https://notion.so',
    'https://figma.com',
    'https://tailwindcss.com',
    'https://nextjs.org',
]

async def capture_one(session, sem, url, retries=2):
    params = {'url': url, 'width': '1280', 'height': '800', 'format': 'png'}
    headers = {'X-API-Key': API_KEY}

    async with sem:
        for attempt in range(retries + 1):
            try:
                async with session.get(BASE, params=params, headers=headers) as resp:
                    if resp.status != 200:
                        raise Exception(f'HTTP {resp.status}')

                    data = await resp.read()
                    slug = url.replace('https://', '').replace('http://', '').replace('/', '_')
                    path = f'screenshots/{slug}.png'
                    with open(path, 'wb') as f:
                        f.write(data)

                    print(f'OK: {url}')
                    return {'url': url, 'status': 'ok'}

            except Exception as e:
                if attempt == retries:
                    print(f'FAIL: {url} - {e}')
                    return {'url': url, 'status': 'error', 'error': str(e)}
                await asyncio.sleep(1 * (attempt + 1))

async def main():
    os.makedirs('screenshots', exist_ok=True)
    sem = asyncio.Semaphore(CONCURRENCY)

    async with aiohttp.ClientSession() as session:
        tasks = [capture_one(session, sem, url) for url in urls]
        results = await asyncio.gather(*tasks)

    ok = sum(1 for r in results if r['status'] == 'ok')
    print(f'\nDone: {ok}/{len(results)} succeeded')

asyncio.run(main())

The semaphore limits active requests to 5 at a time, just like the Node.js worker pool.

Handling Large Batches (100+ URLs)

When you are dealing with hundreds or thousands of URLs, a few extra considerations:

1. Respect Rate Limits

GrabShot's rate limits depend on your plan. The Pro plan handles high-volume workloads. Add a small delay between batches if you are hitting limits:

// Process in chunks of 20 with a pause between
for (let i = 0; i < urls.length; i += 20) {
  const chunk = urls.slice(i, i + 20);
  await batchCapture(chunk, CONCURRENCY);
  await new Promise(r => setTimeout(r, 2000)); // 2s pause
}

2. Save Metadata Alongside Images

For audits and reports, save a manifest with timestamps and status:

const manifest = results.map(r => ({
  ...r,
  timestamp: new Date().toISOString(),
  file: r.status === 'ok'
    ? `screenshots/${r.url.replace(/https?:\/\//, '').replace(/\//g, '_')}.png`
    : null,
}));

fs.writeFileSync('manifest.json', JSON.stringify(manifest, null, 2));
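
The slug expression now appears in both captureOne and the manifest; factoring it into one helper keeps filenames consistent across the pipeline (a sketch):

```javascript
// Turn a URL into the filesystem-safe slug used for screenshot filenames,
// matching the inline replace() chains used above.
function slugify(url) {
  return url.replace(/https?:\/\//, '').replace(/\//g, '_');
}
```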

3. Use Full-Page Capture Selectively

Full-page screenshots of long pages produce large files. For batch jobs, above-the-fold captures (default viewport) are usually enough. Reserve full_page=true for pages where you need the complete content.
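
One way to apply full-page capture selectively is an allowlist of pages that actually need it (the list and helper below are illustrative, not part of the API):

```javascript
// Pages that genuinely need complete content; everything else gets the
// default above-the-fold viewport. These entries are examples.
const FULL_PAGE_URLS = new Set(['https://example.com/changelog']);

function buildParams(url) {
  const params = new URLSearchParams({ url, width: '1280', format: 'png' });
  if (FULL_PAGE_URLS.has(url)) params.set('full_page', 'true');
  return params;
}
```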

Real-World Use Cases

Competitor Dashboard

Run a daily cron job that screenshots 20 competitor homepages. Store them by date. Build a simple viewer to compare changes over time. Pair it with visual regression testing to get automated diff alerts.

E-commerce QA

After every deployment, capture your top 50 product pages at both desktop (1280px) and mobile (375px) viewports. Flag any page where the image size drops below a threshold (likely a rendering error).

Agency Reporting

Screenshot every client website on the 1st of each month. Include the captures in automated PDF reports using PDFMagic for HTML-to-PDF conversion.

SEO Monitoring

Capture SERP results for your target keywords alongside your landing pages. Track visual changes in both. Combine with MetaPeek to audit meta tags at the same time.

Start Capturing Screenshots in Bulk

GrabShot's API handles the rendering so you can focus on your workflow. Free tier includes 25 captures/month to test your batch pipeline.

Try the API Free

Tips for Production Batch Pipelines

- Match concurrency to your plan's rate limit, and pause between chunks on large runs
- Retry transient failures with increasing delays before marking a URL as failed
- Write a manifest.json for every run so audits and reports have metadata to work from
- Slugify URLs consistently so repeat runs overwrite the same files
- Default to viewport captures; reserve full_page=true for pages that need complete content

Wrapping Up

Batch screenshots are just a loop around a single API call, but the details matter: concurrency control, retry logic, and result tracking turn a fragile script into a reliable pipeline. The code examples above work out of the box with GrabShot -- swap in your API key and start capturing.

For more advanced workflows, check out automated website monitoring and change detection with screenshots.