How to Capture Website Screenshots in Python (3 Methods)

February 22, 2026 · 10 min read

Taking screenshots of websites programmatically is a common requirement. You might need them for link previews, visual testing, PDF reports, social sharing cards, or archiving pages. Python gives you several solid options, each with different tradeoffs.

This guide covers three approaches: using a screenshot API (simplest), Selenium (most common), and Playwright (most modern). We'll look at real code, error handling, and when each method makes sense.

Method 1: Screenshot API (Recommended for Production)

The fastest way to get website screenshots in Python is through an API. No browser installation, no dependency management, no memory issues on your server. You send a URL, you get an image back.

Here's how it works with GrabShot:

Basic Screenshot

import requests

api_key = "your_api_key"
url = "https://example.com"

response = requests.get(
    "https://grabshot.dev/api/screenshot",
    params={
        "url": url,
        "width": 1280,
        "height": 800,
        "format": "png"
    },
    headers={"x-api-key": api_key}
)

if response.status_code == 200:
    with open("screenshot.png", "wb") as f:
        f.write(response.content)
    print(f"Saved screenshot ({len(response.content)} bytes)")
else:
    print(f"Error: {response.status_code} - {response.text}")

Batch Screenshots with Error Handling

In production you'll usually need to screenshot multiple URLs. Here's a more robust pattern:

import requests
import time
from pathlib import Path

class ScreenshotClient:
    def __init__(self, api_key, base_url="https://grabshot.dev/api"):
        self.api_key = api_key
        self.base_url = base_url
        self.session = requests.Session()
        self.session.headers["x-api-key"] = api_key

    def capture(self, url, width=1280, height=800, fmt="png",
                full_page=False, retries=3):
        params = {
            "url": url,
            "width": width,
            "height": height,
            "format": fmt,
            # requests would serialize a Python bool as "True"/"False";
            # most APIs expect the lowercase form
            "fullPage": str(full_page).lower()
        }

        for attempt in range(retries):
            try:
                resp = self.session.get(
                    f"{self.base_url}/screenshot",
                    params=params,
                    timeout=30
                )
                if resp.status_code == 200:
                    return resp.content
                elif resp.status_code == 429:
                    wait = int(resp.headers.get("Retry-After", 5))
                    time.sleep(wait)
                    continue
                else:
                    raise Exception(f"API error {resp.status_code}: {resp.text}")
            except requests.Timeout:
                if attempt == retries - 1:
                    raise
                time.sleep(2 ** attempt)

        raise Exception(f"Failed after {retries} retries")

    def capture_batch(self, urls, output_dir="screenshots"):
        Path(output_dir).mkdir(exist_ok=True)
        results = []

        for url in urls:
            slug = url.replace("https://", "").replace("/", "_")[:50]
            filename = f"{output_dir}/{slug}.png"

            try:
                data = self.capture(url)
                Path(filename).write_bytes(data)
                results.append({"url": url, "file": filename, "ok": True})
                print(f"OK  {url}")
            except Exception as e:
                results.append({"url": url, "error": str(e), "ok": False})
                print(f"ERR {url}: {e}")

        return results

# Usage
client = ScreenshotClient("your_api_key")
urls = [
    "https://github.com",
    "https://news.ycombinator.com",
    "https://dev.to",
]
results = client.capture_batch(urls)
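One detail worth hardening: the slug logic above only strips the scheme and slashes, so query strings and other special characters end up in filenames. A more defensive version (a sketch; safe_slug is our own name, not part of any API) might look like:

```python
import re

def safe_slug(url, max_len=50):
    """Turn a URL into a filesystem-safe filename fragment."""
    slug = re.sub(r"^https?://", "", url)          # drop the scheme
    slug = re.sub(r"[^A-Za-z0-9.-]+", "_", slug)   # collapse unsafe runs to "_"
    return slug[:max_len]
```

For example, safe_slug("https://example.com/a/b?q=1") yields example.com_a_b_q_1, which stays valid on every common filesystem.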

The API handles all the browser complexity for you: JavaScript rendering, cookie banners, lazy-loaded images, responsive layouts. You can test it interactively before writing any code.

Try GrabShot Free

25 free screenshots per month. No credit card required.


Method 2: Selenium

Selenium is the established workhorse for browser automation in Python. It's widely supported and well-documented, but requires managing a browser binary and driver.

Setup

pip install selenium webdriver-manager

Take a Screenshot

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

def screenshot_with_selenium(url, output="screenshot.png",
                              width=1280, height=800):
    options = Options()
    options.add_argument("--headless=new")
    options.add_argument("--no-sandbox")
    options.add_argument("--disable-dev-shm-usage")
    options.add_argument(f"--window-size={width},{height}")

    service = Service(ChromeDriverManager().install())
    driver = webdriver.Chrome(service=service, options=options)

    try:
        driver.get(url)  # blocks until the document's load event fires
        # Give late-loading JavaScript a moment to render (adjust as needed)
        import time
        time.sleep(2)
        driver.save_screenshot(output)
        print(f"Screenshot saved to {output}")
    finally:
        driver.quit()

screenshot_with_selenium("https://example.com")

Full-Page Screenshot

Getting a full-page screenshot in Selenium requires a bit more work since save_screenshot only captures the viewport:

def full_page_screenshot(driver, output="full_page.png"):
    import time

    # documentElement is more reliable than body for measuring page
    # dimensions, so take the larger of the two heights
    total_height = driver.execute_script(
        "return Math.max(document.body.scrollHeight,"
        " document.documentElement.scrollHeight)"
    )
    viewport_width = driver.execute_script(
        "return document.documentElement.scrollWidth"
    )

    # Resize the window so the whole page fits in one viewport
    driver.set_window_size(viewport_width, total_height)
    time.sleep(1)  # let the layout settle

    driver.save_screenshot(output)

When to use Selenium: You need it if you're already using Selenium for other testing, or if you need to interact with the page (click buttons, fill forms) before capturing.

Method 3: Playwright

Playwright is the newer alternative from Microsoft. It has better defaults for screenshots, handles full-page captures natively, and is generally faster than Selenium for this use case.

Setup

pip install playwright
playwright install chromium

Basic Screenshot

from playwright.sync_api import sync_playwright

def screenshot_with_playwright(url, output="screenshot.png",
                                width=1280, height=800, full_page=False):
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page(viewport={"width": width, "height": height})

        page.goto(url, wait_until="networkidle")
        page.screenshot(path=output, full_page=full_page)

        browser.close()
        print(f"Screenshot saved to {output}")

screenshot_with_playwright("https://example.com", full_page=True)

Async Version for High Throughput

import asyncio
from pathlib import Path
from playwright.async_api import async_playwright

async def capture_many(urls, concurrency=5):
    Path("screenshots").mkdir(exist_ok=True)  # output dir must exist
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        semaphore = asyncio.Semaphore(concurrency)

        async def capture_one(url):
            async with semaphore:
                page = await browser.new_page(
                    viewport={"width": 1280, "height": 800}
                )
                try:
                    await page.goto(url, wait_until="networkidle",
                                    timeout=15000)
                    slug = url.replace("https://", "").replace("/", "_")[:50]
                    await page.screenshot(path=f"screenshots/{slug}.png")
                    return {"url": url, "ok": True}
                except Exception as e:
                    return {"url": url, "ok": False, "error": str(e)}
                finally:
                    await page.close()

        tasks = [capture_one(url) for url in urls]
        results = await asyncio.gather(*tasks)
        await browser.close()
        return results

# Usage
urls = ["https://github.com", "https://dev.to", "https://news.ycombinator.com"]
results = asyncio.run(capture_many(urls))
for r in results:
    status = "OK" if r["ok"] else f"FAIL: {r['error']}"
    print(f"{r['url']}: {status}")
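Both batch helpers in this guide return the same shape of result dicts (each with an "ok" key), so a tiny summary function works for either. This is our own sketch, not part of any library:

```python
def summarize(results):
    """Count successes and failures in a list of result dicts."""
    ok = sum(1 for r in results if r["ok"])
    return {"ok": ok, "failed": len(results) - ok}
```

Handy for logging a one-line outcome after a large run, or for deciding whether to retry the failed subset.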

When to use Playwright: When you need full-page screenshots, device emulation, or high-throughput capturing on your own infrastructure.

Comparison: Which Method Should You Use?

Factor               Screenshot API     Selenium            Playwright
Setup time           ~1 minute          ~10 minutes         ~5 minutes
Server requirements  None (API call)    Chrome + RAM        Chromium + RAM
Full-page support    Built-in           Manual workaround   Built-in
Cost at 10k/month    ~$29               Server costs        Server costs
Maintenance          Zero               Driver updates      Browser updates
Reliability          High (managed)     Medium              High
Page interaction     Limited            Full control        Full control

Use an API if screenshots are a feature in your product, not the product itself. You don't want to manage headless browsers on your servers.

Use Selenium if you're already invested in its ecosystem or need complex page interactions before capturing.

Use Playwright if you want self-hosted screenshots with modern tooling and async support.

Common Pitfalls

Whichever method you pick, the same few mistakes come up again and again:

- Capturing too early: screenshots taken before JavaScript finishes rendering come out blank or half-loaded. Wait for network idle (Playwright) or add a short delay (Selenium).
- Viewport vs. full page: Selenium's save_screenshot captures only the visible viewport; use full_page=True in Playwright or the fullPage parameter with the API.
- Rate limits: back off on HTTP 429 responses and honor the Retry-After header, as in the retry loop above.
- Browser memory: each headless Chromium instance can consume hundreds of MB, so cap concurrency when self-hosting.

Quick Reference: curl One-Liner

Sometimes you just want a screenshot from the command line. Here's the curl equivalent:

curl -o screenshot.png \
  "https://grabshot.dev/api/screenshot?url=https://example.com&width=1280&height=800&format=png" \
  -H "x-api-key: your_api_key"
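If you want to see exactly what that request sends, the query string can be assembled by hand with the standard library (same parameters as the earlier Python example; the values are illustrative). Note that urlencode percent-encodes the nested URL, which is what requests does under the hood when you pass params= and what makes URLs with their own query strings safe to include:

```python
from urllib.parse import urlencode

params = {"url": "https://example.com", "width": 1280,
          "height": 800, "format": "png"}
query = urlencode(params)  # percent-encodes the nested URL for you
endpoint = f"https://grabshot.dev/api/screenshot?{query}"
print(endpoint)
```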

Wrapping Up

For most Python projects, a screenshot API is the pragmatic choice. You skip the browser management headache entirely and get reliable results with a few lines of code. If you need more control or want to keep everything on your own servers, Playwright is the modern pick.

GrabShot's API documentation has more examples including device emulation, PDF generation, and dark mode captures. You can try it free with 25 screenshots per month.