You ship a CSS fix on Friday afternoon. Tests pass. Linter is happy. You merge and deploy. Monday morning, your designer pings you: the hero section is broken on mobile, the pricing cards overlap, and the footer has vanished on Safari.
Unit tests and integration tests are great at catching logic bugs, but they are blind to how your app actually looks. Visual regressions slip through because no one is comparing what the page looked like before and after every deploy.
That is where a screenshot API in your CI/CD pipeline changes everything. Instead of eyeballing staging, you automate it: capture screenshots of key pages on every pull request, compare them against a baseline, and block the merge if something looks off.
You could spin up a headless browser in your pipeline. Many teams do. But it comes with real costs: installing and pinning browser binaries on every CI runner, rendering differences between runner images that show up as flaky diffs, extra minutes added to every pipeline run, and ongoing maintenance whenever the browser or driver updates.
A screenshot API offloads all of that. You make an HTTP request, you get a pixel-perfect screenshot back. The rendering environment is consistent across runs, which means fewer false positives and faster pipelines.
Here is the pattern that works for most teams:
The simplest starting point: using GrabShot's screenshot API, you can capture any URL in one request:
```bash
curl "https://grabshot.dev/api/screenshot?url=https://staging.example.com&width=1440&height=900&fullPage=false" \
  -H "X-API-Key: YOUR_API_KEY" \
  --output homepage-current.png
```
Capture mobile too:
```bash
curl "https://grabshot.dev/api/screenshot?url=https://staging.example.com&width=375&height=812&fullPage=false&device=mobile" \
  -H "X-API-Key: YOUR_API_KEY" \
  --output homepage-mobile-current.png
```
Here is a practical Node.js script that captures screenshots and compares them pixel-by-pixel using pixelmatch:
```javascript
const https = require('https');
const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

const API_KEY = process.env.GRABSHOT_API_KEY;
const BASE_URL = process.env.STAGING_URL || 'https://staging.example.com';

const PAGES = [
  { name: 'homepage', path: '/' },
  { name: 'pricing', path: '/pricing' },
  { name: 'docs', path: '/docs' },
];

async function captureScreenshot(url, outputPath) {
  const apiUrl = `https://grabshot.dev/api/screenshot?url=${encodeURIComponent(url)}&width=1440&height=900&format=png`;
  return new Promise((resolve, reject) => {
    https.get(apiUrl, { headers: { 'X-API-Key': API_KEY } }, (res) => {
      // Don't write an error body to disk as if it were a PNG
      if (res.statusCode !== 200) {
        res.resume();
        reject(new Error(`Screenshot API returned ${res.statusCode} for ${url}`));
        return;
      }
      const file = fs.createWriteStream(outputPath);
      res.pipe(file);
      file.on('finish', () => { file.close(); resolve(); });
      file.on('error', reject);
    }).on('error', reject);
  });
}

function compareImages(baselinePath, currentPath, diffPath) {
  const baseline = PNG.sync.read(fs.readFileSync(baselinePath));
  const current = PNG.sync.read(fs.readFileSync(currentPath));
  const { width, height } = baseline;
  // pixelmatch requires matching dimensions; treat a size change as a full diff
  if (current.width !== width || current.height !== height) {
    return { mismatchedPixels: width * height, diffPercent: '100.00' };
  }
  const diff = new PNG({ width, height });
  const mismatchedPixels = pixelmatch(
    baseline.data, current.data, diff.data,
    width, height, { threshold: 0.1 }
  );
  fs.writeFileSync(diffPath, PNG.sync.write(diff));
  const totalPixels = width * height;
  const diffPercent = ((mismatchedPixels / totalPixels) * 100).toFixed(2);
  return { mismatchedPixels, diffPercent };
}

async function run() {
  fs.mkdirSync('screenshots', { recursive: true });
  const results = [];
  for (const page of PAGES) {
    const url = `${BASE_URL}${page.path}`;
    const currentPath = `screenshots/${page.name}-current.png`;
    const baselinePath = `screenshots/${page.name}-baseline.png`;
    const diffPath = `screenshots/${page.name}-diff.png`;
    console.log(`Capturing ${page.name}...`);
    await captureScreenshot(url, currentPath);
    if (fs.existsSync(baselinePath)) {
      const result = compareImages(baselinePath, currentPath, diffPath);
      console.log(`  ${page.name}: ${result.diffPercent}% different`);
      results.push({ page: page.name, ...result });
    } else {
      console.log(`  No baseline for ${page.name}, saving current as baseline`);
      fs.copyFileSync(currentPath, baselinePath);
    }
  }
  const failed = results.filter(r => parseFloat(r.diffPercent) > 0.5);
  if (failed.length > 0) {
    console.error('\nVisual regression detected:');
    failed.forEach(f => console.error(`  ${f.page}: ${f.diffPercent}% changed`));
    process.exit(1);
  }
  console.log('\nAll pages within threshold.');
}

run().catch(err => { console.error(err); process.exit(1); });
```
Drop this into `.github/workflows/visual-qa.yml`:
```yaml
name: Visual QA
on:
  pull_request:
    branches: [main]
jobs:
  visual-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install dependencies
        run: npm install pngjs pixelmatch
      - name: Download baselines
        uses: actions/cache@v4
        with:
          path: screenshots/
          key: visual-baselines-${{ github.base_ref }}
      - name: Wait for preview deploy
        run: sleep 30 # or use a deployment status check
      - name: Run visual comparison
        env:
          GRABSHOT_API_KEY: ${{ secrets.GRABSHOT_API_KEY }}
          STAGING_URL: ${{ env.PREVIEW_URL }}
        run: node visual-compare.js
      - name: Upload diff artifacts
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: visual-diffs
          path: screenshots/*-diff.png
```
When the check fails, reviewers can download the diff images to see exactly what changed, pixel by pixel.
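The `sleep 30` in the workflow is the weak link: a slow preview deploy still races the check. A more robust approach is to poll the preview URL until it responds. A minimal Python sketch of that idea (the `wait_for_deploy` helper, its parameters, and the injectable `probe` hook are illustrative, not part of GrabShot or GitHub Actions):

```python
import time
import urllib.error
import urllib.request


def wait_for_deploy(url, timeout=300, interval=5, probe=None):
    """Poll `url` until it returns HTTP 200 or `timeout` seconds elapse.

    Connection errors and non-2xx responses count as "not ready yet".
    `probe` is injectable for testing; by default it issues a real request.
    """
    def default_probe(u):
        try:
            with urllib.request.urlopen(u, timeout=10) as resp:
                return resp.status
        except urllib.error.URLError:
            return None

    probe = probe or default_probe
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe(url) == 200:
            return True
        time.sleep(interval)
    return False
```

Calling `wait_for_deploy(os.environ['STAGING_URL'])` before the comparison step, and failing the job when it returns `False`, replaces the fixed sleep with an actual readiness check.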
If your team runs Python, here is a stripped-down version for GitLab:
```python
import os

import numpy as np
import requests
from PIL import Image

API_KEY = os.environ['GRABSHOT_API_KEY']
STAGING_URL = os.environ.get('STAGING_URL', 'https://staging.example.com')


def capture(url, output):
    resp = requests.get(
        'https://grabshot.dev/api/screenshot',
        params={'url': url, 'width': 1440, 'height': 900, 'format': 'png'},
        headers={'X-API-Key': API_KEY},
    )
    resp.raise_for_status()
    with open(output, 'wb') as f:
        f.write(resp.content)


def diff_images(baseline_path, current_path):
    baseline = np.array(Image.open(baseline_path))
    current = np.array(Image.open(current_path))
    if baseline.shape != current.shape:
        return 100.0  # dimensions changed, definitely different
    diff = np.abs(baseline.astype(int) - current.astype(int))
    changed_pixels = np.sum(diff.max(axis=2) > 25)
    total_pixels = baseline.shape[0] * baseline.shape[1]
    return round((changed_pixels / total_pixels) * 100, 2)


pages = ['/', '/pricing', '/docs', '/login']
for path in pages:
    name = path.strip('/') or 'homepage'
    capture(f'{STAGING_URL}{path}', f'{name}-current.png')
    baseline = f'baselines/{name}.png'
    if os.path.exists(baseline):
        pct = diff_images(baseline, f'{name}-current.png')
        print(f'{name}: {pct}% diff')
        if pct > 0.5:
            raise SystemExit(f'Visual regression on {name}: {pct}%')
    else:
        print(f'{name}: no baseline, saving')
        os.makedirs('baselines', exist_ok=True)
        os.rename(f'{name}-current.png', baseline)
```
And the GitLab CI stage:
```yaml
visual-qa:
  stage: test
  image: python:3.12
  script:
    - pip install requests pillow numpy
    - python visual_compare.py
  artifacts:
    when: on_failure
    paths:
      - "*-current.png"
```
You do not need to screenshot every page. Focus on the ones that break most often:
| Page | Why It Matters |
|---|---|
| Homepage | First impression, most visited |
| Pricing | Revenue-critical, complex layout |
| Sign-up / Login | Conversion funnel, must work |
| Dashboard | Core product, heavy CSS |
| Docs / Blog | SEO landing pages, content layout |
Start with 5-10 pages. You can always add more as you identify fragile areas.
A 0% diff threshold will drive you insane. Subpixel rendering, anti-aliasing, and dynamic content (timestamps, avatars) cause tiny differences every time. In practice, two knobs work together: a per-pixel sensitivity (the `threshold: 0.1` passed to pixelmatch, or the `> 25` channel delta in the Python version) that ignores near-identical colors, and a page-level tolerance (the 0.5% gate in both scripts) that allows a small number of changed pixels before failing the build.
You can also mask specific regions (like timestamps or user-generated content) to reduce noise. Crop those areas out before comparing, or overlay a black rectangle on both images at known coordinates.
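One way to implement the overlay approach is to paint the same opaque rectangle onto both images before diffing, so whatever renders inside it can never register as a change. A minimal sketch of the idea on a row-major grid of RGB tuples (the coordinates in the usage note are hypothetical; with pngjs or Pillow you would apply the same operation to the decoded pixel buffer):

```python
def mask_region(pixels, x, y, width, height, fill=(0, 0, 0)):
    """Overwrite a rectangle in a row-major grid of (r, g, b) tuples.

    Apply the identical call to both baseline and current images before
    comparing, so dynamic content inside the rectangle is ignored.
    Clamps to the image bounds rather than raising.
    """
    for row in range(y, min(y + height, len(pixels))):
        for col in range(x, min(x + width, len(pixels[row]))):
            pixels[row][col] = fill
    return pixels
```

For example, `mask_region(img, 1200, 20, 200, 40)` would hide a timestamp near the top-right corner of a 1440-wide capture.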
Do not just test desktop. The GrabShot API lets you specify viewport dimensions, so you can cover the breakpoints that matter:
```bash
# Desktop (1440px)
curl "https://grabshot.dev/api/screenshot?url=https://staging.example.com&width=1440&height=900" \
  -H "X-API-Key: $KEY" -o desktop.png

# Tablet (768px)
curl "https://grabshot.dev/api/screenshot?url=https://staging.example.com&width=768&height=1024" \
  -H "X-API-Key: $KEY" -o tablet.png

# Mobile (375px)
curl "https://grabshot.dev/api/screenshot?url=https://staging.example.com&width=375&height=812" \
  -H "X-API-Key: $KEY" -o mobile.png
```
Three viewports, three API calls. No browser installation, no flakiness. Each screenshot comes back in under two seconds.
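Scripting those calls keeps the viewport list in one place. A sketch that builds the request URLs with the same query parameters used throughout this post (the viewport names are just labels, and `screenshot_urls` is a hypothetical helper, not part of any SDK):

```python
from urllib.parse import urlencode

# Breakpoints matching the curl examples: desktop, tablet, mobile
VIEWPORTS = {'desktop': (1440, 900), 'tablet': (768, 1024), 'mobile': (375, 812)}


def screenshot_urls(target_url, api_base='https://grabshot.dev/api/screenshot'):
    """Return {viewport_name: request_url} for each breakpoint."""
    urls = {}
    for name, (width, height) in VIEWPORTS.items():
        query = urlencode({'url': target_url, 'width': width,
                           'height': height, 'format': 'png'})
        urls[name] = f'{api_base}?{query}'
    return urls
```

Each returned URL can then be fetched with the `X-API-Key` header, exactly as in the curl examples above.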
GrabShot gives you consistent, fast screenshots with a simple API. Free tier includes 25 screenshots per month, enough to test your setup.
Failing the build is useful, but showing the team what changed is even better. You can post the diff results directly as PR comments using the GitHub API:
```bash
# Comment on the PR with the diff summary (simplified)
curl -s -X POST "https://api.github.com/repos/$REPO/issues/$PR/comments" \
  -H "Authorization: token $GITHUB_TOKEN" \
  -d "{\"body\": \"## Visual Diff Detected\\n\\nChanged: ${DIFF_PERCENT}%\"}"
```
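Hand-escaping JSON in shell gets fragile as the comment body grows. If the rest of your tooling is Python, building the payload there is safer; a sketch where `image_url` is assumed to point at wherever you host the diff artifact (that hosting step is outside this snippet):

```python
import json


def build_comment_payload(page, diff_percent, image_url=None):
    """Build the JSON body for GitHub's 'create issue comment' endpoint."""
    lines = ['## Visual Diff Detected', '',
             f'**{page}**: {diff_percent}% of pixels changed']
    if image_url:
        # Markdown image syntax renders the diff inline in the PR thread
        lines += ['', f'![diff for {page}]({image_url})']
    return json.dumps({'body': '\n'.join(lines)})
```

POST the returned string to `/repos/{owner}/{repo}/issues/{pr}/comments` with a token that has `repo` scope, as in the curl version above.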
Some teams upload the before/current/diff as a triptych so reviewers can see exactly what changed at a glance, without downloading artifacts.
Let us do the math. If you have 10 critical pages, 3 viewports each, that is 30 screenshots per PR. With a team pushing 10 PRs per day, that is 300 screenshots daily, or about 9,000 per month.
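The arithmetic, spelled out (the team-size figures are this post's example, not a benchmark):

```python
pages = 10          # critical pages under test
viewports = 3       # desktop, tablet, mobile
prs_per_day = 10
days_per_month = 30  # the post rounds a month to 30 days

per_pr = pages * viewports            # screenshots per pull request
per_day = per_pr * prs_per_day
per_month = per_day * days_per_month
print(per_pr, per_day, per_month)     # 30 300 9000
```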
With GrabShot's Pro plan at $29/month, you get more than enough volume for a mid-sized team. Compare that to the engineering hours lost debugging a visual regression that made it to production.
You will catch your first visual regression within the first week. It is one of those tools that pays for itself immediately.