Need to capture website screenshots from your Ruby application? Whether you're building link previews, generating social sharing images, or archiving web pages, a screenshot API saves you from managing headless browsers yourself. In this guide, you'll learn how to capture screenshots using Ruby's standard library, the Faraday gem, and production-ready Rails patterns.
Running Puppeteer or Selenium on your own servers means dealing with Chrome binaries, memory management, font rendering, and scaling headaches. A screenshot API like GrabShot handles all of that. You send a URL, you get an image back. No dependencies, no browser processes eating your RAM.
This matters especially in Ruby/Rails apps where memory is already a concern. Adding headless Chrome to a Puma or Sidekiq worker can double your memory footprint overnight.
Before writing any Ruby, let's verify the API works:
```bash
curl "https://grabshot.dev/api/screenshot?url=https://example.com&width=1280&height=800" \
  -H "X-API-Key: YOUR_API_KEY" \
  --output screenshot.png
```
That's it. You get a PNG back. Now let's do the same in Ruby.
Ruby's standard library is all you need for simple captures:
```ruby
require 'net/http'
require 'uri'

def capture_screenshot(url, options = {})
  api_key = ENV['GRABSHOT_API_KEY']

  params = {
    url: url,
    width: options[:width] || 1280,
    height: options[:height] || 800,
    format: options[:format] || 'png',
    full_page: options[:full_page] || false
  }

  uri = URI("https://grabshot.dev/api/screenshot")
  uri.query = URI.encode_www_form(params)

  request = Net::HTTP::Get.new(uri)
  request['X-API-Key'] = api_key

  response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) do |http|
    http.request(request)
  end

  if response.code == '200'
    response.body
  else
    raise "Screenshot failed: #{response.code} - #{response.body}"
  end
end

# Usage
image_data = capture_screenshot('https://example.com', width: 1440, full_page: true)
File.binwrite('screenshot.png', image_data)
```
Most Rails apps already use Faraday for HTTP requests. Here's a reusable client class:
```ruby
require 'faraday'

class GrabshotClient
  BASE_URL = 'https://grabshot.dev/api'

  def initialize(api_key: ENV['GRABSHOT_API_KEY'])
    @conn = Faraday.new(url: BASE_URL) do |f|
      f.adapter Faraday.default_adapter
      f.headers['X-API-Key'] = api_key
      f.options.timeout = 30
      f.options.open_timeout = 10
    end
  end

  def screenshot(url, width: 1280, height: 800, format: 'png', full_page: false)
    response = @conn.get('screenshot', {
      url: url,
      width: width,
      height: height,
      format: format,
      full_page: full_page
    })
    raise "API error #{response.status}: #{response.body}" unless response.success?

    response.body
  end

  def screenshot_to_file(url, path, **options)
    data = screenshot(url, **options)
    File.binwrite(path, data)
    path
  end
end

# Usage
client = GrabshotClient.new
client.screenshot_to_file('https://example.com', 'output.png', width: 1440)
```
In a Rails app, you probably want to attach screenshots to models using Active Storage. Here's how:
```ruby
# app/models/page.rb
class Page < ApplicationRecord
  has_one_attached :screenshot
end

# app/services/screenshot_service.rb
class ScreenshotService
  def initialize
    @client = GrabshotClient.new
  end

  def capture_and_attach(page)
    image_data = @client.screenshot(page.url, width: 1280)

    page.screenshot.attach(
      io: StringIO.new(image_data),
      filename: "screenshot-#{page.id}.png",
      content_type: 'image/png'
    )
    page.update(screenshot_taken_at: Time.current)
  end
end

# In a controller or console:
ScreenshotService.new.capture_and_attach(page)
```
Screenshots take a few seconds. Never capture them in a web request. Use Active Job with Sidekiq (or any backend):
```ruby
# app/jobs/capture_screenshot_job.rb
class CaptureScreenshotJob < ApplicationJob
  queue_as :default
  retry_on StandardError, wait: :polynomially_longer, attempts: 3

  def perform(page_id)
    page = Page.find(page_id)
    ScreenshotService.new.capture_and_attach(page)
  end
end

# Enqueue it
CaptureScreenshotJob.perform_later(page.id)

# Or capture in bulk after import
Page.where(screenshot_taken_at: nil).find_each do |page|
  CaptureScreenshotJob.perform_later(page.id)
end
```
The `retry_on` declaration ensures transient failures (network timeouts, rate limits) get retried automatically, with the `:polynomially_longer` wait growing between attempts.
Need to capture many pages? Use Ruby's built-in threading or the concurrent-ruby gem:
```ruby
require 'concurrent'
require 'fileutils'

def batch_screenshots(urls, max_threads: 5)
  client = GrabshotClient.new
  pool = Concurrent::FixedThreadPool.new(max_threads)
  results = Concurrent::Hash.new
  FileUtils.mkdir_p('screenshots')

  urls.each do |url|
    pool.post do
      filename = url.gsub(/[^a-z0-9]/i, '_') + '.png'
      begin
        client.screenshot_to_file(url, "screenshots/#{filename}")
        results[url] = { status: :ok, file: filename }
      rescue => e
        results[url] = { status: :error, message: e.message }
      end
    end
  end

  pool.shutdown
  pool.wait_for_termination
  results
end

urls = ['https://github.com', 'https://ruby-lang.org', 'https://rubyonrails.org']
results = batch_screenshots(urls)
results.each { |url, r| puts "#{url}: #{r[:status]}" }
```
The free plan includes 25 screenshots per month, with no credit card required.

If your team also uses Node.js, the same API works identically. Check out our Node.js screenshot guide for JavaScript examples. The API is language-agnostic, so you can mix and match across services.
Production code needs to handle edge cases. Here's a robust wrapper:
```ruby
class GrabshotClient
  class RateLimitError < StandardError; end
  class ApiError < StandardError; end

  # Reopens the client so screenshot raises typed errors via handle_response
  # instead of a generic string error.
  def screenshot(url, width: 1280, height: 800, format: 'png', full_page: false)
    response = @conn.get('screenshot', {
      url: url, width: width, height: height,
      format: format, full_page: full_page
    })
    handle_response(response)
  end

  def screenshot_with_retry(url, retries: 3, **options)
    attempts = 0
    begin
      attempts += 1
      screenshot(url, **options)
    rescue RateLimitError
      if attempts <= retries
        sleep_time = 2 ** attempts # 2, 4, 8 seconds
        Rails.logger.warn("GrabShot rate limited, retrying in #{sleep_time}s")
        sleep(sleep_time)
        retry
      else
        raise
      end
    end
  end

  private

  def handle_response(response)
    case response.status
    when 200
      response.body
    when 429
      raise RateLimitError, "Rate limit exceeded"
    when 400..499
      raise ApiError, "Client error #{response.status}: #{response.body}"
    when 500..599
      raise ApiError, "Server error #{response.status}"
    else
      raise ApiError, "Unexpected status #{response.status}"
    end
  end
end
```
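To see the retry loop's behavior without hitting the API, the same pattern can be exercised against a throwaway stub (purely illustrative; real code would sleep between attempts as shown above):

```ruby
# Hypothetical stand-in for the retry loop, runnable without the API.
class RateLimitError < StandardError; end

def with_retry(retries: 3)
  attempts = 0
  begin
    attempts += 1
    yield attempts
  rescue RateLimitError
    if attempts <= retries
      # Production code sleeps 2 ** attempts seconds here; skipped for the demo.
      retry
    else
      raise
    end
  end
end

calls = 0
result = with_retry do |attempt|
  calls += 1
  raise RateLimitError if calls < 3 # fail twice, then succeed
  "ok after #{attempt} attempts"
end
puts result # => "ok after 3 attempts"
```

The third attempt succeeds, so the caller never sees the two rate-limit errors.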
A common use case: showing link previews when users paste URLs into your app. Here's a minimal implementation:
```ruby
# app/models/link_preview.rb
class LinkPreview < ApplicationRecord
  has_one_attached :thumbnail
  validates :url, presence: true, uniqueness: true

  def self.for(url)
    find_or_create_by(url: url).tap do |preview|
      CapturePreviewJob.perform_later(preview.id) unless preview.thumbnail.attached?
    end
  end
end

# app/jobs/capture_preview_job.rb
class CapturePreviewJob < ApplicationJob
  def perform(preview_id)
    preview = LinkPreview.find(preview_id)
    client = GrabshotClient.new
    data = client.screenshot(preview.url, width: 800, height: 600)

    preview.thumbnail.attach(
      io: StringIO.new(data),
      filename: "preview-#{preview.id}.png",
      content_type: 'image/png'
    )
  end
end
```
| Parameter | Type | Description |
|---|---|---|
| `url` | string | The URL to capture (required) |
| `width` | integer | Viewport width in pixels (default: 1280) |
| `height` | integer | Viewport height in pixels (default: 800) |
| `format` | string | Output format: `png`, `jpeg`, or `webp` |
| `full_page` | boolean | Capture the full scrollable page |
| `delay` | integer | Milliseconds to wait before capturing |
| `selector` | string | CSS selector to capture a specific element |
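As a quick sketch of how `delay` and `selector` fit into a request (parameter names from the table above; the target page and selector here are hypothetical), you can build the query string exactly as in the Net::HTTP example:

```ruby
require 'uri'

# Hypothetical example: capture only a pricing-table element,
# waiting two seconds for client-side rendering to finish.
params = {
  url: 'https://example.com/pricing',  # hypothetical target page
  selector: '#pricing-table',          # CSS selector for the element
  delay: 2000,                         # wait 2000 ms before capture
  format: 'webp'
}

uri = URI('https://grabshot.dev/api/screenshot')
uri.query = URI.encode_www_form(params)
puts uri.query
```

`encode_www_form` percent-encodes the selector (`#` becomes `%23`), so you can pass CSS selectors without escaping them by hand.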
See the full API documentation for all available options including custom headers, cookies, and JavaScript injection.
Capturing website screenshots in Ruby doesn't require managing headless browsers. With a screenshot API and a few lines of Ruby, you get reliable captures that scale independently from your application. The patterns above work whether you're building a small Sinatra app or a large Rails monolith.
The free tier gives you 25 screenshots per month to test with, which is enough to build a proof of concept before committing to a paid plan.