
How To Take Screenshots In Python?


Headless browser automation libraries offer a wide range of configuration options that can be applied when taking screenshots. In this guide, we'll explain how to take Python screenshots with Selenium and Playwright. Then, we'll explore common browser tips and tricks for customizing web page captures. Let's get started!

Basic Screenshot Functionalities

In this guide, we'll start by covering the core Selenium and Playwright methods, including the installation required to take Python screenshots. Then, we'll explore common functionalities to take customized Selenium and Playwright screenshots.

Selenium Screenshots

Before exploring how to take screenshots with Selenium in Python, let's install it. Use the below pip command to install Selenium alongside webdriver-manager:

pip install selenium webdriver-manager

We'll be using the webdriver-manager Python library to automatically download the required browser drivers:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))

Now that the required installation is ready, let's use Selenium with Python to take screenshots:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))

# request target web page
driver.get("https://web-scraping.dev/products")

# take screenshot and directly save it
driver.save_screenshot('products.png')

# image as bytes
image_bytes = driver.get_screenshot_as_png()

# image as base64 string
base64_string = driver.get_screenshot_as_base64()

The above script for taking Selenium screenshots is fairly straightforward. We use the save_screenshot method to capture the full driver viewport and save the image to the products.png file. As an alternative to saving directly to disk, other methods return the raw image data as bytes or as a base64 string for further processing.
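If you go the bytes or base64 route, persisting the data yourself only takes a line each. Here's a minimal sketch reusing the image_bytes and base64_string variables from the snippet above:

import base64
from pathlib import Path
# ....

# write the raw PNG bytes returned by get_screenshot_as_png()
Path("products-from-bytes.png").write_bytes(image_bytes)

# decode the base64 string returned by get_screenshot_as_base64() and write it
Path("products-from-base64.png").write_bytes(base64.b64decode(base64_string))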

For further details on Selenium, refer to our dedicated guide.

Playwright Screenshots

The Playwright API is available in several programming languages; here we'll take screenshots with Playwright for Python. Install its Python package using the below pip command:

pip install playwright

Next, install the required Playwright browser binaries:

playwright install chromium # alternatively install `firefox` or `webkit`

To take a Playwright screenshot, we can use the .screenshot method:

from pathlib import Path
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    context = browser.new_context()
    page = context.new_page()

    # request target web page
    page.goto('https://web-scraping.dev/products')

    # take screenshot and directly save it
    page.screenshot(path="products.png")

    # or screenshot as bytes
    image_bytes = page.screenshot()
    Path("products.png").write_bytes(image_bytes)

Above, we start by launching a new Playwright browser instance and opening a new tab inside it. Then, we screenshot the page and save it to the products.png file, either directly through the path argument or by writing the returned bytes ourselves.
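Playwright's screenshot method also accepts a few output options. For example, here's a small sketch (our own addition, not shown in the original snippet) that saves a JPEG at reduced quality instead of the default PNG:

# .... inside the same `with sync_playwright()` block
# save a JPEG at reduced quality to cut down file size
page.screenshot(path="products.jpg", type="jpeg", quality=80)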

Have a look at our dedicated guide on Playwright for further details on using it for web scraping.

Waits and Timeouts

Many web page elements, such as images, are loaded dynamically. Hence, correctly waiting for them to load is crucial to prevent incomplete website screenshots. Let's explore different techniques for defining waits and timeouts.

Fixed Timeouts

Fixed timeouts are the most basic type of headless browser wait functionality. By waiting a set amount of time before capturing the screenshot, we give all DOM elements a chance to load.

Selenium

import time
# ....

driver.get("https://web-scraping.dev/products")

# wait for 10 seconds before taking screenshot
time.sleep(10)

driver.save_screenshot('products.png')

Playwright

# ....

with sync_playwright() as p:
    # ....
    page.goto('https://web-scraping.dev/products')

    # wait for 10 seconds before taking screenshot
    page.wait_for_timeout(10000)

    page.screenshot(path="products.png")

Above, we use Playwright's wait_for_timeout method to define a fixed wait condition before proceeding with the web page screenshot. Since Selenium doesn't provide built-in methods for fixed timeouts, we use Python's built-in time module.

Selectors

Dynamic wait conditions involve waiting for specific element selectors to appear on the page before proceeding. If the selector is found within the defined timeout, the waiting stops early; otherwise, a timeout error is raised.

Selenium

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.common.by import By
# ....

driver.get("https://web-scraping.dev/products")

_timeout = 10 # set the maximum timeout to 10 seconds
wait = WebDriverWait(driver, _timeout)
wait.until(expected_conditions.presence_of_element_located(
    (By.XPATH, "//div[@class='products']") # wait for XPath selector
    # (By.CSS_SELECTOR, "div.products") # wait for CSS selector
    )
)

driver.save_screenshot("products.png")

Playwright

# ....

with sync_playwright() as p:
    # ....
    page.goto('https://web-scraping.dev/products')

    # wait for XPath or CSS selector
    page.wait_for_selector("div.products", timeout=10000)
    page.wait_for_selector("//div[@class='products']", timeout=10000)

    page.screenshot(path="products.png")

Above, we utilize dynamic conditions to wait for selectors before taking Python screenshots using Selenium's expected_conditions and Playwright's wait_for_selector method.
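Note that Selenium's presence_of_element_located only checks that the element exists in the DOM tree. If the element must actually be rendered before capturing, Selenium also ships a visibility-based condition; a minimal sketch:

# wait until the element is visible, not merely present in the DOM
wait.until(expected_conditions.visibility_of_element_located(
    (By.CSS_SELECTOR, "div.products")
))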

Load State

The last wait condition available is the load state. It waits for the browser page to reach a specific state:

  • domcontentloaded: Wait for the full DOM tree to load
  • networkidle: Wait until there are no network connections for at least 500 ms

Waiting for the networkidle state is especially helpful when capturing web page snapshots with multiple images to render. Since these pages are bandwidth-intensive, it's easier to wait for all network calls to finish instead of waiting for specific selectors.

Here's how to utilize the wait_for_load_state method to wait before taking Playwright screenshots:

# ....

with sync_playwright() as p:
    # ....
    page.goto('https://web-scraping.dev/products')

    # wait for load state
    page.wait_for_load_state("domcontentloaded") # DOM tree to load
    page.wait_for_load_state("networkidle") # network to be idle

    page.screenshot(path="products.png")

Note that Selenium doesn't expose built-in methods for waiting on the driver's load state, but similar behavior can be implemented using custom JavaScript execution, as shown below.
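For example, here's a minimal sketch that polls document.readyState through Selenium to approximate waiting for the page load state (the helper name is our own):

from selenium.webdriver.support.ui import WebDriverWait
# ....

def wait_for_page_load(driver, timeout=10):
    # poll document.readyState until the browser reports the page as fully loaded
    WebDriverWait(driver, timeout).until(
        lambda d: d.execute_script("return document.readyState") == "complete"
    )

driver.get("https://web-scraping.dev/products")
wait_for_page_load(driver)
driver.save_screenshot("products.png")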

Emulation

Emulation enables customizing the headless browser configuration to simulate common browser types and user preferences. These settings are reflected in the web page screenshots accordingly.

For instance, by emulating a specific phone browser, the website screenshot appears as if it was captured on an actual phone.

Viewport

Viewport settings represent the browser device's resolution through width and height dimensions. Here's how to change the screenshot viewport in Python.

Selenium

# ....

# set the viewport dimensions (width x height)
driver.set_window_size(1920, 1080)

driver.get("https://web-scraping.dev/products")

driver.save_screenshot("products.png")

Playwright

# ....

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)

    context = browser.new_context(
        viewport={"width": 1920, "height": 1080}, # set viewport dimensions
        device_scale_factor=2, # increase the pixel ratio
    )

    page = context.new_page()
    page.goto("https://web-scraping.dev/products")

    page.screenshot(path="products.png")

Here, we use Selenium's set_window_size method to set the browser window size, which determines the screenshot viewport. As for Playwright, we define a browser context that sets the viewport and increases the pixel ratio through the device_scale_factor property for a higher-quality capture.
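Keep in mind that set_window_size controls the outer browser window rather than the exact rendering viewport. If you need precise dimensions or a higher pixel ratio in Selenium with Chrome, the DevTools protocol offers an override; here's a hedged sketch:

# set an exact viewport and pixel ratio through the Chrome DevTools protocol
driver.execute_cdp_cmd("Emulation.setDeviceMetricsOverride", {
    "width": 1920,
    "height": 1080,
    "deviceScaleFactor": 2,  # roughly equivalent to Playwright's device_scale_factor
    "mobile": False,
})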

Playwright provides a wide range of device presets to emulate multiple browsers and operating systems, enabling further Playwright screenshot customization:

# ....

with sync_playwright() as p:
    iphone_14 = p.devices['iPhone 14 Pro Max']
    browser = p.webkit.launch(headless=False)
    context = browser.new_context(
        **iphone_14,
    )

    # open a browser tab with iPhone 14 Pro Max's device profile
    page = context.new_page()

    # ....

The above Python script selects a device profile to automatically define its settings, including the User-Agent, screen viewport, scale factor, and browser type. For the full list of available device profiles, refer to the official device registry.
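For completeness, here's a minimal end-to-end sketch (our own, building on the snippet above) that captures the products page with the iPhone 14 Pro Max profile:

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    iphone_14 = p.devices['iPhone 14 Pro Max']
    browser = p.webkit.launch(headless=True)
    context = browser.new_context(**iphone_14)
    page = context.new_page()

    page.goto("https://web-scraping.dev/products")
    # the capture inherits the device's viewport, scale factor, and User-Agent
    page.screenshot(path="products-iphone.png")
    browser.close()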

Locale and Timezone

Screenshots of websites with localization features can look different depending on the locale language and timezone settings used. Hence, correctly setting these values ensures the expected rendering.

Selenium

from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.options import Options

driver_manager = ChromeService(ChromeDriverManager().install())
options = Options()
# set locale
options.add_argument("--lang=fr-FR")

driver = webdriver.Chrome(service=driver_manager, options=options)

# set timezone using devtools protocol
timezone = {'timezoneId': 'Europe/Paris'}
driver.execute_cdp_cmd('Emulation.setTimezoneOverride', timezone)

driver.get("https://webbrowsertools.com/timezone/")
# ....

Playwright

# ....

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    context = browser.new_context(
        locale='fr-FR',
        timezone_id='Europe/Paris',
    )
    page = context.new_page()
    page.goto("https://webbrowsertools.com/timezone/")
    # ....

In this Python code, we set the browser's localization preferences through the locale and timezone settings. However, other factors can affect the localization profile used. For the full details, refer to our dedicated guide on web scraping localization.

Geolocation

Python screenshots can also be affected by the browser's automatic location detection. Here's how to override it with latitude and longitude values.

Selenium

# ....
driver_manager = ChromeService(ChromeDriverManager().install())
driver = webdriver.Chrome(service=driver_manager)

geolocation = dict(
    {
        "latitude": 37.17634,
        "longitude": -3.58821,
        "accuracy": 100
    }
)

# set geolocation using devtools protocol
driver.execute_cdp_cmd("Emulation.setGeolocationOverride", geolocation)

Playwright

# ....
with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    context = browser.new_context(
        geolocation={"latitude": 37.17634, "longitude": -3.58821},
        permissions=["geolocation"]
    )
    page = context.new_page()

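Neither snippet above navigates anywhere or captures an image yet. A minimal continuation inside the same Playwright context might look like the sketch below (the target URL is only an example):

    # .... continue inside the `with sync_playwright()` block
    # any page that reads navigator.geolocation will see the override
    page.goto("https://web-scraping.dev/products")
    page.screenshot(path="geo-screenshot.png")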
Dark Mode

Taking webpage screenshots in dark mode is quite popular. To do so, we can change the browser's default color scheme preference.

Selenium

# ....

options = Options()
options.add_argument('--force-dark-mode')

driver_manager = ChromeService(ChromeDriverManager().install())
driver = webdriver.Chrome(service=driver_manager, options=options)

driver.get("https://reddit.com/") # will open in dark mode

Playwright

# ....

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    context = browser.new_context(
        color_scheme='dark'
    )
    page = context.new_page()
    page.goto("https://reddit.com/") # will open in dark mode

The above code sets the default browser theme to dark mode, enabling dark-mode web screenshots accordingly. However, it has no effect on websites without native dark-theme support.
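As an aside, Selenium with Chrome can also signal the dark preference through DevTools media emulation, which mirrors Playwright's color_scheme option. This is a hedged sketch and not part of the original steps:

# emulate the prefers-color-scheme media feature via the DevTools protocol
driver.execute_cdp_cmd("Emulation.setEmulatedMedia", {
    "features": [{"name": "prefers-color-scheme", "value": "dark"}]
})
driver.get("https://web-scraping.dev/products")
driver.save_screenshot("dark_screenshot.png")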

Pro Tip: Force Dark Mode

To force dark-mode screenshots across all websites, we can use Chrome flags. To do this, start by retrieving the required argument using the below steps:

  • Open the available chrome flags from the address chrome://flags/
  • Search for the enable-force-dark flag and enable it with selective inversion of everything
  • Relaunch the browser
  • Go to chrome://version/ and copy the created flag argument from the command line property

After retrieving the flag argument, add it to the browser context to force dark website screenshots in Python.

Selenium

# ....

driver_manager = ChromeService(ChromeDriverManager().install())
options = Options()

options.add_argument('--enable-features=WebContentsForceDark:inversion_method/cielab_based')
driver = webdriver.Chrome(service=driver_manager, options=options)

driver.get("https://web-scraping.dev/products")
driver.save_screenshot('dark_screenshot.png')

Playwright

# ....

with sync_playwright() as p:
    browser = p.chromium.launch(
        headless=False,
        args=[
            '--enable-features=WebContentsForceDark:inversion_method/cielab_based'
        ]
    )
    context = browser.new_context()
    page = context.new_page()

    page.goto("https://web-scraping.dev/products")
    page.screenshot(path="dark_screenshot.png")

Here's what the retrieved dark-mode Python screenshot looks like:

[Screenshot: web-scraping.dev/products rendered with forced dark mode]

Selection Targeting

Lastly, let's explore taking Python screenshots through area selection, which enables targeting specific parts of the page.

Full Page

Taking full-page screenshots is an extremely popular use case: it captures the page across its whole vertical height.

Full-page screenshots are often misunderstood. Hence, it's important to differentiate between two distinct concepts:

  • Screenshot viewport: the captured image dimensions (width and height).
  • Browser scrolling: whether the driver has scrolled down to load more content.

A headless browser may scroll down while its screenshot height isn't updated to the new page height, or vice versa. In either case, the retrieved snapshot doesn't look as expected.

Here's how to take scrolling screenshots with Selenium and Playwright.

Selenium

# ....

def scroll(driver):
    _prev_height = -1
    _max_scrolls = 100
    _scroll_count = 0
    while _scroll_count < _max_scrolls:
        # execute JavaScript to scroll to the bottom of the page
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        # wait for new content to load (change this value as needed)
        time.sleep(1)
        # check whether the scroll height changed - means more pages are there
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == _prev_height:
            break
        _prev_height = new_height
        _scroll_count += 1

driver_manager = ChromeService(ChromeDriverManager().install())
options = Options()
options.add_argument("--headless") # ⚠️ headless mode is required
driver = webdriver.Chrome(service=driver_manager, options=options)

# request the target page and scroll down
driver.get("https://web-scraping.dev/testimonials")
scroll(driver)

# retrieve the new page height and update the viewport
new_height = driver.execute_script("return document.body.scrollHeight")
driver.set_window_size(1920, new_height)

# screenshot the main page content (body)
driver.find_element(By.TAG_NAME, "body").screenshot("full-page-screenshot.png")

Playwright

# ....

def scroll(page):
    _prev_height = -1
    _max_scrolls = 100
    _scroll_count = 0
    while _scroll_count < _max_scrolls:
        # execute JavaScript to scroll to the bottom of the page
        page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
        # wait for new content to load
        page.wait_for_timeout(1000)
        # check whether the scroll height changed - means more pages are there
        new_height = page.evaluate("document.body.scrollHeight")
        if new_height == _prev_height:
            break
        _prev_height = new_height
        _scroll_count += 1

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    context = browser.new_context(
        viewport={"width": 1920, "height": 1080}
    )
    page = context.new_page()

    # request the target page and scroll down
    page.goto("https://web-scraping.dev/testimonials")
    scroll(page)

    # automatically capture the full page
    page.screenshot(path="full-page-screenshot.png", full_page=True)

Since Selenium (with Chrome) doesn't provide automatic full-page screenshot capture, we use additional steps:

  • Get the new page height after scrolling and use it to update the viewport.
  • Find the body element of the page and target it with a screenshot.
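As a side note, if you drive Firefox instead of Chrome, Selenium 4 exposes a full-page capture helper directly, which avoids the manual viewport resize. A minimal, Firefox-only sketch:

from selenium import webdriver

firefox = webdriver.Firefox()
firefox.get("https://web-scraping.dev/testimonials")
firefox.get_full_page_screenshot_as_file("full-page-screenshot.png")
firefox.quit()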

Selectors

So far, we have been taking web page screenshots of the entire screen viewport. However, headless browsers also allow targeting a specific area by screenshotting elements through their selectors:

Selenium

# ....

driver.set_window_size(1920, 1080)
driver.get("https://web-scraping.dev/product/3")

# wait for the target element to be visible
wait = WebDriverWait(driver, 10)
wait.until(expected_conditions.presence_of_element_located(
    (By.CSS_SELECTOR, "div.row.product-data")
))

element = driver.find_element(By.CSS_SELECTOR, 'div.row.product-data')

# take web page screenshot of the specific element
element.screenshot('product-data.png')

Playwright

# ....

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    context = browser.new_context(
        viewport={"width": 1920, "height": 1080}
    )
    page = context.new_page()

    page.goto('https://web-scraping.dev/product/3')

    # wait for the target element to be visible
    page.wait_for_selector('div.row.product-data')

    # take web page screenshot of the specific element
    page.locator('div.row.product-data').screenshot(path="product-data.png")

In the above code, we start by waiting for the desired element to appear in the HTML. Then, we select it and capture it specifically. Here's what the retrieved Python screenshot looks like:

[Screenshot: the div.row.product-data element captured on web-scraping.dev/product/3]

Coordinates

Furthermore, we can customize webpage Python screenshots using coordinate values. In other words, the web page gets cropped into an image using four attributes:

  • X-coordinate of the clip area's horizontal position (left to right)
  • Y-coordinate of the clip area's vertical position (top to bottom)
  • Width and height dimensions

Here's how to take clipped Playwright and Selenium screenshots:

Selenium

from PIL import Image # pip install pillow
from io import BytesIO

# ....
driver.get("https://web-scraping.dev/product/3")

wait = WebDriverWait(driver, 10)
wait.until(expected_conditions.presence_of_element_located(
    (By.CSS_SELECTOR, "div.row.product-data")
))
element = driver.find_element(By.CSS_SELECTOR, 'div.row.product-data')

# automatically retrieve the coordinate values of selected selector
location = element.location
size = element.size
coordinates = {
    "x": location['x'],
    "y": location['y'],
    "width": size['width'],
    "height": size['height']
}
print(coordinates)
# {'x': 320.5, 'y': 215.59375, 'width': 1262, 'height': 501.828125}

# capture full driver screenshot
screenshot_bytes = driver.get_screenshot_as_png()

# clip the screenshot and save it
img = Image.open(BytesIO(screenshot_bytes))
clip_box = (coordinates['x'], coordinates['y'], coordinates['x'] + coordinates['width'], coordinates['y'] + coordinates['height'])
cropped_img = img.crop(clip_box)
cropped_img.save('clipped-screenshot.png')

Playwright

# ....

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    context = browser.new_context()
    page = context.new_page()

    page.goto('https://web-scraping.dev/product/3')
    page.wait_for_selector('div.row.product-data')

    element = page.query_selector("div.row.product-data")

    # automatically retrieve the coordinate values of selected selector
    coordinates = element.bounding_box()
    print(coordinates)
    # {'x': 320.5, 'y': 215.59375, 'width': 1262, 'height': 501.828125}

    # capture the screenshot with clipping
    page.screenshot(path="clipped-screenshot.png", clip=coordinates)

We use Playwright's built-in clip option to automatically crop the captured screenshot. As for Selenium, we use Pillow to manually crop the full web page snapshot.

Banner Blocking

Website pop-up banners prevent taking clear screenshots. One example is the familiar "Accept Cookies" banner on web-scraping.dev:

[Screenshot: the "Accept Cookies" banner displayed on web-scraping.dev]

The above banner is controlled through cookies. If we click "accept", a cookie value is saved in the browser to store our preference and prevent the banner from displaying again.

If we inspect the browser developer tools, we'll find the cookiesAccepted cookie set to true. So, to block cookie banners while taking Python screenshots, we'll set this cookie before navigating to the target web page.

Selenium

# ....

driver.get("https://web-scraping.dev")

# add the cookie responsible for blocking screenshot banners
driver.add_cookie({'name': 'cookiesAccepted', 'value': 'true', 'domain': 'web-scraping.dev'})

driver.get("https://web-scraping.dev/login?cookies=")
driver.save_screenshot('blocked-banner-screenshot.png')

Playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    context = browser.new_context()
    # add the cookie responsible for blocking screenshot banners
    context.add_cookies(
        [{'name': 'cookiesAccepted', 'value': 'true', 'domain': 'web-scraping.dev', 'path': '/'}]
    )
    page = context.new_page()

    page.goto('https://web-scraping.dev/login?cookies=')
    page.screenshot(path='blocked-banner-screenshot.png')

For further details on using cookies, refer to our dedicated guide.

Powering Up With ScrapFly

So far, we have explored taking website screenshots using a basic headless browser configuration. However, modern websites prevent screenshot automation using anti-bot measures. Moreover, maintaining headless web browsers can be complex and time-consuming.

ScrapFly is a screenshot API that enables taking web page captures at scale by providing:

  • Antibot protection bypass - screenshot web pages on protected domains without being blocked by antibot services like Cloudflare.
  • Built-in rotating proxies - prevent the IP address blocking caused by rate-limit rules.
  • Geolocation targeting - access location-restricted domains through an IP address pool covering 175+ countries.
  • JavaScript execution - take full advantage of headless browser automation through scrolling, navigating, clicking buttons, filling out forms, etc.
  • Full screenshot customization - control the webpage capture behavior by setting its file type, resolution, color mode, viewport, and banner settings.
  • Python and TypeScript SDKs.

ScrapFly abstracts away all the required engineering efforts!

Here's how to take Python screenshots using ScrapFly's screenshot API. It's as simple as sending an API request:

from pathlib import Path
import urllib.parse
import requests

base_url = 'https://api.scrapfly.io/screenshot?'

params = {
    'key': 'Your ScrapFly API key',
    'url': 'https://web-scraping.dev/products', # web page URL to screenshot
    'format': 'png', # screenshot format (file extension)
    'capture': 'fullpage', # area to capture (specific element, fullpage, viewport)
    'resolution': '1920x1080', # screen resolution
    'country': 'us', # proxy country
    'rendering_wait': 5000, # time to wait in milliseconds before capturing
    'wait_for_selector': 'div.products-wrap', # selector to wait on the web page
    'options': [
        'dark_mode', # use the dark mode
        'block_banners', # block pop up banners
        'print_media_format' # emulate media printing format
    ],
    'auto_scroll': True # automatically scroll down the page
}

# Convert the list of options to a comma-separated string
params['options'] = ','.join(params['options'])
query_string = urllib.parse.urlencode(params)
full_url = base_url + query_string

response = requests.get(full_url)
image_bytes = response.content
# save to disk
Path("screenshot.png").write_bytes(image_bytes) 


FAQ

To wrap up this guide on taking website screenshots with Python Selenium and Playwright, let's have a look at some frequently asked questions.

Are there alternatives to headless browsers for taking Python screenshots?

Yes, screenshot APIs are great alternatives. They manage headless browsers under the hood, enabling website snapshots through simple HTTP requests. For further details, refer to our guide on the best screenshot API.

How to take screenshots in NodeJS?

Puppeteer is a popular headless browser automation library for NodeJS that allows web page captures using the page.screenshot method. For more, refer to our guide on taking screenshots with Puppeteer.

How to take full webpage screenshots in Python?

To take a full-page screenshot, scroll down the page using Selenium or Playwright if needed. Then, in Playwright, use the full_page argument: page.screenshot(path="...", full_page=True) to automatically capture the whole page height.

As for Selenium, manually update the browser's viewport height after scrolling to cover the entire vertical height.

Summary

In this guide, we explained how to take Playwright and Selenium screenshots in Python. We started by covering installation and basic usage.

We went through a step-by-step guide on using advanced Selenium and Playwright features to capture customized screenshots:

  • Waiting for fixed timeouts, selectors, and load states
  • Emulating browser preferences: viewport, geolocation, theme, locale, and timezone
  • Full page capturing, selection targeting, and banner blocking
