
Crawl images from the website and automatically download them locally

WBOY | Original | 2023-06-13 13:28:50

In the Internet era, downloading pictures from galleries, social platforms, and other websites has become routine. Downloading a handful of images by hand is no trouble, but downloading a large number of them manually quickly becomes time-consuming and laborious. That is where automation comes in: we can script the download of images.

This article introduces how to use Python crawler techniques to automatically download images from a website to the local computer. The process has two steps: first, use Python's requests or selenium library to grab the image links on a page; second, download the images to disk with Python's urllib or requests library based on the links obtained.

Step 1: Get the image links

  1. Use the requests library to crawl the links

Let's first look at how to grab image links with the requests library. The sample code is as follows:

import requests
from bs4 import BeautifulSoup

url = 'http://example.com'
response = requests.get(url)
response.raise_for_status()  # stop early if the page could not be fetched
soup = BeautifulSoup(response.content, 'html.parser')

# Collect every <img> tag, then pull out its src attribute.
img_tags = soup.find_all('img')

urls = [img['src'] for img in img_tags if img.get('src')]

Taking the example site as a stand-in, we first fetch the page content with the requests library and parse the HTML with the BeautifulSoup library. Then soup.find_all('img') returns all img tags in the document, and a list comprehension extracts the src attribute of each tag.
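One practical wrinkle: the src values extracted this way are often relative paths, which cannot be downloaded directly. A small sketch of resolving them against the page URL with the standard library's urljoin (the base_url and src values here are hypothetical examples, not taken from a real page):

```python
from urllib.parse import urljoin

# Hypothetical page URL and src values a parse might return.
base_url = 'http://example.com/gallery/index.html'
srcs = ['/static/a.png', 'images/b.jpg', 'http://cdn.example.com/c.gif']

# urljoin resolves each src against the page URL, so relative
# paths become absolute, downloadable links; absolute URLs pass through.
urls = [urljoin(base_url, src) for src in srcs]
print(urls)
```

Applying urljoin unconditionally is safe because it leaves already-absolute URLs unchanged.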

  2. Use the selenium library to crawl links

Another way to get image links is to use the selenium library, which drives a real browser and therefore also sees images inserted by JavaScript. The sample code is as follows:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from time import sleep

url = 'http://example.com'

options = Options()
options.add_argument('--headless')  # run Chrome without opening a window

service = Service('/path/to/chromedriver')
driver = webdriver.Chrome(service=service, options=options)
driver.get(url)

sleep(2)  # give scripts on the page time to finish inserting images

# find_elements_by_tag_name was removed in Selenium 4; use find_elements.
img_tags = driver.find_elements(By.TAG_NAME, 'img')

urls = [img.get_attribute('src') for img in img_tags]
driver.quit()

ChromeDriver is used here; replace '/path/to/chromedriver' in the sample code with the actual path of ChromeDriver on your computer. The --headless option runs Chrome without a visible browser window, which avoids manual interaction and speeds things up. We then use the webdriver module of the selenium library to create a Chrome browser instance and open the page with driver.get(url). After a short wait, driver.find_elements(By.TAG_NAME, 'img') (the Selenium 4 replacement for the removed find_elements_by_tag_name) returns all img tags, and we read the src attribute of each one.
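Pages rendered with JavaScript often yield src lists containing missing values, inline data: URIs used as lazy-loading placeholders, and duplicates. A small hypothetical helper (not part of selenium) to clean such a list before downloading:

```python
def clean_image_urls(urls):
    """Drop missing srcs, inline data: URIs, and duplicates (order kept)."""
    seen = set()
    out = []
    for u in urls:
        if not u or u.startswith('data:'):
            continue  # nothing downloadable here
        if u not in seen:
            seen.add(u)
            out.append(u)
    return out

sample = ['http://example.com/a.png', None, 'data:image/png;base64,AAA',
          'http://example.com/a.png', 'http://example.com/b.jpg']
print(clean_image_urls(sample))
# -> ['http://example.com/a.png', 'http://example.com/b.jpg']
```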

Step 2: Download images

There are many ways to download the images. Here we use the standard library's urllib or the third-party requests library. The sample code is as follows:

import urllib.request

for url in urls:
    filename = url.split('/')[-1]  # use the last path segment as the filename
    urllib.request.urlretrieve(url, filename)

Here, urllib.request downloads each image from the network to local disk. url.split('/')[-1] takes the last segment of the URL as the file name and assigns it to the variable filename; urllib.request.urlretrieve(url, filename) then saves the image locally. Note that if a URL contains Chinese (or other non-ASCII) characters, it must be percent-encoded first.
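The encoding step can be done with the standard library. A sketch of a helper (encode_url is a name chosen here, not a library function) that percent-encodes only the path portion of a URL:

```python
from urllib.parse import quote, urlsplit, urlunsplit

def encode_url(url):
    """Percent-encode non-ASCII characters in the path portion of a URL."""
    parts = urlsplit(url)
    # quote() leaves '/' intact by default, so only unsafe characters change.
    return urlunsplit((parts.scheme, parts.netloc, quote(parts.path),
                       parts.query, parts.fragment))

print(encode_url('http://example.com/图片/photo.jpg'))
# -> http://example.com/%E5%9B%BE%E7%89%87/photo.jpg
```

Splitting the URL first matters: quoting the whole string would also encode the :// of the scheme.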

For comparison, here is how to download the images with the requests library. The sample code is as follows:

import requests

for url in urls:
    filename = url.split('/')[-1]
    response = requests.get(url)
    response.raise_for_status()  # make sure the download succeeded
    with open(filename, 'wb') as f:  # 'wb': write the raw image bytes
        f.write(response.content)

Here, the requests library fetches the binary content of each image, which is then written to a file. Because image data is binary, the file must be opened in 'wb' mode; the with open(filename, 'wb') as f: form also guarantees each file is closed correctly.
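One caveat with url.split('/')[-1] in both download loops: if the URL carries a query string (e.g. ?size=large), it ends up in the file name. A hypothetical helper that derives a clean name from the path alone:

```python
import os
from urllib.parse import urlsplit

def filename_from_url(url, default='image.bin'):
    """Derive a local filename from a URL, ignoring query and fragment."""
    path = urlsplit(url).path      # strips ?query and #fragment
    name = os.path.basename(path)  # last path segment
    return name or default         # fall back if the path ends in '/'

print(filename_from_url('http://example.com/img/cat.png?size=large'))
# -> cat.png
```

The default parameter covers URLs whose path ends in a slash, where there is no segment to use.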

Summary

In summary, Python crawler techniques let us grab images from a website and download them locally with little effort. This kind of automation improves efficiency and is especially helpful for work that involves processing a large number of images. Keep in mind, however, that crawling images must comply with the relevant laws and regulations and respect the website's copyright: do not crawl a site's images without its official authorization or permission.

