
Simple example analysis of how to implement crawler images in Python

黄舟 (Original)
2017-06-04 10:14:55

This article introduces a simple Python image crawler. Readers who need it can refer to the code below.

A simple Python image crawler

I often browse Zhihu, and sometimes I want to save all the pictures under a question at once; hence this program. It is a very simple image crawler and can only fetch the images that have already been loaded onto the page. Since I am not very familiar with this area, I will just say a few words and record the code without explaining too much. If you are interested, you can use it directly; I have tested it on sites such as Zhihu.

The previous article showed how to open an image from a URL. The purpose is to first see what the crawled images look like before filtering and saving them.
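As a minimal sketch of that idea: `Image.open` accepts any file-like object, so bytes fetched from a URL (e.g. `BytesIO(response.content)`) can be opened exactly like a file. The test image below is generated locally rather than downloaded, just so the example runs offline:

```python
from io import BytesIO
from PIL import Image

# Build a small image in memory instead of fetching one over the network.
buf = BytesIO()
Image.new("RGB", (120, 80), color=(200, 30, 30)).save(buf, format="JPEG")
buf.seek(0)

# Image.open works on any file-like object, so BytesIO(response.content)
# from a real download behaves the same way.
image = Image.open(buf)
print(image.size)  # PIL reports (width, height): (120, 80)
```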

The requests library is used here to fetch the page. Note that a header is needed to disguise the program as a browser when requesting the page; otherwise the request may be rejected by the server. BeautifulSoup is then used to filter out the excess markup and obtain the image addresses. Once the images are fetched, small pictures such as avatars and emoticons are filtered out based on image size. Finally, there are several options for opening or saving the images, including OpenCV, skimage, and PIL.
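The BeautifulSoup filtering step can be sketched on a static snippet (the HTML here is made up for illustration, and mirrors the `startswith("http")` check used in the full program below):

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <img src="https://example.com/a.jpg">
  <img src="/static/avatar.png">
  <img data-src="lazy.jpg">
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
# img tags without a src attribute return None from .get("src"),
# so guard against None before calling startswith.
srcs = [img.get("src") for img in soup.find_all("img")]
http_srcs = [s for s in srcs if s and s.startswith("http")]
print(http_srcs)  # ['https://example.com/a.jpg']
```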

The procedure is as follows:

# -*- coding=utf-8 -*-
import os
from io import BytesIO

import requests as req
from bs4 import BeautifulSoup
from PIL import Image

url = "https://www.zhihu.com/question/37787176"
# Disguise the program as a browser, otherwise the server may reject it.
headers = {'User-Agent' : 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.96 Mobile Safari/537.36'}
response = req.get(url, headers=headers)
content = response.text
#print(content)

soup = BeautifulSoup(content, 'lxml')
images = soup.find_all('img')
print("Found %d images in total" % len(images))

if not os.path.exists("images"):
  os.mkdir("images")

for i in range(len(images)):
  img = images[i]
  print("Processing image %d..." % (i + 1))
  img_src = img.get('src')
  # img.get('src') may return None, so check before startswith.
  if img_src and img_src.startswith("http"):
    ## use PIL
    '''
    print(img_src)
    response = req.get(img_src, headers=headers)
    image = Image.open(BytesIO(response.content))
    w, h = image.size          # PIL reports (width, height)
    print(w, h)
    img_path = "images/" + str(i + 1) + ".jpg"
    if w >= 500 and h >= 500:
      #image.show()
      image.save(img_path)
    '''

    ## use OpenCV
    import numpy as np
    import cv2

    resp = req.get(img_src, headers=headers)
    image = np.asarray(bytearray(resp.content), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    if image is None:          # not a decodable image
      continue
    h, w = image.shape[:2]     # OpenCV arrays are (height, width, channels)
    print(w, h)
    img_path = "images/" + str(i + 1) + ".jpg"
    if w >= 400 and h >= 400:
      cv2.imshow("Image", image)
      cv2.waitKey(3000)
      ##cv2.imwrite(img_path, image)

    ## use skimage
    ## from skimage import io
    ## image = io.imread(img_src)
    ## h, w = image.shape[:2]
    ## print(w, h)
    ## io.imshow(image)
    ## io.show()
    ## img_path = "images/" + str(i + 1) + ".jpg"
    ## if w >= 400 and h >= 400:
    ##   io.imsave(img_path, image)

print("Done!")
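The size check and the save-path naming used above can be pulled out as tiny helpers; the function names below are my own, not from the original program:

```python
import os

def keep_image(width, height, min_side=400):
    # Size filter from the program above: drop small pictures such as
    # avatars and emoticons by requiring both sides to reach min_side.
    return width >= min_side and height >= min_side

def image_path(index, folder="images"):
    # Hypothetical helper mirroring the "images/<n>.jpg" naming scheme.
    return os.path.join(folder, str(index + 1) + ".jpg")

print(keep_image(800, 600))  # True  -> saved
print(keep_image(96, 96))    # False -> skipped as an avatar/emoticon
print(image_path(0))
```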

The above is the detailed content of Simple example analysis of how to implement crawler images in Python. For more information, please follow other related articles on the PHP Chinese website!
