
Example of implementing simple crawler function in python

WBOY · Original · 2016-12-05 13:27:12

When browsing the web, we often come across pictures we would like to save, whether as desktop wallpaper or as design material.

The most common method is to right-click and choose Save As. However, some pictures offer no Save As option when you right-click. Another way is to capture them with a screenshot tool, but that reduces image quality. There is a better way: right-click and view the page source code.

We can use Python to implement a simple crawler that extracts the content we want from a page and saves it locally. Let's look at how.

1. Get the entire page data

First, we fetch the full HTML of the page that contains the images we want to download. Save the following as getjpg.py:

#coding=utf-8
import urllib
def getHtml(url):
  # urllib.urlopen() (Python 2) opens the URL; read() pulls down the page.
  page = urllib.urlopen(url)
  html = page.read()
  return html

html = getHtml("http://tieba.baidu.com/p/2738151262")
print html

The urllib module provides an interface for fetching web data; it lets us read data over www and ftp much like reading local files. First, we define a getHtml() function:

urllib.urlopen() opens a URL address.

read() reads the data at that URL. Passing a URL to the getHtml() function downloads the entire page; running the program prints the whole page source.
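As a side note, the article's code targets Python 2, where urlopen lived directly in urllib. In Python 3 the same fetch uses urllib.request, and read() returns bytes that must be decoded. A minimal sketch of the equivalent (the data: URL here is only an offline stand-in for the Tieba page):

```python
from urllib.request import urlopen

def get_html(url):
    # In Python 3, urllib.urlopen became urllib.request.urlopen,
    # and read() returns bytes, so decode to get a str.
    with urlopen(url) as page:
        return page.read().decode('utf-8')

# A data: URL stands in for the real page so this runs without network access.
html = get_html('data:text/html;charset=utf-8,<html>hello</html>')
print(html)  # → <html>hello</html>
```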

2. Filter the data you want on the page
Python provides very powerful regular expression support through the re module; a little familiarity with regular expressions helps here.

Suppose we find a few beautiful wallpapers in a Baidu Tieba thread. Inspecting the page with the browser's developer tools, we find image addresses of the form: src="http://imgsrc.baidu.com/forum...jpg" pic_ext="jpeg"

Modify the code as follows:

import re
import urllib

def getHtml(url):
  page = urllib.urlopen(url)
  html = page.read()
  return html

def getImg(html):
  # The capture group (.+?\.jpg) keeps just the image URL.
  reg = r'src="(.+?\.jpg)" pic_ext'
  imgre = re.compile(reg)
  imglist = re.findall(imgre,html)
  return imglist
  
html = getHtml("http://tieba.baidu.com/p/2460150866")
print getImg(html)

We have added a getImg() function, which filters the required image links out of the fetched page. The re module provides regular expression support:

re.compile() compiles a regular expression string into a pattern object.

re.findall() returns all matches of the pattern (here imgre) in html; because the pattern contains a capture group, it returns just the captured URLs.

Running the script prints the URL addresses of all the images found on the page.
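To see the filtering step in isolation, here is a small self-contained sketch (Python 3) run against a hand-written snippet of the markup described above. The sample HTML is an assumption about the page's shape, not a real capture:

```python
import re

# Hand-written sample of the Tieba markup described above (assumed shape).
html = ('<img src="http://imgsrc.baidu.com/forum/abc.jpg" pic_ext="jpeg">'
        '<img src="http://imgsrc.baidu.com/forum/def.jpg" pic_ext="jpeg">')

def get_img(html):
    # The capture group (.+?\.jpg) keeps only the URL, dropping the
    # surrounding src="..." and pic_ext attributes from each match.
    reg = re.compile(r'src="(.+?\.jpg)" pic_ext')
    return reg.findall(html)

print(get_img(html))
# → ['http://imgsrc.baidu.com/forum/abc.jpg', 'http://imgsrc.baidu.com/forum/def.jpg']
```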

3. Save the filtered data locally

Traverse the filtered image addresses with a for loop and save each image locally. The code is as follows:

#coding=utf-8
import urllib
import re

def getHtml(url):
  page = urllib.urlopen(url)
  html = page.read()
  return html

def getImg(html):
  reg = r'src="(.+?\.jpg)" pic_ext'
  imgre = re.compile(reg)
  imglist = re.findall(imgre,html)
  x = 0
  for imgurl in imglist:
    # urlretrieve downloads the remote file straight to disk,
    # naming it 0.jpg, 1.jpg, ... in the working directory.
    urllib.urlretrieve(imgurl,'%s.jpg' % x)
    x+=1


html = getHtml("http://tieba.baidu.com/p/2460150866")

getImg(html)

The core here is the urllib.urlretrieve() method, which downloads remote data directly to a local file.

The for loop walks over the image links obtained. To keep the filenames uniform, each image is renamed as it is saved: the counter x is incremented by 1 for each file, and files are saved to the program's working directory by default.
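The naming logic can be illustrated without any network access. The sketch below (Python 3) swaps the download for placeholder bytes and keeps only the numbered-save loop; save_images and the fake data are illustrative names, not part of the article's code:

```python
import os
import tempfile

def save_images(img_data, out_dir):
    """Save each bytes item as 0.jpg, 1.jpg, ... in out_dir."""
    paths = []
    # enumerate replaces the manual x = 0 / x += 1 counter.
    for x, data in enumerate(img_data):
        path = os.path.join(out_dir, '%s.jpg' % x)
        with open(path, 'wb') as f:
            f.write(data)
        paths.append(path)
    return paths

# Placeholder bytes stand in for downloaded image data.
fake_images = [b'\xff\xd8fake1', b'\xff\xd8fake2']
out_dir = tempfile.mkdtemp()
saved = save_images(fake_images, out_dir)
print([os.path.basename(p) for p in saved])  # → ['0.jpg', '1.jpg']
```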

When the program finishes, you will see the downloaded images in that directory.

Thanks for reading, I hope it can help everyone, thank you for your support of this site!
