This continues from the previous article, in which the crawler scheduler was written. The scheduler is the "brain" of the whole crawler program; you could also call it the command center. What we have to do now is write the other components the scheduler uses. The first is the URL manager. Since it acts as a manager, it must distinguish the URLs waiting to be crawled from the URLs that have already been crawled, otherwise pages would be crawled repeatedly. This tutorial uses two set collections to hold the two kinds of URLs temporarily, that is, in memory; after all, the amount of crawled data is relatively small. Of course, they could also be stored elsewhere, such as a cache or a relational database. The URL manager appears in the scheduler in five places:
The first time is to create the UrlManager object in the scheduler's initialization function.
The second time is to call the add_new_url method to add the initial URL to the to-be-crawled collection.
The third time is to check, during crawling, whether there are still URLs waiting to be crawled.
The fourth time is to take a URL to be crawled out of the collection.
The fifth time is to add the batch of new URLs parsed from a page back into the to-be-crawled collection.
A sketch of how these five calls fit together in the scheduler follows below.
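To make those five appearances concrete, here is a minimal sketch of a scheduler loop using the UrlManager we are about to write. The SpiderMain name and the downloader/parser attributes are illustrative placeholders, not necessarily the original article's exact code:

class SpiderMain(object):
    def __init__(self):
        self.urls = UrlManager()              # 1st: create the manager
        # self.downloader and self.parser are assumed to exist;
        # the downloader is written later in this article

    def craw(self, root_url):
        self.urls.add_new_url(root_url)       # 2nd: seed the to-be-crawled set
        while self.urls.has_new_url():        # 3rd: anything left to crawl?
            new_url = self.urls.get_new_url()           # 4th: take one URL out
            html = self.downloader.download(new_url)
            new_urls, data = self.parser.parse(new_url, html)
            self.urls.add_new_urls(new_urls)            # 5th: add freshly parsed URLs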
What we have to do next is implement these functions in code:
class UrlManager(object):
    """docstring for UrlManager"""
    def __init__(self):
        self.new_urls = set()
        self.old_urls = set()

    # Add a single new URL to the manager
    def add_new_url(self, url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    # Add a batch of URLs parsed from a crawled page to the manager
    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    # Check whether there is a new URL to crawl
    def has_new_url(self):
        return len(self.new_urls) != 0

    # Take a new URL out of the manager
    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
Okay, with that, the URL manager is done!
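A quick check of the manager's behavior (the example URL is just a placeholder):

manager = UrlManager()
manager.add_new_url('http://baike.baidu.com/item/Python')
print(manager.has_new_url())    # True
url = manager.get_new_url()     # the URL moves from new_urls to old_urls
manager.add_new_url(url)        # silently ignored: it is already in old_urls
print(manager.has_new_url())    # False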
The next step is the URL downloader. Its job is very simple: fetch the page the program wants to visit and hand its content back.
The downloader appears only twice in the scheduler:
The first time is when it is created during initialization.
The second time is immediately after a URL is obtained, when it is called to download the page.
For the downloader, the original tutorial uses the urllib library, which I find a bit cumbersome, so I switched to a more convenient library: requests. It hides many of the low-level details and lets us fetch the pages we want directly, and it is very simple to use.
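For comparison, here is a rough sketch of the same download written with urllib (assuming Python 3's urllib.request; the original tutorial may have targeted Python 2's urllib2):

from urllib.request import urlopen

def download_with_urllib(url):
    # more manual than requests: open, check the status code, read, then decode
    response = urlopen(url, timeout=3)
    if response.getcode() == 200:
        return response.read().decode('utf-8')

And here is the downloader rewritten with requests: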
import requests

class HtmlDownloader(object):
    """docstring for HtmlDownloader"""
    def download(self, url):
        if url is None:
            return
        response = requests.get(url, timeout=0.1)
        response.encoding = 'utf-8'
        if response.status_code == requests.codes.ok:
            return response.text
        else:
            return
Let me briefly talk about this code:
a. First, import the requests library. Since it is a third-party library, you need to install it yourself. Enter at the command line: pip install requests
b. Then write the downloader class. This class has only one method, download. It first takes the URL you pass in and checks whether it is None.
c. Then call the get method of requests, passing it two arguments: the URL and a timeout.
The timeout is something I added myself; it is the access timeout. Without a timeout, the program can freeze, meaning it will wait for the page's response forever without ever throwing an exception. With a timeout set, requests raises an exception once the limit is exceeded; see the error-handling sketch after this list.
d. Then set the encoding of the returned response. The Baidu Encyclopedia pages we crawl are utf-8, so it is best to set it here. Although requests will guess the encoding intelligently, it is safer to set it manually.
e. Then check whether the page responded normally. codes.ok here is actually 200, meaning the web page responded normally; writing response.status_code == 200 directly works just as well.
f. Finally, return the full content of the page. text here is a string containing all of the page's code (html, css, js).
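As promised above, here is a sketch of how the download call could guard against the timeout; safe_download is a hypothetical helper, not part of the original tutorial. Note that the 0.1-second timeout in the code above is quite strict for real pages; a few seconds is a more forgiving choice.

import requests

def safe_download(url, timeout=3):
    # requests raises requests.exceptions.Timeout when the server
    # does not answer within the given limit
    try:
        response = requests.get(url, timeout=timeout)
        response.encoding = 'utf-8'
        if response.status_code == requests.codes.ok:
            return response.text
    except requests.exceptions.Timeout:
        print('request timed out: %s' % url)
    except requests.exceptions.RequestException as e:
        print('request failed: %s (%s)' % (url, e))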