Detailed explanation of IP automatic proxy method using python to crawl soft exam questions
While preparing for an exam, I recently set out to crawl soft exam questions from the web and ran into some problems along the way. This article walks in detail through using an automatic IP proxy in Python to crawl soft exam questions; readers who need it can follow along below.
Preface
The software professional level exam (hereafter, the soft exam) is coming up. To review and prepare more effectively, I planned to grab past soft exam questions from www.rkpass.cn.
First, some background on how I crawled the soft exam questions, pitfalls (keng) included. The crawler can now automatically fetch all the questions for a given module, as shown below:
At present it captures all 30 test question records for the Information Systems Supervisor exam. The results are shown below:
Screenshot of the captured content:
Although the crawler captures the information, the code quality is not high. Take the Information Systems Supervisor capture as an example: because the goal was clear and every parameter was known, and I wanted the test paper data quickly, I skipped exception handling entirely, and then spent most of last night paying for that shortcut.
Back to the topic. I am writing this post because I ran into a new pitfall. As the title suggests, I sent too many requests, so my IP was blocked by the site's anti-crawler mechanism.
But a living person can't let a problem like this choke them to death. Where there's a mountain, cut a road; where there's a river, build a bridge. To get around the blocked IP, the idea of using IP proxies was born.
While crawling, if the request frequency exceeds the threshold the site has set, access will be blocked: a site's anti-crawler mechanism usually identifies crawlers by their IP address.
Crawler developers therefore usually take one of two approaches to this problem:

1. Slow down the crawl to reduce the pressure on the target site. This, however, reduces the amount crawled per unit of time.

2. Use proxy IPs (and similar means) to break through the anti-crawler mechanism and keep crawling at high frequency. This, however, requires multiple stable proxy IPs.
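As a sketch of the first approach (the function names and delay bounds are my own illustrative choices, not part of the original code), the crawler can simply pause for a random interval before every request:

```python
import random
import time

import requests


def random_delay(min_delay=2.0, max_delay=5.0):
    """Pick a random pause length, in seconds, inside the allowed window."""
    return random.uniform(min_delay, max_delay)


def polite_get(url, headers=None, min_delay=2.0, max_delay=5.0):
    """Sleep before each request so the crawl rate stays under the site's threshold."""
    time.sleep(random_delay(min_delay, max_delay))
    return requests.get(url, headers=headers, timeout=10)
```

Randomizing the delay (rather than sleeping a fixed interval) makes the request pattern look less mechanical, at the cost of an even slower crawl.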
Without further ado, here is the code:
```python
# IP addresses are taken from the domestic high-anonymity proxy site: http://www.xicidaili.com/nn/
# Crawling just the first page of IPs is enough for ordinary use
from bs4 import BeautifulSoup
import requests
import random

# Get the IPs listed on the current page
def get_ip_list(url, headers):
    web_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(web_data.text, 'html.parser')
    ips = soup.find_all('tr')
    ip_list = []
    for i in range(1, len(ips)):
        ip_info = ips[i]
        tds = ip_info.find_all('td')
        ip_list.append(tds[1].text + ':' + tds[2].text)
    return ip_list

# Pick one IP at random from the crawled list
def get_random_ip(ip_list):
    proxy_list = []
    for ip in ip_list:
        proxy_list.append('http://' + ip)
    proxy_ip = random.choice(proxy_list)
    proxies = {'http': proxy_ip}
    return proxies

# Home page of the high-anonymity proxy site
url = 'http://www.xicidaili.com/nn/'
# Request headers
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'}
# Counter used to loop over every page of the list
num = 0
# Collect every captured IP in one flat list
ip_array = []
while num < 1537:
    num += 1
    ip_list = get_ip_list(url + str(num), headers=headers)
    ip_array.extend(ip_list)

for ip in ip_array:
    print(ip)

# Pick a random proxy from the collected IPs
# proxies = get_random_ip(ip_array)
# print(proxies)
```
Screenshot of the running results:
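One caveat worth adding: free proxies scraped from public lists are frequently dead or slow, so it pays to check each one before relying on it. A minimal liveness check might look like the sketch below (the test URL and timeout are my own illustrative choices, not part of the original code):

```python
import requests


def is_alive(ip_port, test_url='http://httpbin.org/ip', timeout=3):
    """Return True if a simple GET through the proxy succeeds within the timeout."""
    proxies = {'http': 'http://' + ip_port}
    try:
        requests.get(test_url, proxies=proxies, timeout=timeout)
        return True
    except requests.RequestException:
        return False


# Keep only the proxies that actually answer, e.g.:
# live_ips = [ip for ip in ip_array if is_alive(ip)]
```

Filtering the list up front avoids wasting each crawl request on a proxy that will just time out.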
In this way, each crawler request can be sent from a randomly chosen proxy IP, which effectively sidesteps the simple "block a fixed IP" strategy used by anti-crawler mechanisms.
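To make that concrete, here is a hedged sketch of wiring the helper into an actual request. The proxy entries and `target_url` below are placeholders of my own, not live values from the article:

```python
import random


def get_random_ip(ip_list):
    """Same helper as above: wrap a random 'ip:port' entry in requests' proxies format."""
    proxy_ip = 'http://' + random.choice(ip_list)
    return {'http': proxy_ip}


ip_list = ['61.135.217.7:80', '118.114.77.47:8080']  # illustrative placeholders
proxies = get_random_ip(ip_list)

# The actual request then routes through the chosen proxy (requires the requests library):
# response = requests.get(target_url, headers=headers, proxies=proxies, timeout=5)
```

Passing `proxies=` per request, rather than configuring it globally, makes it easy to pick a fresh proxy whenever one gets blocked.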
The above is the detailed content of Detailed explanation of IP automatic proxy method using python to crawl soft exam questions. For more information, please follow other related articles on the PHP Chinese website!