Python crawling article example tutorial
This article walks through using Python to crawl articles from the prose site sanwen.net. The walkthrough is detailed and should have some reference and learning value; it is shared here for everyone to study. Let's take a look at the details:
The finished result (screenshot omitted here) is a folder of .txt files, one per crawled article.
Environment: Python 2.7, plus the bs4 and requests libraries.

Install both with pip:

sudo pip install bs4
sudo pip install requests

A quick word on bs4, since we are parsing web pages: the two methods used here are find and find_all. The difference between them is what they return. find returns the first matching tag together with its contents; find_all returns a list of every match.
For example, we write a test.html to test the difference between find and find_all.
The content is:
<html>
  <head>
  </head>
  <body>
    <p id="one"><a></a></p>
    <p id="two"><a href="#">abc</a></p>
    <p id="three"><a href="#">three a</a><a href="#">three a</a><a href="#">three a</a></p>
    <p id="four"><a href="#">four<p>four p</p><p>four p</p><p>four p</p> a</a></p>
  </body>
</html>

Then the code of test.py is:
from bs4 import BeautifulSoup

if __name__ == '__main__':
    s = BeautifulSoup(open('test.html'), 'lxml')
    print s.prettify()
    print "------------------------------"
    print s.find('p')
    print s.find_all('p')
    print "------------------------------"
    print s.find('p', id='one')
    print s.find_all('p', id='one')
    print "------------------------------"
    print s.find('p', id='two')
    print s.find_all('p', id='two')
    print "------------------------------"
    print s.find('p', id='three')
    print s.find_all('p', id='three')
    print "------------------------------"
    print s.find('p', id='four')
    print s.find_all('p', id='four')
    print "------------------------------"

After running it we can see the result: when fetching a single specified tag there is not much difference between the two, but when fetching a group of tags the difference shows up.
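To make the return types concrete, here is a minimal sketch, assuming the test.html above sits in the working directory:

from bs4 import BeautifulSoup

s = BeautifulSoup(open('test.html'), 'lxml')
first = s.find('p')      # a single Tag: the first <p> in document order
every = s.find_all('p')  # every matching <p>, as a list
print type(first)        # <class 'bs4.element.Tag'>
print type(every)        # <class 'bs4.element.ResultSet'>, a list subclass
print len(every)         # how many <p> tags lxml ended up parsing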
# Imports used across all three functions of the script
import re
import requests
from bs4 import BeautifulSoup

def get_html():
    url = "https://www.sanwen.net/"
    two_html = ['sanwen', 'shige', 'zawen', 'suibi', 'rizhi', 'novel']
    for doc in two_html:
        i = 1
        if doc == 'sanwen':
            print "running sanwen -----------------------------"
        if doc == 'shige':
            print "running shige ------------------------------"
        if doc == 'zawen':
            print 'running zawen -------------------------------'
        if doc == 'suibi':
            print 'running suibi -------------------------------'
        if doc == 'rizhi':
            print 'running rizhi -------------------------------'
        if doc == 'novel':
            print 'running xiaoxiaoshuo -------------------------'
        while i < 10:
            par = {'p': i}
            res = requests.get(url + doc + '/', params=par)
            if res.status_code == 200:
                soup(res.text)
            i += 1  # step one page at a time; i += i would skip pages

In this part of the code I did not handle responses whose res.status_code is not 200. The resulting problem is that errors are never shown and the crawl silently loses content. I then analyzed the prose site's pages and found that the listing URLs look like www.sanwen.net/rizhi/?p=1.
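The loop above silently skips any page whose status is not 200, which is one way articles disappear. A minimal retry sketch (get_with_retry is a hypothetical helper of mine, not part of the original script):

import requests

def get_with_retry(url, params, tries=3):
    # Try a few times and report pages that never return 200,
    # instead of silently dropping them.
    for attempt in range(tries):
        res = requests.get(url, params=params, timeout=10)
        if res.status_code == 200:
            return res
        print 'attempt %d failed for %s (status %d)' % (attempt + 1, url, res.status_code)
    return None

get_html's while loop could then call this instead of requests.get directly and only parse when the result is not None.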
def soup(html_text):
    s = BeautifulSoup(html_text, 'lxml')
    link = s.find('p', class_='categorylist').find_all('li')
    for i in link:
        if i != s.find('li', class_='page'):
            title = i.find_all('a')[1]
            author = i.find_all('a')[2].text
            url = title.attrs['href']
            sign = re.compile(r'(//)|/')  # matches '//' or a single '/'
            match = sign.search(title.text)
            file_name = title.text
            if match:
                file_name = sign.sub('a', title.text)

There is a snag when getting the title. Tell me, guys, why do you put slashes in a title when you write prose? Not just one, sometimes even two. Those slashes directly caused errors in the file names when I wrote the files later, so I wrote a regular expression to replace them.
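To see what that regular expression does to a title, here is a standalone check; the sample titles are made up. The point is that '/' is the path separator on Linux, so a raw title with slashes breaks open(file_name + '.txt', 'w'):

# -*- coding: utf-8 -*-
import re

sign = re.compile(r'(//)|/')  # same pattern as in soup()
for t in [u'plain title', u'one/slash', u'two//slashes']:
    print sign.sub('a', t)    # each '//' pair or lone '/' becomes 'a'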
def get_content(url):
    res = requests.get('https://www.sanwen.net' + url)
    if res.status_code == 200:
        # named 'page' to avoid shadowing the soup() function above
        page = BeautifulSoup(res.text, 'lxml')
        contents = page.find('p', class_='content').find_all('p')
        content = ''
        for i in contents:
            content += i.text + '\n'
        return content

The last thing is to write the file and save it:
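As a quick sanity check, get_content can be called on a single article before wiring the three functions together. The relative path below is a made-up placeholder; real paths come from title.attrs['href'] in soup():

# '/subject/example.html' is a hypothetical path for illustration only
text = get_content('/subject/example.html')
if text:
    print text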
f = open(file_name + '.txt', 'w')
print 'running w txt ' + file_name + '.txt'
f.write(title.text + '\n')
f.write(author + '\n')
content = get_content(url)
f.write(content)
f.close()

These three functions fetch the prose from the prose network, but there is a problem: I don't know why some articles get lost. I can only get about 400 of them, far fewer than sanwen.net actually has, even though the pages are fetched one by one. I hope someone can help me solve this; perhaps some pages are simply inaccessible. Of course, I suspect it also has something to do with the broken network in my dormitory.
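One Python 2 caveat: title.text and the returned content are unicode, and writing non-ASCII text through the built-in open() can raise UnicodeEncodeError. A minimal encoding-safe variant of the same block, using the standard codecs module (my assumption, not from the original post):

import codecs

# Same write block, but codecs.open encodes unicode as UTF-8 on the way out
f = codecs.open(file_name + '.txt', 'w', encoding='utf-8')
f.write(title.text + '\n')
f.write(author + '\n')
content = get_content(url)
if content:  # get_content returns None on a non-200 response
    f.write(content)
f.close()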