This article was originally written by Xiao Hao at MaNong.com. Please read the reprint requirements at the end of the article before reprinting, and you are welcome to join our paid contribution plan!
I have always had the habit of watching American TV series, partly to practice my English listening and partly to pass the time. I used to be able to watch them online on video sites, but since the restriction order from the State Administration of Radio, Film and Television, imported American and British shows no longer seem to be updated in sync with their original broadcasts. Still, as a drama addict I wasn't willing to give up following shows, so I searched around and found [Tiantian American Dramas], a download site for American dramas whose resources can all be grabbed with Thunder. Lately I've been hooked on BBC high-definition documentaries; nature really is beautiful.
Although I had found a site with downloadable resources, I still had to open the browser, type in the URL, find the show, and click the link to download every single time. Over time the process gets tedious, and sometimes the site simply won't load, which is annoying. Since I happen to have been learning Python crawlers, today I wrote one on a whim to grab all the download links for every show on the site and save them in a text file. Whenever I want a show, I just open the file, copy the link, and download it with Thunder.
Originally I planned to write a crawler that would open each URL with requests, grab the download links, and spider outward from the homepage across the whole site. But there were a lot of duplicate links, and the site's URLs were not as regular as I expected; after tinkering for a long time I still hadn't produced the kind of spreading, link-following crawler I wanted. Apparently I'm not there yet, so I'll keep practicing...
Later I noticed that each show's links all live inside an article page, and that the article URLs end with a number, like http://cn163.net/archives/24016/. So, drawing on a crawler I had written before, I decided to generate the URLs automatically: the trailing number can simply be varied, and each show's article is unique. I roughly worked out how many articles there were, then used the range function to produce consecutive numbers and build the URLs.
Of course, many of the generated URLs don't exist and would just fail outright. No problem: since we are using requests, its status_code attribute tells us what status the request returned, so we simply skip every URL that comes back with a 404 and crawl the rest. That solves the URL problem.
Here is the code that implements the steps above.
def get_urls(self):
    try:
        for i in range(2015, 25000):
            base_url = 'http://cn163.net/archives/'
            url = base_url + str(i) + '/'
            if requests.get(url).status_code == 404:
                continue
            else:
                self.save_links(url)
    except Exception, e:
        pass
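A small aside: the loop above downloads each candidate page in full just to read its status code, and save_links then fetches the page again. A lighter variant, only a sketch and assuming the server answers HEAD requests the same way it answers GET, could probe each ID first and also skip over timeouts and connection errors instead of letting one bad request end the whole loop:

def get_urls(self):
    base_url = 'http://cn163.net/archives/'
    for i in range(2015, 25000):
        url = base_url + str(i) + '/'
        try:
            # HEAD returns only the status line and headers, not the page body
            if requests.head(url, timeout=3).status_code == 404:
                continue
        except requests.RequestException:
            # a timeout or connection error on one ID should not abort the crawl
            continue
        self.save_links(url)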
The rest went fairly smoothly. I found a similar crawler someone had written online, but it only crawled a single article, so I borrowed its regular expressions. I also tried BeautifulSoup, but it didn't work as well as the regex approach, so I dropped it decisively; there really is no end to learning. The result still isn't ideal, though: roughly half of the links aren't extracted correctly, so there is more optimizing to do.
# -*- coding:utf-8 -*-
# (Python 2 script: reload/setdefaultencoding and the print statements are Python 2 only)
import requests
import re
import sys
import threading
import time

reload(sys)
sys.setdefaultencoding('utf-8')

class Archives(object):

    def save_links(self, url):
        try:
            data = requests.get(url, timeout=3)
            content = data.text
            link_pat = '"(ed2k://\|file\|[^"]+?\.(S\d+)(E\d+)[^"]+?1024X\d{3}[^"]+?)"'
            name_pat = re.compile(r'<h2 id="">(.*?)</h2>', re.S)
            links = set(re.findall(link_pat, content))
            name = re.findall(name_pat, content)
            links_dict = {}
            count = len(links)
        except Exception, e:
            # if the page could not be fetched or parsed, skip this article
            return
        for i in links:
            # index each episode by its S/E numbers: SxxEyy -> xx * 100 + yy
            links_dict[int(i[1][1:3]) * 100 + int(i[2][1:3])] = i
        try:
            with open(name[0].replace('/', ' ') + '.txt', 'w') as f:
                print name[0]
                # write the links sorted by season + episode number
                for i in sorted(list(links_dict.keys())):
                    f.write(links_dict[i][0] + '\n')
                print "Get links ... ", name[0], count
        except Exception, e:
            pass

    def get_urls(self):
        try:
            for i in range(2015, 25000):
                base_url = 'http://cn163.net/archives/'
                url = base_url + str(i) + '/'
                if requests.get(url).status_code == 404:
                    continue
                else:
                    self.save_links(url)
        except Exception, e:
            pass

    def main(self):
        # pass the method itself as the thread target; calling it here would run it in the main thread
        thread1 = threading.Thread(target=self.get_urls)
        thread1.start()
        thread1.join()

if __name__ == '__main__':
    start = time.time()
    a = Archives()
    a.main()
    end = time.time()
    print end - start
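The slightly cryptic line links_dict[int(i[1][1:3]) * 100 + int(i[2][1:3])] = i just builds a sortable integer key out of the season and episode markers that the regular expression captured. A minimal illustration of the idea, using a made-up match tuple of the shape re.findall returns here:

# each match is a tuple (full ed2k link, 'Sxx', 'Eyy'); this sample is hypothetical
match = ('ed2k://|file|Example.S02E05.1024X576.mkv|...|/', 'S02', 'E05')

season = int(match[1][1:3])    # 'S02' -> 2
episode = int(match[2][1:3])   # 'E05' -> 5
key = season * 100 + episode   # 2 * 100 + 5 = 205

# sorting these keys orders the links season by season,
# and within each season episode by episode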
The full version of the code also uses multi-threading, but it doesn't seem to help; maybe it's Python's GIL. There appear to be more than 20,000 article IDs, and I thought the crawl would take forever, but excluding URL errors and pages with no matches, the whole run finished in under 20 minutes. I had originally wanted to use Redis to crawl from two Linux machines, but after a lot of fussing it felt unnecessary, so I left it at that and will pick it up again when I need more data.
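For what it's worth, the GIL mainly penalises CPU-bound work; these requests spend most of their time waiting on the network, so several threads can overlap that waiting. The bigger limitation in the script above is that only a single worker thread ever runs. A rough sketch of splitting the ID range across a few workers, with function and variable names that are mine rather than the original's:

import threading
import requests

def crawl_range(start, end):
    # each worker probes its own slice of article IDs
    base_url = 'http://cn163.net/archives/'
    for i in range(start, end):
        url = base_url + str(i) + '/'
        try:
            if requests.get(url, timeout=3).status_code != 404:
                pass  # call save_links(url) from the original class here
        except requests.RequestException:
            continue

# split 2015..25000 into four roughly equal slices, one thread per slice
bounds = [2015, 7761, 13507, 19253, 25000]
threads = [threading.Thread(target=crawl_range, args=(bounds[n], bounds[n + 1]))
           for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()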
Another problem that tortured me along the way was saving the file names, and I have to complain about it here: a text file's name may contain spaces, but it cannot contain slashes, backslashes, brackets and the like. That cost me an entire morning. At first I assumed the crawl itself was failing, and only after a long hunt did I discover that one of the crawled show titles contained a slash. That one character made me miserable.
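The single replace('/', ' ') in the script only handles the character that actually bit me. If other forbidden characters ever show up in a title, a more general cleanup could look like the sketch below; the character set is roughly what Windows rejects in file names, and other systems are more permissive:

import re

def safe_filename(title):
    # replace characters that common filesystems reject in file names
    return re.sub(r'[\\/:*?"<>|]', ' ', title).strip()

# a hypothetical title such as 'Planet Earth / BBC' becomes 'Planet Earth   BBC'
# open(safe_filename(name[0]) + '.txt', 'w')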