Python crawler obtains American drama download links

This article was originally written by Xiao Hao at MaNong.com. Please read the reprint requirements at the end of the article before reprinting. You are welcome to take part in our paid contribution plan!

I have always had the habit of watching American TV series: it lets me practice my English listening and helps pass the time. It used to be possible to watch them online on video sites, but since the restriction order from the State Administration of Radio, Film and Television, imported American and British dramas no longer seem to be updated in sync with their original broadcasts. As a nerd, though, I couldn't just stop following my shows, so I looked around online and found an American drama download site, [Tiantian American Dramas], whose resources can all be downloaded with Thunder. Lately I have been hooked on BBC's high-definition documentaries; nature really is that beautiful.

Even with a resource site, though, I still had to open the browser every time, type in the URL, find the show, and then click the download link. Over time the process gets tedious, and sometimes the site simply won't load, which is even more annoying. Since I happen to have been learning Python web crawling, I wrote a crawler on a whim to grab all the drama links on the site and save them into text files. Whenever I want a show, I just open the file, copy the link, and download it with Thunder.

Originally I planned to write a crawler that starts from the homepage, opens each URL with requests, grabs the download links, and spiders out across the whole site. But there turned out to be a lot of duplicate links, and the site's URLs were not as regular as I expected; after tinkering for a long time I still hadn't produced the kind of spreading crawler I wanted. Maybe I'm just not there yet, so I'll keep working at it...

Later I discovered that each show's download links all live inside an article page, and the article URL ends in a number, like http://cn163.net/archives/24016/. So I drew on the crawlers I had written before and decided to generate the URLs automatically: the trailing number can simply be varied, and each show's article is unique. I roughly worked out how many articles there are, then used the range function to generate a continuous run of numbers and build each URL from them.

Of course, many of the generated URLs don't exist, and requesting them would simply fail. No problem: since we are using requests, its status_code attribute tells us the status the server returned, so we skip every URL that comes back with a 404 and crawl the rest. That takes care of the URL problem.

The code below implements the steps described above.

def get_urls(self):
    try:
        for i in range(2015,25000):
            base_url='http://cn163.net/archives/'
            url=base_url+str(i)+'/'
            if requests.get(url).status_code == 404:
                continue
            else:
                self.save_links(url)
    except Exception,e:
        pass
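
One detail worth flagging, as my own observation rather than part of the original write-up: this version downloads every page once just to check the status code, and save_links then downloads it again. Below is a minimal sketch of how the two steps could share a single request, assuming a modified save_links(url, html) signature (hypothetical) and using a requests.Session for connection reuse:

import requests

def get_urls(save_links, start=2015, end=25000):
    # Sketch: fetch each archive page once, skip 404s, and hand the
    # already-downloaded HTML to save_links instead of fetching it twice.
    session = requests.Session()  # reuses the underlying TCP connection
    base_url = 'http://cn163.net/archives/'
    for i in range(start, end):
        url = base_url + str(i) + '/'
        try:
            resp = session.get(url, timeout=3)
        except requests.RequestException:
            continue  # timeout or connection error: skip this id
        if resp.status_code == 404:
            continue  # no article under this number
        save_links(url, resp.text)  # hypothetical callback taking the page HTML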

The rest went very smoothly. I found a similar crawler that someone had written before online, but it only crawled a single article, so I borrowed its regular expressions. I also tried BeautifulSoup, but it didn't work as well as the regex approach, so I dropped it decisively; there really is no end to learning. Still, the result is not ideal: about half of the links are not extracted correctly, so the crawler needs further optimisation.
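
For comparison, here is roughly what a BeautifulSoup-based extraction could look like. This is a sketch added for illustration, not the code the post actually used, and it assumes the ed2k links appear as plain <a href> anchors on the article page, which is not guaranteed for every article:

import requests
from bs4 import BeautifulSoup

def extract_ed2k_links(url):
    # Sketch: pull the article title and its ed2k:// links with BeautifulSoup.
    html = requests.get(url, timeout=3).text
    soup = BeautifulSoup(html, 'html.parser')
    title_tag = soup.find('h2')
    title = title_tag.get_text(strip=True) if title_tag else url
    links = [a['href'] for a in soup.find_all('a', href=True)
             if a['href'].startswith('ed2k://')]
    return title, links

The full regex-based script is below.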

#  -*- coding:utf-8 -*-
import requests 
import re
import sys
import threading
import time
reload(sys)
sys.setdefaultencoding('utf-8')
class Archives(object):

    def save_links(self,url):
        try:
            data=requests.get(url,timeout=3)
            content=data.text
            link_pat='"(ed2k://\|file\|[^"]+?\.(S\d+)(E\d+)[^"]+?1024X\d{3}[^"]+?)"'
            name_pat=re.compile(r'<h2 id="">(.*?)</h2>',re.S)
            links = set(re.findall(link_pat,content))
            name=re.findall(name_pat,content)
            links_dict = {}
            count=len(links)
        except Exception,e:
            return  # request or parsing failed, skip this article
        for i in links:
            links_dict[int(i[1][1:3]) * 100 + int(i[2][1:3])] = i  # key each episode by its season (S) and episode (E) number
        try:
            with open(name[0].replace('/',' ')+'.txt','w') as f:
                print name[0]
                for i in sorted(list(links_dict.keys())):  # write links sorted by season, then episode
                    f.write(links_dict[i][0] + '\n')
            print "Get links ... ", name[0], count
        except Exception,e:
            pass

    def get_urls(self):
        try:
            for i in range(2015,25000):
                base_url='http://cn163.net/archives/'
                url=base_url+str(i)+'/'
                if requests.get(url).status_code == 404:
                    continue
                else:
                    self.save_links(url)
        except Exception,e:
            pass
    def main(self):
        thread1=threading.Thread(target=self.get_urls)  # pass the method itself, not its return value
        thread1.start()
        thread1.join()

if __name__ == '__main__':
    start=time.time()
    a=Archives()
    a.main()
    end=time.time()
    print end-start

The full version of the code also uses multi-threading, but it didn't seem to help, perhaps because of Python's GIL. There appear to be more than 20,000 article IDs, and I expected the crawl to take ages, but excluding the non-existent URLs and the pages that didn't match the pattern, the whole crawl took less than 20 minutes. I had also thought about using Redis to distribute the crawl across two Linux machines, but after some fiddling it felt unnecessary, so I shelved that idea and will revisit it when I need more data.
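
Since the work here is almost entirely waiting on the network, the GIL is usually not the limiting factor for this kind of crawl; note also that main() above only ever starts a single thread. As a rough illustration only (my own sketch, not the original script; it needs Python 3's concurrent.futures, or the futures backport on Python 2), the id range could be spread over a small thread pool, with save_links standing in for the method above:

from concurrent.futures import ThreadPoolExecutor
import requests

def fetch_one(i, save_links):
    # Check a single archive id and hand existing pages to save_links.
    url = 'http://cn163.net/archives/%d/' % i
    try:
        if requests.get(url, timeout=3).status_code != 404:
            save_links(url)
    except requests.RequestException:
        pass  # unreachable URL: just skip it

def crawl_parallel(save_links, start=2015, end=25000, workers=10):
    # Sketch: spread the id range over a small thread pool. The work is
    # network-bound, so threads overlap the waiting time even though the
    # GIL serialises Python bytecode execution.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for i in range(start, end):
            pool.submit(fetch_one, i, save_links)

In practice the pool should stay small so the site isn't hammered with requests.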

Another thing that tortured me along the way was saving the files. Let me complain here: a txt file name may contain spaces, but it cannot contain slashes, backslashes, brackets and so on. That was exactly the problem, and it cost me a whole morning. At first I thought the crawl itself was producing bad data, and only after a long hunt did I realise the crawled drama title contained a slash. That made me miserable.
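
The script above only replaces the slash; a slightly broader guard (my own sketch, not part of the original) strips every character that common filesystems reject before using the title as a file name:

import re

def safe_filename(name):
    # Replace characters that are illegal in file names on common filesystems:
    # slash, backslash, colon, asterisk, question mark, double quote,
    # angle brackets and the pipe character.
    return re.sub(r'[\\/:*?"<>|]', ' ', name).strip()

# Example: safe_filename('Sherlock S01/E01') -> 'Sherlock S01 E01'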
