


Generate Project
Scrapy provides a tool to generate projects. Some files are preset in the generated project, and users need to add their own code to these files.
Open the command line and execute: scrapy startproject tutorial. The generated project has a structure similar to the following:

tutorial/
    scrapy.cfg
    tutorial/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
The name attribute is important: different spiders cannot use the same name.
start_urls is the starting point for the spider's crawl and can contain multiple URLs.
The parse method is the callback invoked by default after the spider fetches a page; avoid using this name for your own methods.
When the spider has fetched the content at a URL, it calls the parse method and passes it a response parameter containing the content of the fetched page. In the parse method, you can parse data out of that page. The code below simply saves the page content to a file.
Start crawling

You can open the command line, enter the generated project root directory tutorial/, and execute scrapy crawl dmoz, where dmoz is the name of the spider. The DmozSpider used here looks like this:

from scrapy.spider import BaseSpider

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        open(filename, 'wb').write(response.body)

HtmlXPathSelector uses XPath to parse data:
//ul/li means selecting all li tags under ul tags
a/@href means selecting the href attribute of all a tags
a/text() means selecting the text of all a tags
a[@href="abc"] means selecting all a tags whose href attribute is abc
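The same expressions can be tried outside Scrapy. The sketch below uses the standard library's xml.etree.ElementTree, which supports a limited XPath subset (it has no text() step, so element text is read via .text instead); in Scrapy you would write the identical expressions against HtmlXPathSelector. The sample HTML is made up for illustration.

```python
# Standalone sketch of the XPath expressions above, using the standard
# library's xml.etree.ElementTree (limited XPath subset, not Scrapy).
import xml.etree.ElementTree as ET

html = """
<html><body>
  <ul>
    <li><a href="abc">First link</a></li>
    <li><a href="def">Second link</a></li>
  </ul>
</body></html>
"""
root = ET.fromstring(html)

# //ul/li -> all li tags under a ul tag
items = root.findall('.//ul/li')
print(len(items))                                   # 2

# a/@href -> the href attribute of each a tag
print([li.find('a').get('href') for li in items])   # ['abc', 'def']

# a/text() -> the text of each a tag (via .text in ElementTree)
print([li.find('a').text for li in items])          # ['First link', 'Second link']

# a[@href="abc"] -> only the a tags whose href attribute is abc
print(len(root.findall('.//a[@href="abc"]')))       # 1
```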
We can save the parsed data in objects that Scrapy understands; Scrapy can then save these objects for us, so we don't have to write the data to a file ourselves. We need to add some classes to items.py that describe the data we want to save.
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li')
        for site in sites:
            title = site.select('a/text()').extract()
            link = site.select('a/@href').extract()
            desc = site.select('text()').extract()
            print title, link, desc
When executing scrapy on the command line, we can add two parameters to have Scrapy write the items returned by the parse method to a JSON file:
scrapy crawl dmoz -o items.json -t json
items.json will be placed in the root directory of the project
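Note that each field in the exported items is a list, because extract() always returns a list of matches. A minimal sketch of reading the exported data back with the standard library (the sample data below is illustrative, not real crawl output):

```python
import json

# Illustrative sample of what items.json might contain; extract() returns
# lists, so each field value is a list even when there is a single match.
sample = '''[
  {"title": ["Example Book"],
   "link": ["http://example.com/book"],
   "desc": ["An example description"]}
]'''

items = json.loads(sample)
print(len(items))              # 1
print(items[0]['title'][0])    # Example Book
```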
Let scrapy automatically crawl all links on the webpage

In the example above, Scrapy only crawls the contents of the two URLs in start_urls, but usually what we want is for Scrapy to discover all the links on a page automatically and then crawl the content behind those links. To achieve this, we can extract the links we need in the parse method, construct some Request objects from them, and return them; Scrapy will then crawl these links automatically. The code is similar to:

from scrapy.item import Item, Field

class DmozItem(Item):
    title = Field()
    link = Field()
    desc = Field()

Then, in the spider's parse method, we save the parsed data in DmozItem objects.

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from tutorial.items import DmozItem

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li')
        items = []
        for site in sites:
            item = DmozItem()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            item['desc'] = site.select('text()').extract()
            items.append(item)
        return items

parse is the default callback; it returns a list of Request objects. Scrapy automatically crawls pages based on this list. Whenever a page is fetched, parse_item is called; parse_item also returns a list, Scrapy crawls pages based on that list as well, and calls parse_details after fetching them.
To make this kind of work easier, Scrapy provides another spider base class with which we can easily implement automatic crawling of links. We need to use CrawlSpider.
class MySpider(BaseSpider):
    name = 'myspider'
    start_urls = (
        'http://example.com/page1',
        'http://example.com/page2',
    )

    def parse(self, response):
        # collect `item_urls`
        for item_url in item_urls:
            yield Request(url=item_url, callback=self.parse_item)

    def parse_item(self, response):
        item = MyItem()
        # populate `item` fields
        yield Request(url=item_details_url, meta={'item': item},
                      callback=self.parse_details)

    def parse_details(self, response):
        item = response.meta['item']
        # populate more `item` fields
        return item
Compared with BaseSpider, the new class has an additional rules attribute. This attribute is a list that can contain multiple Rule objects; each Rule describes which links should be crawled and which should not. See the documentation for the Rule class: http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.contrib.spiders.Rule
These rules can have callbacks or not; when there is no callback, Scrapy simply follows all these links.
Usage of pipelines.py
In pipelines.py we can add some classes to filter out the items we don’t want and save the items to the database.
A CrawlSpider example:

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MininovaSpider(CrawlSpider):
    name = 'mininova.org'
    allowed_domains = ['mininova.org']
    start_urls = ['http://www.mininova.org/today']
    rules = [
        Rule(SgmlLinkExtractor(allow=['/tor/\d+'])),
        Rule(SgmlLinkExtractor(allow=['/abc/\d+']), 'parse_torrent'),
    ]

    def parse_torrent(self, response):
        x = HtmlXPathSelector(response)
        torrent = TorrentItem()
        torrent['url'] = response.url
        torrent['name'] = x.select("//h1/text()").extract()
        torrent['description'] = x.select("//div[@id='description']").extract()
        torrent['size'] = x.select("//div[@id='info-left']/p[2]/text()[2]").extract()
        return torrent
If an item does not meet the requirements, an exception is thrown and the item is not output to the JSON file.
To use pipelines, we also need to modify settings.py
Add a line
ITEM_PIPELINES = ['dirbot.pipelines.FilterWordsPipeline']
Now execute scrapy crawl dmoz -o items.json -t json; items that do not meet the requirements are filtered out.
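A sketch of what such a filtering pipeline can look like. In a real project the pipeline raises scrapy.exceptions.DropItem; here a local stand-in exception keeps the sketch runnable without Scrapy installed. The class name FilterWordsPipeline mirrors the one registered in settings.py above, but the word list and field names are assumptions for illustration.

```python
# Stand-in for scrapy.exceptions.DropItem so this sketch runs standalone.
class DropItem(Exception):
    pass

class FilterWordsPipeline(object):
    """Drops items whose description contains a forbidden word (assumed list)."""
    words_to_filter = ['politics', 'religion']

    def process_item(self, item, spider):
        for word in self.words_to_filter:
            if word in (item.get('desc') or ''):
                raise DropItem('Contains forbidden word: %s' % word)
        return item

# Usage with plain dicts standing in for item objects:
pipeline = FilterWordsPipeline()
ok = pipeline.process_item({'title': 'A book', 'desc': 'about python'}, spider=None)
print(ok['title'])                       # A book
try:
    pipeline.process_item({'title': 'Bad', 'desc': 'politics daily'}, spider=None)
except DropItem as e:
    print(e)                             # Contains forbidden word: politics
```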
