python - Scrapy URL deduplication

Does Scrapy deduplicate URLs automatically? For example, in the code below, why are the duplicate URLs in start_urls all crawled when the spider runs?

import scrapy
from ..items import TestspiderItem  # item class from the project's items.py (assuming the default project layout)


class TestSpider(scrapy.Spider):
    name = "test"
    allowed_domains = ["baidu.com"]
    start_urls = ['http://baike.baidu.com/fenlei/%E5%A8%B1%E4%B9%90%E4%BA%BA%E7%89%A9',
                  'http://baike.baidu.com/fenlei/%E5%A8%B1%E4%B9%90%E4%BA%BA%E7%89%A9',
                  'http://baike.baidu.com/fenlei/%E5%A8%B1%E4%B9%90%E4%BA%BA%E7%89%A9',]

    def parse(self, response):
        for sel in response.xpath('//p[@class="grid-list grid-list-spot"]/ul/li'):
            item = TestspiderItem()
            item['title'] = sel.xpath('p[@class="list"]/a/text()')[0].extract()
            item['link'] = sel.xpath('p[@class="list"]/a/@href')[0].extract()
            yield item
阿神 · 2787 days ago · 803 views

All replies (2)

  • 迷茫 · 2017-04-18 10:29:19

    Build a URL manager (sketched below) and duplicate URLs won't be crawled again.
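
    A minimal sketch of what such a URL manager could look like, assuming a plain in-memory seen-set (Scrapy already ships the equivalent, RFPDupeFilter, so this is illustrative only):

    class UrlManager:
        """Tracks URLs that have already been scheduled for crawling."""

        def __init__(self):
            self.seen = set()

        def should_crawl(self, url):
            # True only the first time a given URL is offered.
            if url in self.seen:
                return False
            self.seen.add(url)
            return True

    # Usage: skip any URL the manager has seen before, e.g. in a spider:
    #   if self.manager.should_crawl(url):
    #       yield scrapy.Request(url, self.parse)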

  • PHP中文网 · 2017-04-18 10:29:19

    Got it. Changing it to the following works:

    def start_requests(self):
        yield scrapy.Request('http://baike.baidu.com/fenlei/%E5%A8%B1%E4%B9%90%E4%BA%BA%E7%89%A9', self.parse)
        yield scrapy.Request('http://baike.baidu.com/fenlei/%E5%A8%B1%E4%B9%90%E4%BA%BA%E7%89%A9', self.parse)
        yield scrapy.Request('http://baike.baidu.com/fenlei/%E5%A8%B1%E4%B9%90%E4%BA%BA%E7%89%A9', self.parse)
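
    Why this works: Scrapy's default Spider.start_requests yields every entry in start_urls with dont_filter=True, which bypasses the scheduler's duplicate filter (RFPDupeFilter). Requests you yield yourself keep the default dont_filter=False, so the second and third duplicates are dropped. Roughly, the built-in default looks like this:

    # Simplified sketch of Scrapy's default Spider.start_requests:
    def start_requests(self):
        for url in self.start_urls:
            # dont_filter=True tells the scheduler not to run this request
            # through the RFPDupeFilter, so duplicates in start_urls are
            # all crawled.
            yield scrapy.Request(url, dont_filter=True)

    Conversely, pass dont_filter=True explicitly whenever you do want the same URL fetched more than once.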
