
python - Why won't my Scrapy spider keep crawling in a loop?

Scrapy only crawls the links on a single page and won't keep running to crawl the whole site. The code is below; I'm a beginner and would appreciate some guidance.

import random
from urllib.parse import urljoin

import scrapy

class DbbookSpider(scrapy.Spider):
    name = "imufe"
    # allowed_domains takes bare domain names, not URLs; with a full URL here
    # the OffsiteMiddleware silently filters out every followed request
    allowed_domains = ['imufe.edu.cn']
    # note the trailing comma: without it this is a plain string, not a tuple
    start_urls = ('http://www.imufe.edu.cn/main/dtxw/201704/t20170414_127035.html',)

    def parse(self, response):
        # collect every link on the page, resolved to an absolute URL
        links = [urljoin(response.url, href)
                 for href in response.xpath('//a/@href').extract()]
        for each in links:
            # DoubanbookItem comes from the project's items module
            item = DoubanbookItem()
            item['link'] = each
            yield item
        # follow one randomly chosen link so the crawl keeps going
        if links:
            nextPage = random.choice(links)
            yield scrapy.Request(nextPage, callback=self.parse)
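One likely culprit is the `start_urls` line: wrapping a single string in parentheses without a trailing comma produces a plain string, not a tuple, so Scrapy iterates over it character by character. A minimal, dependency-free sketch of the pitfall (URL taken from the question):

```python
# Parentheses alone do not make a tuple -- this is still a str
start_urls = ('http://www.imufe.edu.cn/main/dtxw/201704/t20170414_127035.html')
print(type(start_urls).__name__)   # str

# The trailing comma is what makes it a one-element tuple
start_urls_fixed = ('http://www.imufe.edu.cn/main/dtxw/201704/t20170414_127035.html',)
print(type(start_urls_fixed).__name__)  # tuple
print(len(start_urls_fixed))            # 1 -- one URL, as intended
```

Since Scrapy iterates over `start_urls` to build its initial requests, the string form would try to request each individual character as a URL.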
PHP中文网 · 2788 days ago · 532

All replies (1)

  • 大家讲道理 2017-04-18 10:36:45

    Were you crawling too fast and got banned?

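If the site really is rate-limiting the spider, Scrapy's built-in throttling settings can slow the crawl down. A minimal sketch for the project's `settings.py` (the values are illustrative, not tuned for this site):

```python
# settings.py -- illustrative throttle values, adjust for the target site
DOWNLOAD_DELAY = 2                  # wait 2 s between requests
CONCURRENT_REQUESTS_PER_DOMAIN = 1  # one request at a time per domain
AUTOTHROTTLE_ENABLED = True         # adapt the delay to server latency
AUTOTHROTTLE_START_DELAY = 1
AUTOTHROTTLE_MAX_DELAY = 10
ROBOTSTXT_OBEY = True               # respect the site's robots.txt
```

With AutoThrottle enabled, Scrapy adjusts the delay dynamically based on response times, which is usually gentler than a fixed delay alone.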