node.js - How does a Node.js crawler control the number of requests?

When using Node.js to crawl web page content, making too many requests sometimes throws an exception with errors such as "too many connections". Does Node.js have keywords or libraries for something like thread locking? Or is there a better way to handle this? Thanks in advance!

滿天的星座 · 2750 days ago

All replies (2)

  • typecho (2017-06-27 09:21:30)

    Node.js does not have a function like sleep.
    I usually pace the requests with events:

    const EventEmitter = require('events').EventEmitter;
    const ee = new EventEmitter();

    // Crawl one page each time the 'next' event fires.
    ee.on('next', (data) => {
        // crawl the site here
    });

    // Fire once per second, so requests go out at most one per second.
    setInterval(() => ee.emit('next', 'data'), 1000);
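
    One caveat with this approach: setInterval fires every second regardless of whether the previous request has finished, so slow responses can still pile up. A minimal sketch that instead schedules the next request only after the current one completes (crawlNext and the URLs are placeholders, not from the answer):

    const http = require('http');

    function crawlNext(urls) {
        if (urls.length === 0) return;               // nothing left to crawl
        const url = urls.shift();
        http.get(url, (res) => {
            res.resume();                            // consume the body (parse it in a real crawler)
            setTimeout(() => crawlNext(urls), 1000); // wait 1s before the next request
        }).on('error', () => setTimeout(() => crawlNext(urls), 1000));
    }

    crawlNext(['http://example.com/a', 'http://example.com/b']); // placeholder URLs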

  • ringa_lee (2017-06-27 09:21:30)

    async ?
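
    This presumably refers to the third-party async library (npm install async), whose queue utility caps how many tasks run at once. A minimal sketch, assuming a concurrency limit of 5 and placeholder URLs (calling q.drain as a function is the async v3 API):

    const async = require('async');
    const http = require('http');

    // At most 5 of these workers run at the same time.
    const q = async.queue((url, done) => {
        http.get(url, (res) => {
            res.resume();   // consume the body
            done();         // tell the queue this task is finished
        }).on('error', done);
    }, 5);

    q.drain(() => console.log('all pages fetched'));

    ['http://example.com/a', 'http://example.com/b'].forEach((url) => q.push(url));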
