Why does pyspider feel so slow at crawling (much slower than using requests and bs directly myself)? Is it because some pages get retried? The success rate is much higher than my own crawler's, though. Am I using it wrong? Please explain.
迷茫 2017-04-18 10:33:39
You can adjust the speed by setting the rate/burst parameters in the web UI dashboard. Rate is the number of requests issued per second, and burst is the allowed concurrency on top of that rate. The default is 1/3, which is why it feels slow. I'm still not very familiar with this tool myself, though.
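For reference, rate/burst is a per-project setting edited in the dashboard (the rate/burst field on the project row), not something you set inside the script. A minimal handler, sketched along the lines of the pyspider quickstart (the URL and selectors here are placeholders), looks roughly like this:

```python
from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {}  # options shared by every self.crawl() call

    @every(minutes=24 * 60)
    def on_start(self):
        # seed URL; replace with the site you actually want to crawl
        self.crawl('http://example.com/', callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        # follow every absolute link found on the page
        for each in response.doc('a[href^="http"]').items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
```

With the default rate/burst of 1/3, the scheduler only hands out roughly one new request per second (with short bursts of up to three), so raising those numbers in the dashboard, e.g. to something like 10/20, is usually the first thing to try when crawling feels slow.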
高洛峰 2017-04-18 10:33:39
I have never used a framework to write crawlers, but in my own experience, the more complex the concurrency model (for example, explicit thread control and thread-status monitoring), the lower the throughput tends to be when writing a concurrent crawler.
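As a point of comparison for that claim, a very simple concurrency model already goes a long way. The sketch below is my own illustration (not pyspider code): it fetches a list of URLs with requests and a plain thread pool, with no per-thread state tracking at all.

```python
import concurrent.futures

import requests


def fetch(url):
    """Download one page and return (url, status code, body length)."""
    resp = requests.get(url, timeout=10)
    return url, resp.status_code, len(resp.text)


def crawl(urls, workers=10):
    """Fetch all URLs with a fixed-size thread pool; no explicit thread management."""
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(fetch, u): u for u in urls}
        for fut in concurrent.futures.as_completed(futures):
            try:
                results.append(fut.result())
            except requests.RequestException as exc:
                # a failed page is just logged; the rest keep going
                print(f"failed: {futures[fut]} ({exc})")
    return results


if __name__ == "__main__":
    pages = crawl(["http://example.com/"] * 5)
    for url, status, size in pages:
        print(status, size, url)
```

This kind of thread-pool loop has almost no bookkeeping overhead, which is the contrast being drawn: frameworks like pyspider trade some raw speed for scheduling, retries, and rate limiting.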