As the title says.
I use a requests Session to log in.
After each run I call close() on the session.
Because I'm testing things, I often re-run the script immediately after it finishes.
When logging in via POST, I don't send anything besides the form data — no headers at all.
Is this some anti-crawler mechanism on the site? Why can a browser access it repeatedly without trouble? Should I be sending request headers?
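Sending only the POST body is a common cause of this: many sites reject requests that lack a browser-like User-Agent (and sometimes a Referer). A minimal sketch of adding headers to the session — the URL and form field names here are hypothetical placeholders, not from the original post:

```python
import requests

# Hypothetical login endpoint -- replace with the real one.
LOGIN_URL = "https://example.com/login"

session = requests.Session()
# Headers set on the Session are sent with every request it makes.
session.headers.update({
    # A browser-style User-Agent; many anti-crawler checks key on this.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    # Some sites also check where the login request came from.
    "Referer": LOGIN_URL,
})

# Hypothetical form field names -- inspect the real login form to find yours.
# resp = session.post(LOGIN_URL, data={"username": "me", "password": "secret"})

session.close()
```

If the site still blocks rapid repeat logins, it may be rate-limiting by IP or by cookie rather than by headers; in that case, reuse one logged-in session instead of logging in fresh on every run.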
ringa_lee — 2017-04-17 17:36:11
I recommend developing crawlers on the Archer Cloud Crawler Platform, which supports automatic collection in the cloud.
A few lines of JavaScript can implement a complex crawler, and it provides many utility features — anti-anti-crawler measures, JS rendering, data publishing, chart analysis, hotlink protection, etc. — so the problems you commonly run into while developing crawlers are handled for you by Archer.