

python - my crawler simulates logging in to a website many times, and after a while the site stops responding to it

As the title says.
I log in with a requests Session.
At the end of every run I call close() on it.
Because I'm testing things, I often re-run the program right after it finishes.
When I log in via POST I don't send anything else, only the POST data itself.

Is this some kind of anti-crawler mechanism on the site? Why is it fine when a browser visits the site repeatedly? Should I be sending request headers as well?
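
Roughly what the login code looks like (the URL and form field names below are placeholders, not the real site):

import requests

LOGIN_URL = "https://example.com/login"  # placeholder, not the real site

session = requests.Session()

# Log in by POSTing only the form data -- no extra headers, as described above.
resp = session.post(LOGIN_URL, data={"username": "me", "password": "secret"})
print(resp.status_code)

# ... fetch a few pages with the same session ...

session.close()  # closed at the end of every run, then the script is re-run right away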

PHPz · 2887 days ago · 538 views

All replies (3)

  • PHP中文网 · 2017-04-17 17:36:11

    Just save the cookie
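
    For example, a minimal sketch that stores the session cookies on disk and reloads them on the next run, so the script only logs in when no saved cookies exist (the login URL, form fields, and cookie file name are placeholders):

import os
import pickle

import requests

LOGIN_URL = "https://example.com/login"  # placeholder
COOKIE_FILE = "cookies.pkl"              # placeholder

session = requests.Session()

if os.path.exists(COOKIE_FILE):
    # Reuse the cookies saved by a previous run instead of logging in again.
    with open(COOKIE_FILE, "rb") as f:
        session.cookies.update(pickle.load(f))
else:
    # Log in once and persist the resulting cookies for the next run.
    session.post(LOGIN_URL, data={"username": "me", "password": "secret"})
    with open(COOKIE_FILE, "wb") as f:
        pickle.dump(session.cookies, f)

# Subsequent requests reuse the same logged-in cookies.
# resp = session.get("https://example.com/some/page")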

  • PHP中文网 · 2017-04-17 17:36:11

    You can add a User-Agent header to the request to simulate browser access.
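
    For example (the User-Agent string and login URL below are just examples; copy the UA your own browser sends if you prefer):

import requests

session = requests.Session()

# A browser-like User-Agent header, sent by every request in this session.
session.headers.update({
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/80.0.3987.132 Safari/537.36"
    )
})

resp = session.post("https://example.com/login",  # placeholder URL
                    data={"username": "me", "password": "secret"})
print(resp.status_code)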

  • ringa_lee · 2017-04-17 17:36:11

    It is recommended to develop your crawler on the Archer Cloud Crawler Platform, which supports automatic collection in the cloud.
    A few lines of JavaScript are enough to implement a complex crawler, and the platform provides many built-in features: anti-anti-crawling, JS rendering, data publishing, chart analysis, hotlink protection, and so on. The problems commonly run into while developing crawlers are handled for you by Archer.
