Python uses PhantomJS to crawl web pages after JS rendering
I recently needed to crawl a website, but unfortunately its pages were all rendered by JavaScript. Ordinary crawler frameworks couldn't handle that, so I thought of using PhantomJS to build a proxy.
There seems to be no ready-made third-party Python library for driving PhantomJS (if there is one, please let Xiao2 know). After looking around, I found that only pyspider offers a ready-made solution.
After a brief trial, I feel pyspider is more of a crawler tool built for beginners; like a fussy old granny, it is sometimes meticulous and sometimes long-winded.
A lightweight tool ought to be more appealing. With a bit of selfishness, it also lets me keep using my beloved BeautifulSoup instead of learning PyQuery (which pyspider uses to parse HTML), not to mention sparing me the awkward experience of writing Python in a browser (laughing).
So I spent an afternoon pulling out the part of pyspider that implements the PhantomJS proxy and turning it into a small crawler module. I hope everyone likes it (thanks binux!).
Preparation
You need PhantomJS, of course! (On Linux it is best to keep it running under supervisord; PhantomJS must stay up the whole time you are crawling. A minimal supervisord config is sketched after these steps.)
In the project directory, start the proxy with phantomjs_fetcher.js: phantomjs phantomjs_fetcher.js [port]
Install the tornado dependency (its tornado.httpclient module is used); pip install tornado will do.
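For reference, here is a minimal supervisord sketch for keeping the PhantomJS proxy alive; the program name, file paths, port and log locations are placeholders you will need to adapt to your own setup.

[program:phantomjs_fetcher]
; keep the PhantomJS proxy running and restart it if it dies
command=phantomjs /path/to/phantomjs_fetcher.js 12306
directory=/path/to/project
autostart=true
autorestart=true
stdout_logfile=/var/log/phantomjs_fetcher.out.log
stderr_logfile=/var/log/phantomjs_fetcher.err.log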
The call is super simple:

from tornado_fetcher import Fetcher

# create a crawler
>>> fetcher = Fetcher(
        user_agent='phantomjs',                    # the browser User-Agent to mimic
        phantomjs_proxy='http://localhost:12306',  # address of the PhantomJS proxy
        poolsize=10,                               # maximum number of httpclients
        async=False                                # synchronous or asynchronous
    )
# connect to the PhantomJS proxy; JS can now be rendered!
>>> fetcher.phantomjs_fetch(url)
# run an extra JS script after rendering succeeds (note: wrap it in a function!)
>>> fetcher.phantomjs_fetch(url, js_script='function(){setTimeout("window.scrollTo(0,100000)", 1000)}')
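To tie this together with BeautifulSoup (mentioned above), here is a minimal sketch of a synchronous fetch-and-parse flow. It follows the constructor shown above and assumes, as in pyspider's fetcher, that the synchronous phantomjs_fetch() returns a dict whose 'status_code' and 'content' fields carry the HTTP status and the rendered HTML; check the module's actual return value before relying on this.

# Minimal sketch: fetch a JS-rendered page synchronously and parse it with BeautifulSoup.
# Assumption: the result dict exposes 'status_code' and 'content' (as in pyspider's fetcher).
from bs4 import BeautifulSoup
from tornado_fetcher import Fetcher

fetcher = Fetcher(
    user_agent='phantomjs',
    phantomjs_proxy='http://localhost:12306',
    poolsize=10,
    async=False,            # synchronous mode, as in the example above
)

result = fetcher.phantomjs_fetch('http://example.com/')   # placeholder URL
if result['status_code'] == 200:
    soup = BeautifulSoup(result['content'], 'html.parser')
    print(soup.title.string)    # e.g. inspect the rendered <title>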