Which one is faster, a Python crawler or Octopus?
Octopus has some clear advantages: a low learning curve, a visual workflow, and fast setup of a collection pipeline. It can export results directly to Excel files or to a database. Its cloud collection service provides 10 nodes, which lowers collection costs and saves a lot of trouble.
With Octopus Collector's cloud collection service, a workload that might take you several days to gather on your own machine can be finished in a short time.
The downside is that, even though it looks very simple and even offers a foolproof "smart" mode, there are pitfalls inside that only heavy users will recognize.
First of all, the loops inside are all driven by XPath element positioning. If you rely on simple click-to-select positioning, the generated XPath tends to be very rigid and easily breaks when collecting pages in large batches. In addition, because the tool is so convenient, it attracts many beginners who ask the same basic questions all day long; they do not know the page structure and do not understand XPath, which easily leads to problems such as incomplete collection or endless page turning.
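For comparison, here is a minimal Python sketch of the same kind of XPath-driven collection loop, written with requests and lxml. The URL, class names, and "next page" link below are assumptions for illustration, not a real site; the point is that a hand-written, relative XPath gives you explicit control over looping and stopping, which click-to-select positioning hides from you.

```python
# A minimal sketch (assumed URL and page structure) of an XPath-driven loop.
import requests
from lxml import html

BASE_URL = "https://example.com/list?page={}"  # hypothetical listing page

def crawl_pages(max_pages=5):
    rows = []
    for page in range(1, max_pages + 1):
        resp = requests.get(BASE_URL.format(page), timeout=10)
        resp.raise_for_status()
        tree = html.fromstring(resp.text)
        # A relative XPath keyed on a class name survives layout changes
        # better than a rigid absolute path like /html/body/div[3]/...
        for item in tree.xpath('//div[@class="item"]'):
            rows.append(item.xpath('string(.//h2)').strip())
        # Stop when there is no "next page" link, instead of turning pages forever.
        if not tree.xpath('//a[@class="next"]/@href'):
            break
    return rows

if __name__ == "__main__":
    print(crawl_pages())
```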
On the other hand, Octopus Collector's built-in features such as AJAX loading, simulating mobile pages, filtering ads, and scrolling to the bottom of the page are genuinely handy and can be enabled with a single checkbox. Writing code for the same behavior is troublesome and laborious.
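As a rough illustration of that effort, here is what "scroll to the bottom to trigger AJAX loading" looks like when written by hand with Selenium. The URL and the fixed wait time are assumptions; a real crawler would also need explicit waits and error handling.

```python
# A rough sketch of hand-rolled infinite-scroll handling with Selenium.
import time
from selenium import webdriver

driver = webdriver.Chrome()  # requires a local ChromeDriver installation
driver.get("https://example.com/infinite-feed")  # hypothetical page

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    # Scroll to the bottom and give the AJAX request time to complete.
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:  # no new content loaded, stop scrolling
        break
    last_height = new_height

print(driver.page_source[:500])
driver.quit()
```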
Octopus is just a tool after all, and in terms of flexibility it is bound to lose to programming; its advantages are convenience, speed, and low cost.
Octopus's conditional (judgment) statements are weak: it cannot express complex conditions or execute complex logic. Also, only the enterprise edition of Octopus can handle CAPTCHAs; the regular edition cannot connect to a captcha-solving platform.
Another point is that it has no OCR capability. The phone numbers collected from 58.com and Ganji.com are all images. With Python, this can be solved by plugging in an open-source image recognition library to read them.
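A minimal sketch of that approach, using the open-source Tesseract engine through pytesseract: the image file name is an assumption, and a local Tesseract installation is required.

```python
# A minimal OCR sketch for a phone-number image (assumed file name).
from PIL import Image
import pytesseract

def read_phone_number(image_path: str) -> str:
    img = Image.open(image_path)
    # Restrict recognition to digits, since the image holds a phone number.
    config = "--psm 7 -c tessedit_char_whitelist=0123456789"
    return pytesseract.image_to_string(img, config=config).strip()

if __name__ == "__main__":
    print(read_phone_number("phone.png"))  # hypothetical downloaded image
```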
In the end, your data collection needs determine which tool to use. For large-scale collection, a hand-written crawler is unavoidable, because code offers a much higher degree of freedom. I think Octopus's goal is not to replace Python, but to be a collector that anyone can use.
Another point is that Python is easy to learn, simple to deploy, and open source and free. Even if you only learn Scrapy, you can already solve quite a few problems. The trouble is that features which take a single checkbox in a tool must be written by yourself or copied from other people's code, and if you are not a full-time crawler developer, you may quickly find yourself going from "getting started" straight to "giving up"...
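To show what "even just learning Scrapy can solve some problems" means in practice, here is a minimal spider sketch. The domain and selectors are assumptions, not a real site; run it with `scrapy runspider spider.py -o items.json` to get structured output without any extra export code.

```python
# A minimal Scrapy spider sketch (assumed domain and selectors).
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com/list"]  # hypothetical listing page

    def parse(self, response):
        # Yield one item per listing entry; adjust the XPath to the real page.
        for item in response.xpath('//div[@class="item"]'):
            yield {"title": item.xpath('string(.//h2)').get()}
        # Follow the "next page" link if there is one.
        next_page = response.xpath('//a[@class="next"]/@href').get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```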