


Developing a Simple Crawler in PHP
Sometimes, for work or for our own needs, we browse different websites to collect the data we want, and that is where a crawler comes in. Below is the process I went through developing a simple crawler, along with the problems I ran into.
To develop a crawler, you first need to know what it will be used for. I want mine to find articles containing specific keywords on different websites and collect their links so I can read them quickly.
Out of personal habit, I start by writing the interface, which helps clarify my ideas.
1. Visit different websites: we need a URL input box.
2. Find articles with specific keywords: we need an article title input box.
3. Get the article links: we need a container to display the search results.
<div class="jumbotron" id="mainJumbotron">
  <div class="panel panel-default">
    <div class="panel-heading">Article URL Crawler</div>
    <div class="panel-body">
      <div class="form-group">
        <label for="article_title">Article Title</label>
        <input type="text" class="form-control" id="article_title" placeholder="Article title">
      </div>
      <div class="form-group">
        <label for="website_url">Website URL</label>
        <input type="text" class="form-control" id="website_url" placeholder="Website URL">
      </div>
      <button type="submit" class="btn btn-default">Crawl</button>
    </div>
  </div>
  <div class="panel panel-default">
    <div class="panel-heading">Article URLs</div>
    <div class="panel-body">
      <h3></h3>
    </div>
  </div>
</div>
Drop in the code above, apply some styling of your own, and the interface is done.
Next comes the functionality, which I implement in PHP. The first step is fetching the website's HTML. There are many ways to get HTML and I won't cover them all; here I use cURL: pass in the site URL and you get the HTML back:
private function get_html($url) {
    $ch = curl_init();
    $timeout = 10;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_ENCODING, 'gzip');
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36');
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $html = curl_exec($ch);
    curl_close($ch); // release the handle once the transfer is done
    return $html;
}
Even with the HTML in hand, you will soon hit a problem: encoding, which can make the matching in the next step fail entirely. Here we uniformly convert the fetched HTML to UTF-8:
$coding = mb_detect_encoding($html);
if ($coding != "UTF-8" || !mb_check_encoding($html, "UTF-8"))
    $html = mb_convert_encoding($html, 'utf-8', 'GBK,UTF-8,ASCII');
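As a reusable sketch, the detection-and-conversion step can be wrapped in a small helper. The function name `normalize_to_utf8` is my own choice for illustration, not from the original code; the behavior follows the snippet above:

```php
<?php
// Hedged sketch: wrap the encoding normalization above in a helper function.
// normalize_to_utf8() is an illustrative name, not from the original code.
function normalize_to_utf8($html) {
    $coding = mb_detect_encoding($html);
    if ($coding != 'UTF-8' || !mb_check_encoding($html, 'UTF-8')) {
        // Try GBK first, since many Chinese sites still serve GBK pages
        $html = mb_convert_encoding($html, 'UTF-8', 'GBK,UTF-8,ASCII');
    }
    return $html;
}

// Example: a GBK-encoded string becomes valid UTF-8 after normalization.
$gbk  = mb_convert_encoding('编码测试', 'GBK', 'UTF-8');
$utf8 = normalize_to_utf8($gbk);
```

Calling this right after `get_html()` keeps the rest of the pipeline working with UTF-8 only.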
With the site's HTML fetched, the next step toward the article URLs is matching every a tag on the page, which calls for a regular expression. After many tests I finally arrived at a fairly reliable one: no matter how complex the structure inside the a tag, as long as it is an a tag it will not be missed (this is the most critical step):
$pattern = '|<a[^>]*>(.*)</a>|isU';
preg_match_all($pattern, $html, $matches);
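To see what this pattern actually captures, here is a quick run against a small hypothetical page (the sample markup is my own, chosen only to illustrate the `$matches` structure described below):

```php
<?php
// Hypothetical sample page, used only to illustrate what preg_match_all returns.
$html = '<ul>'
      . '<li><a href="/post/1" class="title"><b>PHP crawler notes</b></a></li>'
      . '<li><a href="/post/2">Encoding pitfalls</a></li>'
      . '</ul>';

// The ungreedy (U) flag stops each match at the first closing </a>
$pattern = '|<a[^>]*>(.*)</a>|isU';
preg_match_all($pattern, $html, $matches);

// $matches[0] holds the complete a tags,
// $matches[1] holds the inner content at the matching index
$full_tags = $matches[0];
$contents  = $matches[1];
```

Nested tags like the `<b>` above end up inside `$matches[1]`, which is why the inner content may itself need further parsing.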
The matches end up in $matches, roughly a multi-dimensional array like this:
array(2) {
  [0]=> array(*) {
    [0]=> string(*) "the complete a tag"
    .
    .
    .
  }
  [1]=> array(*) {
    [0]=> string(*) "the content inside the a tag at the matching index"
  }
}
Once you have this data, everything else is entirely up to you: traverse the array, find the a tags you want, read whatever attributes you need, and do as you please. Here is a recommended class that makes working with a tags easier:
$dom = new DOMDocument();
@$dom->loadHTML($a); // $a is one of the a tags obtained above
$xpath = new DOMXPath($dom);
$hrefs = $xpath->evaluate('//a');
for ($i = 0; $i < $hrefs->length; $i++) {
    $href = $hrefs->item($i);
    $url = $href->getAttribute('href'); // read the a tag's href attribute
}
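Tying this back to the original goal, finding articles whose titles contain a keyword, the DOM approach can be combined with a multibyte-safe substring test. A minimal sketch follows; the function name `match_links` and the sample markup are my own assumptions, not from the original code:

```php
<?php
// Hedged sketch: filter a tags by a title keyword using DOMDocument/DOMXPath.
// match_links() and the sample HTML are illustrative assumptions.
function match_links($html, $keyword) {
    $dom = new DOMDocument();
    @$dom->loadHTML('<?xml encoding="UTF-8">' . $html); // hint UTF-8 to the parser
    $xpath = new DOMXPath($dom);
    $results = array();
    foreach ($xpath->evaluate('//a') as $a) {
        $title = trim($a->textContent);
        // Case-insensitive, multibyte-safe keyword test
        if ($title !== '' && mb_stripos($title, $keyword) !== false) {
            $results[] = array('title' => $title, 'url' => $a->getAttribute('href'));
        }
    }
    return $results;
}

$html  = '<div><a href="/post/1">PHP crawler notes</a>'
       . '<a href="/post/2">Unrelated article</a></div>';
$links = match_links($html, 'crawler');
```

Each entry in `$links` carries the title and href together, which is convenient for building the JSON response later.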
Of course, this is just one approach. You can also use regular expressions to match the information you want and try new tricks with the data.
Once you have matched the results you want, the next step is of course to send them back to the front end for display. Write the API endpoint, then fetch the data with JS on the front end and use jQuery to build and display the content dynamically:
var website_url = 'your API endpoint address';
$.getJSON(website_url, function(data) {
    if (data) {
        if (data.text == '') {
            $('#article_url').html('<div><p>No link found for this article</p></div>');
            return;
        }
        var string = '';
        var list = data.text;
        for (var j in list) {
            var content = list[j].url_content;
            for (var i in content) {
                if (content[i].title != '') {
                    string += '<div class="item">' +
                        '<em>[<a href="http://' + list[j].website.web_url + '" target="_blank">' + list[j].website.web_name + '</a>]</em>' +
                        '<a href="' + content[i].url + '" target="_blank" class="web_url">' + content[i].title + '</a>' +
                        '</div>';
                }
            }
        }
        $('#article_url').html(string);
    }
});
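For the server side, a sketch of the JSON the front-end loop expects can help. The field names (`text`, `website`, `url_content`, and so on) are inferred from the jQuery code above, not taken from the original back end, and the sample values are made up:

```php
<?php
// Hedged sketch of the endpoint: build the JSON shape the front-end
// loop expects. Field names are inferred from the jQuery code; the
// sample values are illustrative only.
$response = array(
    'text' => array(
        array(
            'website' => array(
                'web_url'  => 'example.com',   // site the links came from
                'web_name' => 'Example Blog',
            ),
            'url_content' => array(
                array('title' => 'PHP crawler notes', 'url' => 'example.com/post/1'),
            ),
        ),
    ),
);

header('Content-Type: application/json; charset=utf-8');
$json = json_encode($response);
echo $json;
```

When no results match, setting `'text' => ''` triggers the "no link found" branch in the front-end code.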
The final rendering: (screenshot omitted)



