How to solve the problem that php curl cannot crawl data
With the rapid development of the Internet, crawler technology has matured considerably. For PHP developers, php curl is a classic crawling tool. However, developers sometimes find that php curl fails to fetch any data. This article introduces common causes of this problem and their solutions.
1. No header information added
Almost all websites inspect incoming HTTP requests. If key header fields such as User-Agent are missing, the server is likely to deny access. The solution is to set the headers in php curl with the curl_setopt function, as follows:
$header = array(
    'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
);
curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
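For context, here is a minimal end-to-end sketch that sends such headers with a full request; the URL is a placeholder, not from the original article:

```php
<?php
// Minimal sketch: fetch a page while sending a browser-like User-Agent.
// https://example.com is a placeholder URL.
$ch = curl_init('https://example.com');
$header = array(
    'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3',
);
curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
$html = curl_exec($ch);
if ($html === false) {
    echo 'curl error: ' . curl_error($ch) . PHP_EOL;
}
curl_close($ch);
```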
2. Unhandled redirects
When fetching pages with php curl, some websites respond with an HTTP redirect, and by default curl does not follow it, so the request stops at the redirect response. The solution is to enable the CURLOPT_FOLLOWLOCATION option, as follows:
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
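When following redirects, it is also prudent to cap the redirect chain and to check which URL was ultimately fetched. A sketch under those assumptions (the URL is a placeholder):

```php
<?php
$ch = curl_init('https://example.com'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow 3xx redirects
curl_setopt($ch, CURLOPT_MAXREDIRS, 5);         // guard against redirect loops
$html = curl_exec($ch);
// The URL actually fetched after all redirects were followed:
$finalUrl = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
curl_close($ch);
```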
3. Unprocessed cookies
Many websites use cookies to track user sessions. If cookies are not handled, the fetched content may be incomplete or wrong (for example, a login page instead of the data). The solution is to use the curl_setopt function to set the CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR options, as follows:
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie);
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie);
Here, $cookie is the path of a file used to store cookies between requests: CURLOPT_COOKIEFILE reads cookies from it, and CURLOPT_COOKIEJAR writes received cookies back when the handle is closed.
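The point of using one file for both options is that cookies set by an earlier request are replayed on later ones. A sketch of that pattern (both URLs are placeholders):

```php
<?php
// Sketch: reuse one cookie file across requests, e.g. a login followed by a data fetch.
$cookie = sys_get_temp_dir() . '/crawler_cookies.txt';

function fetchWithCookies($url, $cookie) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie); // send stored cookies
    curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie);  // save received cookies on close
    $body = curl_exec($ch);
    curl_close($ch);
    return $body;
}

// The first request receives session cookies; the second sends them back automatically.
$loginPage = fetchWithCookies('https://example.com/login', $cookie);
$dataPage  = fetchWithCookies('https://example.com/data', $cookie);
```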
4. No timeout is set
When crawling a web page, a slow or unresponsive server can leave php curl waiting indefinitely. To avoid this, use the curl_setopt function to set the CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT options, as follows:
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
Here, CURLOPT_TIMEOUT is the timeout for the entire request, in seconds, while CURLOPT_CONNECTTIMEOUT is the timeout for establishing the connection to the server, also in seconds.
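With timeouts in place, a timed-out request shows up as a curl error that can be detected and handled. A sketch of that check (the URL is a placeholder; CURLE_OPERATION_TIMEOUTED is curl error code 28):

```php
<?php
$ch = curl_init('https://example.com'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // connect phase: 10 seconds
curl_setopt($ch, CURLOPT_TIMEOUT, 30);        // whole request: 30 seconds
$html = curl_exec($ch);
if ($html === false && curl_errno($ch) === CURLE_OPERATION_TIMEOUTED) {
    // The server was too slow; log the error, then retry or skip this URL.
    echo 'request timed out: ' . curl_error($ch) . PHP_EOL;
}
curl_close($ch);
```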
5. Not using the correct proxy
To deter crawlers, some websites limit or block repeated requests from the same IP. The solution is to route requests through a proxy, using the curl_setopt function to set the CURLOPT_PROXY and CURLOPT_PROXYPORT options, as follows:
curl_setopt($ch, CURLOPT_PROXY, 'proxy-server-address');
curl_setopt($ch, CURLOPT_PROXYPORT, 8080); // proxy server port, as an integer
6. SSL certificate verification fails
Some websites serve data only over HTTPS. If curl cannot verify the site's SSL certificate (for example, because no CA bundle is configured), the request fails and php curl returns no data. A quick workaround is to disable certificate verification with the CURLOPT_SSL_VERIFYPEER and CURLOPT_SSL_VERIFYHOST options (note that this is insecure; pointing CURLOPT_CAINFO at a valid CA bundle is the safer fix), as follows:
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
Here, CURLOPT_SSL_VERIFYPEER controls whether the peer's certificate is verified (false disables the check), and CURLOPT_SSL_VERIFYHOST controls whether the host name in the URL is checked against the name in the certificate (0 disables the check; the default value 2 enforces it).
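Putting the six fixes above together, a complete fetch helper might look like the following sketch. The URL, cookie path, and commented-out proxy settings are placeholders, and certificate verification is left enabled, which is the safer default:

```php
<?php
// Combined sketch of the options discussed in this article.
function crawl($url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => array(
            'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3',
        ),
        CURLOPT_FOLLOWLOCATION => true, // follow redirects
        CURLOPT_MAXREDIRS      => 5,
        CURLOPT_COOKIEFILE     => sys_get_temp_dir() . '/cookies.txt', // placeholder path
        CURLOPT_COOKIEJAR      => sys_get_temp_dir() . '/cookies.txt',
        CURLOPT_CONNECTTIMEOUT => 10,
        CURLOPT_TIMEOUT        => 30,
        // CURLOPT_PROXY       => '127.0.0.1', // uncomment if a proxy is needed
        // CURLOPT_PROXYPORT   => 8080,
    ));
    $body = curl_exec($ch);
    $error = ($body === false) ? curl_error($ch) : null;
    curl_close($ch);
    return array($body, $error); // body or false, plus an error message if any
}
```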
These are the most common reasons why php curl cannot capture data, along with their solutions. When a crawl fails, troubleshoot step by step, combining these techniques as needed. With practice, you can master php curl and complete your crawler development tasks successfully.
The above is the detailed content of how to solve the problem that php curl cannot crawl data. For more information, please follow other related articles on the PHP Chinese website!