
The solution to crawling garbled web pages using curl and file_get_contents

巴扎黑 | Original | 2016-11-09 11:23:40

When I used the curl_init function to crawl Sohu's web pages today, the collected pages came back garbled. After some analysis I found that the server has gzip compression enabled. Adding the CURLOPT_ENCODING option via curl_setopt tells curl to decompress the gzip response, after which the page decodes correctly.


Also, if the captured web page is encoded in GBK but the script itself is encoded in UTF-8, the fetched content must be converted with the mb_convert_encoding function.

<?php
    $tmp = sys_get_temp_dir();
    $cookieDump = tempnam($tmp, &#39;cookies&#39;);
    $url = &#39;http://tv.sohu.com&#39;;
    $ch = curl_init();
    curl_setopt ($ch, CURLOPT_URL, $url);
    curl_setopt ($ch, CURLOPT_HEADER, 1);// 显示返回的Header区域内容
    curl_setopt ($ch, CURLOPT_FOLLOWLOCATION, 1); // 使用自动跳转
    curl_setopt ($ch, CURLOPT_TIMEOUT, 10);// 设置超时限制
    curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1); // 获取的信息以文件流的形式返回
    curl_setopt ($ch, CURLOPT_CONNECTTIMEOUT,10);// 链接超时限制
    curl_setopt ($ch, CURLOPT_HTTPHEADER,array(&#39;Accept-Encoding: gzip, deflate&#39;));//设置 http 头信息
    curl_setopt ($ch, CURLOPT_ENCODING, &#39;gzip,deflate&#39;);//添加 gzip 解码的选项,即使网页没启用 gzip 也没关系
    curl_setopt ($ch, CURLOPT_COOKIEJAR, $cookieDump);  // 存放Cookie信息的文件名称
    $content = curl_exec($ch);
    // 把抓取的网页由 GBK 转换成 UTF-8 
    $content = mb_convert_encoding($content,"UTF-8","GBK");
?>
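
The title also mentions file_get_contents, so here is a minimal sketch of that route. It is not from the original article: it assumes the server honors the Accept-Encoding header, and it decodes the body by hand with gzdecode (PHP 5.4+) after checking the response headers that PHP exposes via $http_response_header.

<?php
    // Sketch: fetch with file_get_contents, request gzip, decode manually.
    $url = 'http://tv.sohu.com';
    $context = stream_context_create(array(
        'http' => array(
            'method'  => 'GET',
            'header'  => "Accept-Encoding: gzip\r\n", // ask for compressed content
            'timeout' => 10,                          // timeout in seconds
        ),
    ));
    $content = file_get_contents($url, false, $context);

    // $http_response_header is populated by file_get_contents with the
    // response headers; decode only if the server actually sent gzip.
    foreach ($http_response_header as $header) {
        if (stripos($header, 'Content-Encoding: gzip') !== false) {
            $content = gzdecode($content);
            break;
        }
    }

    // Same GBK to UTF-8 conversion as in the curl version
    $content = mb_convert_encoding($content, "UTF-8", "GBK");
?>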
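If you crawl pages whose encoding you do not know in advance, hardcoding GBK is fragile. A hedged sketch, going beyond the original article: read the charset declared in the HTML itself, and only fall back to the mb_detect_encoding heuristic when no declaration is found. The helper name to_utf8 is hypothetical.

<?php
    // Sketch: convert a fetched HTML page to UTF-8 without hardcoding GBK.
    function to_utf8($html)
    {
        // Prefer the charset declared in the page's own meta tag or header echo
        if (preg_match('/charset=["\']?([\w-]+)/i', $html, $m)) {
            $from = strtoupper($m[1]);
        } else {
            // Fall back to a heuristic guess among common encodings
            $from = mb_detect_encoding($html, array('UTF-8', 'GBK', 'BIG-5'), true);
        }
        if ($from && $from !== 'UTF-8') {
            $html = mb_convert_encoding($html, 'UTF-8', $from);
        }
        return $html;
    }
?>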

