
Write Java Zhihu crawler from scratch to obtain content recommended by Zhihu editors

黄舟 (Original)
2016-12-24 11:18:55 · 1631 views

First, let's spend three to five minutes designing a logo =. = As a programmer, I've always had the heart of an artist!
[Image: the hand-made logo]

Okay, it came out a bit small, so it will have to do for now.

Next, we start making Zhihu’s crawler.

First of all, determine the first goal: editor recommendation.

Webpage link: http://www.zhihu.com/explore/recommendations

We slightly modify the code from last time to fetch the content of the page:

import java.io.*;
import java.net.*;
import java.util.regex.*;

public class Main {
    static String SendGet(String url) {
        // String used to accumulate the page content
        String result = "";
        // Buffered character input stream
        BufferedReader in = null;
        try {
            // Convert the string into a URL object
            URL realUrl = new URL(url);
            // Open a connection to that URL
            URLConnection connection = realUrl.openConnection();
            // Establish the actual connection
            connection.connect();
            // Initialize a BufferedReader to read the URL's response
            in = new BufferedReader(new InputStreamReader(
                    connection.getInputStream()));
            // Temporarily holds each fetched line
            String line;
            while ((line = in.readLine()) != null) {
                // Append every fetched line to result
                result += line;
            }
        } catch (Exception e) {
            System.out.println("Exception while sending the GET request: " + e);
            e.printStackTrace();
        }
        // Use finally to make sure the input stream gets closed
        finally {
            try {
                if (in != null) {
                    in.close();
                }
            } catch (Exception e2) {
                e2.printStackTrace();
            }
        }
        return result;
    }

    static String RegexString(String targetStr, String patternStr) {
        // Compile the pattern template; the parentheses in the regex
        // mark the content we want to capture --
        // like a trap laid in advance: whatever matches falls in
        Pattern pattern = Pattern.compile(patternStr);
        // Matcher that performs the actual matching
        Matcher matcher = pattern.matcher(targetStr);
        // If a match is found
        if (matcher.find()) {
            // Return the captured group
            return matcher.group(1);
        }
        return "Nothing";
    }

    public static void main(String[] args) {
        // The URL we are about to visit
        String url = "http://www.zhihu.com/explore/recommendations";
        // Visit the URL and fetch the page content
        String result = SendGet(url);
        // Use a regex to match the src of images
        //String imgSrc = RegexString(result, "src=\"(.+?)\"");
        // Print the result
        System.out.println(result);
    }
}

If you run it, there should be no problem; what comes next is a regular-expression matching problem.

First, let’s get all the questions on this page.

Right-click the title and inspect the element:

[Screenshot: inspecting the title element in the browser developer tools]

Aha, you can see that the title is actually an a tag, i.e., a hyperlink, and what distinguishes it from other hyperlinks is its class, that is, the class selector.

So our regular expression comes out as: question_link.+?href="(.+?)"
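As a quick sanity check, the pattern can be tried against a minimal snippet first. Note the HTML below is hypothetical (Zhihu's actual markup may differ), so this only demonstrates how the capture group works:

```java
import java.util.regex.*;

public class RegexDemo {
    public static void main(String[] args) {
        // Hypothetical snippet mimicking the question_link anchor tag
        String html = "<a class=\"question_link\" href=\"/question/12345\">Sample question?</a>";
        // The lazy .+? skips ahead to href=, and the parentheses capture its value
        Matcher m = Pattern.compile("question_link.+?href=\"(.+?)\"").matcher(html);
        if (m.find()) {
            System.out.println(m.group(1)); // prints /question/12345
        }
    }
}
```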

Call the RegexString function and pass it parameters:

public static void main(String[] args) {
    // The URL we are about to visit
    String url = "http://www.zhihu.com/explore/recommendations";
    // Visit the URL and fetch the page content
    String result = SendGet(url);
    // Use the regex to capture a question title
    String imgSrc = RegexString(result, "question_link.+?>(.+?)<");
    // Print the result
    System.out.println(imgSrc);
}

Aha, you can see that we successfully caught one title (note: just one):
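A side note before moving on: RegexString returns only the first hit, because it calls matcher.find() once. Collecting every title on the page takes a loop; a minimal sketch (the HTML in main is again a made-up stand-in for Zhihu's markup):

```java
import java.util.*;
import java.util.regex.*;

public class MatchAll {
    static List<String> regexAll(String target, String patternStr) {
        List<String> results = new ArrayList<>();
        Matcher matcher = Pattern.compile(patternStr).matcher(target);
        // find() resumes after the previous match, so looping collects them all
        while (matcher.find()) {
            results.add(matcher.group(1));
        }
        return results;
    }

    public static void main(String[] args) {
        String html = "<a class=\"question_link\">First?</a>"
                    + "<a class=\"question_link\">Second?</a>";
        System.out.println(regexAll(html, "question_link.+?>(.+?)<"));
        // prints [First?, Second?]
    }
}
```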
[Screenshot: console output showing a garbled title]

Wait a minute, what is this mess?!

Don't panic =. = It's just mojibake: text garbled by a wrong character encoding.

For encoding issues, please refer to: HTML character set

Generally speaking, the mainstream encodings with good support for Chinese are UTF-8, GB2312, and GBK.
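The garbling happens because our InputStreamReader decodes the response with the platform default charset. A sketch of reading the stream with an explicit charset instead (UTF-8 here, on the assumption that is what the server sends; check the response's Content-Type header to be sure):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class Utf8Reader {
    // Read an entire stream, decoding bytes as UTF-8 rather than
    // the platform default charset
    static String readAll(InputStream stream) throws IOException {
        StringBuilder result = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(stream, StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                result.append(line);
            }
        }
        return result.toString();
    }

    public static void main(String[] args) throws IOException {
        // Simulate a UTF-8 response body containing Chinese text
        byte[] body = "知乎".getBytes(StandardCharsets.UTF_8);
        System.out.println(readAll(new ByteArrayInputStream(body)));
    }
}
```

In SendGet, the same fix is one line: pass the charset as the second argument to the InputStreamReader constructor.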





