In the last episode, we talked about why we want to build a Zhihu crawler in Java; this time, let's study how to use code to fetch the content of a web page.
First of all, if you have no experience with HTML, CSS, JS, or AJAX, it is recommended to head over to W3C and pick up the basics first.
Speaking of HTTP, there is the matter of GET requests versus POST requests.
If you are hazy on the difference, you can read the W3C article "GET vs. POST".
Aha, I won’t go into details here.
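Since the crawler below sticks to GET, here is a minimal side-by-side sketch of the two request styles in Java. It uses the standard HttpURLConnection; the URL and the q=java parameter are placeholders of my own, not anything from a real project:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class GetVsPost {
    public static void main(String[] args) throws Exception {
        // GET: the parameters travel inside the URL itself
        URL getUrl = new URL("http://example.com/search?q=java");
        HttpURLConnection get = (HttpURLConnection) getUrl.openConnection();
        get.setRequestMethod("GET");
        System.out.println("GET status: " + get.getResponseCode());

        // POST: the parameters travel in the request body, not the URL
        URL postUrl = new URL("http://example.com/search");
        HttpURLConnection post = (HttpURLConnection) postUrl.openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true); // we intend to write a request body
        try (OutputStream out = post.getOutputStream()) {
            out.write("q=java".getBytes("UTF-8"));
        }
        System.out.println("POST status: " + post.getResponseCode());
    }
}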
Next, we need to use Java to crawl the content of a web page.
This is where our old friend Baidu comes in handy.
Yes, it is no longer just that page everyone loads to check whether the internet is working; it is about to become our crawler's guinea pig! ~
Let’s take a look at Baidu’s homepage first:
I believe everyone knows that a page like this one is the result of HTML and CSS working together.
We right-click the page in the browser and select "View page source code":
Yes, it's something like this. This is the source code of the Baidu page.
Our next task is to use our crawler to get the same thing.
Let’s take a look at a simple source code first:
import java.io.*;
import java.net.*;

public class Main {
    public static void main(String[] args) {
        // Define the link we are about to visit
        String url = "http://www.baidu.com";
        // Define a string to store the page content
        String result = "";
        // Define a buffered character input stream
        BufferedReader in = null;
        try {
            // Convert the string into a URL object
            URL realUrl = new URL(url);
            // Open a connection to that URL
            URLConnection connection = realUrl.openConnection();
            // Establish the actual connection
            connection.connect();
            // Initialize a BufferedReader to read the URL's response
            in = new BufferedReader(new InputStreamReader(
                    connection.getInputStream()));
            // Temporarily stores each fetched line
            String line;
            while ((line = in.readLine()) != null) {
                // Append each fetched line to result
                result += line;
            }
        } catch (Exception e) {
            System.out.println("Exception while sending the GET request! " + e);
            e.printStackTrace();
        } finally {
            // Use finally to close the input stream
            try {
                if (in != null) {
                    in.close();
                }
            } catch (Exception e2) {
                e2.printStackTrace();
            }
        }
        System.out.println(result);
    }
}
The above is the main method of a Java program simulating a GET request to Baidu.
You can run it and see the result:
Aha, it is exactly the same as what we saw with the browser before. At this point, the simplest crawler is ready.
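One small caveat before we move on: the InputStreamReader above reads with the platform's default charset. If your console prints garbled Chinese, passing the page's charset explicitly, for example new InputStreamReader(connection.getInputStream(), "UTF-8"), usually fixes it (check the response headers for whether the site actually serves UTF-8 or GBK). Likewise, appending with a StringBuilder instead of result += line is kinder to memory on large pages; I keep the simple form here for readability.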
But such a big pile of things may not all be what I want. How can I grab what I want from it?
Take the bear paw in Baidu's logo as an example.
A quick requirement:
Get the image link of the bear-paw Baidu logo.
First let’s talk about the browser viewing method.
Right-click the image and select "Inspect Element" (Firefox, Chrome, and IE11 all have this feature, though the names differ):
Aha, you can see the poor img tag under siege from a big pile of divs.
This src is the link to the image.
So how do we do it in Java?
One note in advance: to make the demonstration easier to follow, none of the code is encapsulated in separate classes; please bear with me.
Let’s first encapsulate the previous code into a sendGet function:
import java.io.*;
import java.net.*;

public class Main {
    static String sendGet(String url) {
        // Define a string to store the page content
        String result = "";
        // Define a buffered character input stream
        BufferedReader in = null;
        try {
            // Convert the string into a URL object
            URL realUrl = new URL(url);
            // Open a connection to that URL
            URLConnection connection = realUrl.openConnection();
            // Establish the actual connection
            connection.connect();
            // Initialize a BufferedReader to read the URL's response
            in = new BufferedReader(new InputStreamReader(
                    connection.getInputStream()));
            // Temporarily stores each fetched line
            String line;
            while ((line = in.readLine()) != null) {
                // Append each fetched line to result
                result += line;
            }
        } catch (Exception e) {
            System.out.println("Exception while sending the GET request! " + e);
            e.printStackTrace();
        } finally {
            // Use finally to close the input stream
            try {
                if (in != null) {
                    in.close();
                }
            } catch (Exception e2) {
                e2.printStackTrace();
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Define the link we are about to visit
        String url = "http://www.baidu.com";
        // Visit the link and fetch the page content
        String result = sendGet(url);
        System.out.println(result);
    }
}
This looks a little neater, please forgive me for my obsessive-compulsive disorder.
The next task is to find the link to the picture from a lot of things obtained.
The first method we can think of is to use the indexOf function to search the string result holding the page source for substrings.
Yes, this method can solve the problem, step by painful step: call indexOf("src") to find the start index, then grind out the end index the same way.
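A minimal sketch of that idea might look like the following. The helper name extractSrc is my own, and it naively assumes the attribute uses double quotes and that the first src found is the one we want:

public class IndexOfDemo {
    // Naively pull out the value of the first src="..." in the page source
    static String extractSrc(String html) {
        int start = html.indexOf("src=\"");
        if (start == -1) {
            return null; // no src attribute found at all
        }
        start += "src=\"".length();          // skip past the src=" prefix
        int end = html.indexOf("\"", start); // find the closing quote
        if (end == -1) {
            return null;
        }
        return html.substring(start, end);
    }

    public static void main(String[] args) {
        // A made-up fragment of page source, purely for demonstration
        String html = "<div><img src=\"http://www.baidu.com/img/logo.png\"></div>";
        System.out.println(extractSrc(html)); // prints the link
    }
}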
But we can't use this method forever. After all, straw sandals are only good for strolling around the neighborhood; sooner or later we will want to switch to proper footwear.
Forgive the digression; let's carry on.
So how do we find the src of this picture?
Yes, as the audience shouted from below: regular expression matching.
If any students are not sure about regular expressions, you can refer to this article: [Python] Web Crawler (7): Regular Expressions Tutorial in Python.
Simply put, a regular expression works like a matching rule.
For example, three fat men are standing here, wearing red clothes, blue clothes, and green clothes.
The rule is: catch the one in green!
Then only the fat man in green gets caught.
It’s that simple.
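To make the analogy concrete, here is a toy run with Java's java.util.regex; the three fat men are just strings I made up:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CatchTheGreenOne {
    public static void main(String[] args) {
        String crowd = "fatty in red, fatty in blue, fatty in green";
        // The rule: catch the one in green!
        Matcher m = Pattern.compile("fatty in green").matcher(crowd);
        if (m.find()) {
            System.out.println("Caught: " + m.group()); // Caught: fatty in green
        }
    }
}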
However, regular expression syntax is extensive and profound, and it is inevitable that you will feel a bit dazed on first contact.
I also recommend an online regex testing tool to everyone: regular expression online test.
With regular expressions as our magic weapon, how do we actually use them in Java?
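That question is what the next installment digs into, but as a teaser, here is a minimal sketch with java.util.regex. The pattern src="(.+?)" and the stand-in page fragment are my own assumptions; in the real crawler the string would come from sendGet(url):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexSketch {
    public static void main(String[] args) {
        // Stand-in for the page source that sendGet(url) would return
        String result = "<div><img src=\"http://www.baidu.com/img/logo.png\"></div>";
        // Match src="..." and capture everything between the quotes
        Pattern pattern = Pattern.compile("src=\"(.+?)\"");
        Matcher matcher = pattern.matcher(result);
        if (matcher.find()) {
            // group(1) is the captured part inside the parentheses: the link
            System.out.println(matcher.group(1));
        } else {
            System.out.println("No match found!");
        }
    }
}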
That wraps up this installment of writing a Java Zhihu crawler from zero: a first practice run against the Baidu homepage.