
How to write a Golang crawler

王林 (Original)
2023-05-10 11:12:07

Golang is a modern programming language well suited to writing efficient, concurrent web crawlers. Its built-in concurrency support can greatly speed up crawling, and its syntax is concise and easy to learn. This article walks through writing a simple web crawler in Golang.

  1. Installing Golang

First, you need to install Golang. Download the installer for your operating system from the official website (https://golang.org/dl/). After installation, set the environment variables. On Linux and Mac, edit the ~/.bashrc file and add the following at the end:

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

On Windows, edit the system environment variables: create a GOPATH variable and append %GOPATH%\bin to PATH.

  2. Use Go Modules to manage dependencies

In Golang 1.13 and above, Go Modules became the official dependency management tool. We can use it to manage our project dependencies. Enter the project root directory and execute the following command:

go mod init spider

This will create a go.mod file containing the module information for the spider project.
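The generated file is tiny at first; assuming a recent toolchain (the Go version line below is illustrative), it looks roughly like this:

```
module spider

go 1.20
```

Dependencies added later with go get (such as golang.org/x/net/html, used below) will appear here as require lines.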

  3. Build an HTTP client

To write an HTTP client, you can use the net/http package that comes with Golang. This package implements the client side of the HTTP protocol, including building requests and parsing responses.

First, we create a new HTTP client:

func newHTTPClient(timeout time.Duration) *http.Client {
    return &http.Client{
        Timeout: timeout,
    }
}

We can use this client to send an HTTP GET request:

func fetch(url string) (string, error) {
    client := newHTTPClient(time.Second * 5)
    resp, err := client.Get(url)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return "", fmt.Errorf("status code error: %d %s", resp.StatusCode, resp.Status)
    }
    bodyBytes, err := io.ReadAll(resp.Body)
    if err != nil {
        return "", err
    }
    return string(bodyBytes), nil
}

The fetch function will return the requested web page content and any errors. We use the defer keyword to ensure that the response body is closed when the function returns.

  4. Parsing HTML

Once we have obtained the source code of the web page, we need to parse the HTML to extract the information we want. For this we can use the golang.org/x/net/html package, an officially maintained HTML parser (it is not part of the standard library, so install it with go get golang.org/x/net/html).

func parse(htmlContent string) {
    doc, err := html.Parse(strings.NewReader(htmlContent))
    if err != nil {
        log.Fatal(err)
    }
    // Do something with doc...
}

The html.Parse function parses the HTML source into a tree of nodes. We can extract the information we need by recursively traversing this tree.

  5. Using regular expressions

Sometimes, we need to extract specific information from the HTML source code, such as a URL link or a piece of text. In this case we can use regular expressions. Golang has very good support for regular expressions, and we can use the regexp package to implement regular expressions.

For example, to extract the href values of all a tags from the HTML source code, we can use the following code:

func extractLinks(htmlContent string) []string {
    linkRegex := regexp.MustCompile(`href="(.*?)"`)
    matches := linkRegex.FindAllStringSubmatch(htmlContent, -1)
    var links []string
    for _, match := range matches {
        links = append(links, match[1])
    }
    return links
}

The regular expression href="(.*?)" matches each href attribute non-greedily, and the function returns the captured values as a string slice. Note that regex-based extraction is more fragile than a real HTML parser, but it works well for simple pages.
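A quick usage sketch shows both the convenience and a limitation of this approach: the pattern matches href attributes on any tag, not just anchors, so a &lt;link&gt; element is captured too.

```go
package main

import (
	"fmt"
	"regexp"
)

// extractLinks returns every href="..." value found in the input,
// using a non-greedy capture group.
func extractLinks(htmlContent string) []string {
	linkRegex := regexp.MustCompile(`href="(.*?)"`)
	matches := linkRegex.FindAllStringSubmatch(htmlContent, -1)
	var links []string
	for _, match := range matches {
		links = append(links, match[1])
	}
	return links
}

func main() {
	page := `<a href="https://example.com/a">a</a><link href="style.css">`
	// The <link> tag's href is captured as well as the anchor's.
	fmt.Println(extractLinks(page)) // prints [https://example.com/a style.css]
}
```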

  6. Complete code

The following is the complete crawler code, which extracts all a tag links from a web page:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "regexp"
    "strings"
    "time"

    "golang.org/x/net/html"
)

const url = "https://example.com"

func main() {
    htmlContent, err := fetch(url)
    if err != nil {
        log.Fatal(err)
    }
    links := extractLinks(htmlContent)
    for _, link := range links {
        fmt.Println(link)
    }
}

func newHTTPClient(timeout time.Duration) *http.Client {
    return &http.Client{
        Timeout: timeout,
    }
}

func fetch(url string) (string, error) {
    client := newHTTPClient(time.Second * 5)
    resp, err := client.Get(url)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return "", fmt.Errorf("status code error: %d %s", resp.StatusCode, resp.Status)
    }
    bodyBytes, err := io.ReadAll(resp.Body)
    if err != nil {
        return "", err
    }
    return string(bodyBytes), nil
}

func extractLinks(htmlContent string) []string {
    linkRegex := regexp.MustCompile(`href="(.*?)"`)
    matches := linkRegex.FindAllStringSubmatch(htmlContent, -1)
    var links []string
    for _, match := range matches {
        links = append(links, match[1])
    }
    return links
}

func parse(htmlContent string) {
    doc, err := html.Parse(strings.NewReader(htmlContent))
    if err != nil {
        log.Fatal(err)
    }
    // Do something with doc...
}

Summary

Writing web crawlers in Golang can greatly improve crawl speed, and Go's strong typing and tooling help keep crawler code maintainable and extensible. This article has shown how to write a simple crawler in Golang; I hope it helps readers who want to learn about web crawlers as well as developers who use Golang.
