
How to use the concurrent function in Go language to crawl multiple web pages in parallel?

WBOY (Original) · 2023-07-29 19:13:12 · 1233 views


In modern web development, we often need to fetch data from many web pages. The straightforward approach is to issue the network requests one at a time and wait for each response, which is inefficient. Go's concurrency features let us fetch multiple pages in parallel and greatly improve throughput. This article shows how to use Go's concurrency features to fetch multiple web pages in parallel, along with some points to keep in mind.

First, we create concurrent tasks with Go's built-in go keyword. Prefixing a function call with go runs that call in a new goroutine and immediately returns control to the caller, which continues executing the following statements. This is what allows us to fetch multiple web pages in parallel.

The following is a simple sample code:

package main

import (
    "fmt"
    "io"
    "net/http"
)

// fetch downloads a single web page and sends the result on ch
func fetch(url string, ch chan<- string) {
    resp, err := http.Get(url)
    if err != nil {
        ch <- fmt.Sprintf("fetch %s failed: %v", url, err)
        return
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        ch <- fmt.Sprintf("read %s failed: %v", url, err)
        return
    }

    ch <- fmt.Sprintf("fetch %s success: %d bytes", url, len(body))
}

func main() {
    urls := []string{"http://www.example.com", "http://www.google.com", "http://www.microsoft.com"}

    ch := make(chan string)

    for _, url := range urls {
        go fetch(url, ch)
    }

    for range urls {
        fmt.Println(<-ch)
    }
}

In the code above, the fetch function fetches a single web page: it issues a request with http.Get, reads the response body, and sends a result message on the string channel ch. In main, we create the channel ch and a slice urls containing the URLs to fetch. We then loop over urls and call fetch for each URL with the go keyword, so each call runs in its own goroutine and the requests proceed concurrently.

Finally, we loop over urls a second time, receiving one result from the channel ch for each URL and printing it as it arrives. Because a channel receive blocks until a value is available, main does not finish until every goroutine has sent its result.

Note that goroutines are scheduled in no particular order, so the order of the output is also nondeterministic. If you need the results in the same order as the input URLs, you can have each goroutine store its result in a slice slot reserved for its URL and use sync.WaitGroup to wait for all goroutines to finish before reading the slice in order, as sketched below.
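Here is a minimal sketch of that ordered-results approach, reusing the URLs from the sample above; the results slice and the wg wait group are names introduced here for illustration:

package main

import (
    "fmt"
    "io"
    "net/http"
    "sync"
)

func main() {
    urls := []string{"http://www.example.com", "http://www.google.com", "http://www.microsoft.com"}

    results := make([]string, len(urls)) // one slot per URL, so the original order is preserved
    var wg sync.WaitGroup

    for i, url := range urls {
        wg.Add(1)
        go func(i int, url string) {
            defer wg.Done()
            resp, err := http.Get(url)
            if err != nil {
                results[i] = fmt.Sprintf("fetch %s failed: %v", url, err)
                return
            }
            defer resp.Body.Close()
            body, err := io.ReadAll(resp.Body)
            if err != nil {
                results[i] = fmt.Sprintf("read %s failed: %v", url, err)
                return
            }
            results[i] = fmt.Sprintf("fetch %s success: %d bytes", url, len(body))
        }(i, url)
    }

    wg.Wait() // block until every goroutine has stored its result

    for _, result := range results {
        fmt.Println(result) // printed in the same order as urls
    }
}

Each goroutine writes only to its own index of results, so the slice needs no extra locking; wg.Wait guarantees all writes have happened before we read.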

In addition, fetching many pages concurrently puts extra load on the target websites. To avoid being blocked or degrading their service, limit the number of concurrent requests and, if necessary, add a pause between requests, as sketched below.
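One common way to cap concurrency is a buffered channel used as a counting semaphore. The sketch below is a drop-in replacement for the main function of the sample above (it reuses the fetch function and needs "time" added to the imports); the maxConcurrent limit and the 500 ms pause are illustrative values you would tune for the target site:

func main() {
    urls := []string{"http://www.example.com", "http://www.google.com", "http://www.microsoft.com"}

    const maxConcurrent = 2                   // assumed limit; tune for the target site
    sem := make(chan struct{}, maxConcurrent) // buffered channel acting as a counting semaphore
    ch := make(chan string)

    for _, url := range urls {
        go func(url string) {
            sem <- struct{}{}        // acquire a slot; blocks while maxConcurrent fetches are in flight
            defer func() { <-sem }() // release the slot when this fetch is done
            fetch(url, ch)                     // fetch is the function from the sample above
            time.Sleep(500 * time.Millisecond) // crawl interval: space out requests per slot
        }(url)
    }

    for range urls {
        fmt.Println(<-ch)
    }
}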

In short, Go's concurrency features make it easy to fetch multiple web pages in parallel. This improves crawling efficiency and makes large-scale data collection more manageable, and the goroutine-and-channel structure keeps the program easy to scale.


