golang concurrent requests
In modern web applications, network requests are a central component: they let us fetch and send data easily. As an application grows, however, the number of requests grows with it, and keeping the system stable and efficient becomes especially important.
Go is an efficient concurrent programming language with good memory management and concurrency control, which makes it well suited to handling highly concurrent requests. This article introduces how to handle concurrent requests in Go.
Generally speaking, a network request involves three steps: establishing a connection, sending the request, and receiving the response. In a traditional application, requests run through these steps one after another. Under high concurrency this approach is inefficient, because each request must wait for the previous one to finish before it can start.
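For comparison, here is a minimal sketch of that sequential approach, assuming the same example URLs used later in this article; each http.Get must finish before the next one starts.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	urls := []string{"http://example.com", "http://example.net", "http://example.org"}
	for _, url := range urls {
		resp, err := http.Get(url) // the next URL is not fetched until this one finishes
		if err != nil {
			fmt.Println(err)
			continue
		}
		body, err := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("url:%s, %d bytes\n", url, len(body))
	}
}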
Go's concurrency features offer a different approach: we can execute multiple requests at the same time, so the application handles them in parallel rather than one by one.
Here is a simple code sample:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	urls := []string{
		"http://example.com",
		"http://example.net",
		"http://example.org",
	}
	ch := make(chan string)
	for _, url := range urls {
		go fetch(url, ch) // start one goroutine per request
	}
	for range urls {
		fmt.Println(<-ch) // receive one result per request
	}
}

func fetch(url string, ch chan<- string) {
	resp, err := http.Get(url)
	if err != nil {
		ch <- fmt.Sprint(err)
		return
	}
	defer resp.Body.Close()
	text, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		ch <- fmt.Sprint(err)
		return
	}
	if len(text) > 100 {
		text = text[:100] // avoid slicing past the end of a short body
	}
	ch <- fmt.Sprintf("url:%s, body:%s", url, text)
}
In this example, we define a slice containing several URLs. We then create an unbuffered channel and use the go keyword to start one goroutine per request, so the requests run concurrently. Each goroutine performs the same steps as a single request and sends its result back to the main goroutine over the channel. Finally, a simple for range loop over the URLs waits for all requests to complete and prints the results.
In the example above, we used goroutines to process multiple requests concurrently. However, launching an unbounded number of goroutines can overwhelm the system with too many simultaneous requests. To avoid this, we need to control the amount of concurrency.
In Go, the WaitGroup type in the sync package helps with this. A WaitGroup keeps a counter of outstanding tasks: we increment it as we start goroutines and then wait for the counter to return to zero before continuing. Here is a simple code sample:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"sync"
)

func main() {
	urls := []string{
		"http://example.com",
		"http://example.net",
		"http://example.org",
	}

	var wg sync.WaitGroup
	for _, url := range urls {
		wg.Add(1) // one outstanding task per URL
		go func(url string) {
			defer wg.Done()
			resp, err := http.Get(url)
			if err != nil {
				fmt.Println(err)
				return
			}
			defer resp.Body.Close()
			body, err := ioutil.ReadAll(resp.Body)
			if err != nil {
				fmt.Println(err)
				return
			}
			if len(body) > 20 {
				body = body[:20] // avoid slicing past the end of a short body
			}
			fmt.Printf("url:%s, body:%s\n", url, body)
		}(url)
	}
	wg.Wait() // block until every goroutine has called Done
}
In this example, we first declare a WaitGroup variable. In the loop, we call Add to increment the counter before starting each goroutine, and each goroutine calls Done when it finishes. Finally, Wait blocks until every goroutine has completed, and execution then continues.
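Note that a WaitGroup by itself only waits for goroutines to finish; it does not cap how many run at once. A common pattern is to pair it with a buffered channel used as a semaphore. The following is a minimal sketch of that idea, assuming we want at most two requests in flight at a time.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"sync"
)

func main() {
	urls := []string{"http://example.com", "http://example.net", "http://example.org"}

	var wg sync.WaitGroup
	sem := make(chan struct{}, 2) // buffered channel as a semaphore: at most 2 requests in flight

	for _, url := range urls {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot; blocks while 2 requests are already running
			defer func() { <-sem }() // release the slot when done

			resp, err := http.Get(url)
			if err != nil {
				fmt.Println(err)
				return
			}
			defer resp.Body.Close()
			body, err := ioutil.ReadAll(resp.Body)
			if err != nil {
				fmt.Println(err)
				return
			}
			fmt.Printf("url:%s, %d bytes\n", url, len(body))
		}(url)
	}
	wg.Wait()
}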
When processing multiple requests, we not only need to control the amount of concurrency but also need to handle the results. Typically, we collect the results of all requests into a slice or another data structure and process them once every request has completed.
In Go, the Mutex type in the sync package can coordinate access to such a shared data structure. A Mutex prevents multiple goroutines from modifying a shared resource at the same time, ensuring that only one goroutine touches it at any moment. Alternatively, results can be funneled through a channel so that only the main goroutine updates the data structure, which is the approach taken in the example below.
The following is a code sample:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// Result holds the outcome of a single request.
type Result struct {
	url  string
	body []byte
	err  error
}

func fetch(url string, ch chan<- Result) {
	resp, err := http.Get(url)
	if err != nil {
		ch <- Result{url: url, err: err}
		return
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		ch <- Result{url: url, err: err}
		return
	}
	ch <- Result{url: url, body: body}
}

func main() {
	urls := []string{
		"http://example.com",
		"http://example.net",
		"http://example.org",
	}

	var results []Result
	ch := make(chan Result)
	for _, url := range urls {
		go fetch(url, ch)
	}
	for range urls {
		results = append(results, <-ch) // only the main goroutine appends, so no lock is needed
	}

	for _, result := range results {
		if result.err != nil {
			fmt.Println(result.err)
			continue
		}
		body := result.body
		if len(body) > 20 {
			body = body[:20] // avoid slicing past the end of a short body
		}
		fmt.Printf("url:%s, body:%s\n", result.url, body)
	}
}
In this example, we define a Result struct to hold the return value of each request. We then create an unbuffered channel and start one goroutine per URL to execute the requests concurrently. Each goroutine sends its Result back on the channel, and only the main goroutine appends to the results slice, so the channel itself serializes access and no Mutex is required here. We loop once per URL to wait for all requests to complete and collect the results, then iterate over the results slice and print the outcome of each request.
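If, instead, every goroutine appended directly to a shared slice, a sync.Mutex would be needed to guard it. Below is a minimal sketch of that variant, reusing the same example URLs; the locking around append is what prevents concurrent writes from corrupting the slice.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"sync"
)

func main() {
	urls := []string{"http://example.com", "http://example.net", "http://example.org"}

	var (
		mu      sync.Mutex
		results []string
		wg      sync.WaitGroup
	)

	for _, url := range urls {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			resp, err := http.Get(url)
			if err != nil {
				mu.Lock()
				results = append(results, fmt.Sprint(err))
				mu.Unlock()
				return
			}
			defer resp.Body.Close()
			body, _ := ioutil.ReadAll(resp.Body) // error ignored for brevity in this sketch

			mu.Lock() // only one goroutine may append to the shared slice at a time
			results = append(results, fmt.Sprintf("url:%s, %d bytes", url, len(body)))
			mu.Unlock()
		}(url)
	}
	wg.Wait()

	for _, r := range results {
		fmt.Println(r)
	}
}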
Summary
Handling concurrent requests in Go can greatly improve the efficiency and reliability of a system. When an application needs to handle many requests, we should use goroutines and channels to execute them concurrently, use a WaitGroup (optionally combined with a semaphore) to wait for and limit concurrent work, and use a Mutex to protect shared resources. In this way, we can handle large numbers of requests simply and efficiently, improving application performance and stability.