While experimenting with Go's performance, you may run into limits when trying to execute a large number of HTTP requests concurrently. This article looks at the challenges involved and presents a solution for achieving maximum concurrency.
Your initial approach was to launch a huge number of goroutines, each sending an HTTP request in parallel, expecting them to make use of every available CPU. Instead, you hit errors caused by the operating system's file descriptor limit: every in-flight connection holds an open descriptor, and with one goroutine per request nothing caps how many are open at once.
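For reference, here is a minimal sketch of that naive pattern (the URL and request count are placeholders), showing why the descriptor limit is exhausted:

package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	// One goroutine per request: every in-flight call holds an open
	// connection (and a file descriptor), so nothing bounds resource use.
	for i := 0; i < 1000000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Get("http://localhost/")
			if err != nil {
				// Typically "too many open files" once the limit is hit.
				fmt.Println(err)
				return
			}
			resp.Body.Close()
		}()
	}
	wg.Wait()
}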
To overcome these limits, bound the number of in-flight requests instead of spawning one goroutine per request: have a dispatcher feed requests into a channel, process them with a fixed-size pool of worker goroutines, and drain the responses in a separate consumer so each response body is closed promptly.
Here is a revised version of the code that applies these optimizations:
package main

import (
	"fmt"
	"net/http"
	"runtime"
	"sync"
	"time"
)

var (
	reqs       int
	concurrent int
	work       chan *http.Request
	results    chan *http.Response
)

func init() {
	reqs = 1000000
	concurrent = 200
}

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())
	work = make(chan *http.Request, concurrent)
	results = make(chan *http.Response)
	start := time.Now()

	// Dispatcher: populate the work channel, then close it so the
	// workers know that no more requests are coming.
	go func() {
		for i := 0; i < reqs; i++ {
			req, err := http.NewRequest("GET", "http://localhost/", nil)
			if err != nil {
				fmt.Println(err)
				continue
			}
			work <- req
		}
		close(work)
	}()

	// Worker pool: the number of goroutines is itself the concurrency
	// limit, so no separate semaphore is needed.
	var workers sync.WaitGroup
	for i := 0; i < concurrent; i++ {
		workers.Add(1)
		go func() {
			defer workers.Done()
			for req := range work {
				resp, err := http.DefaultClient.Do(req)
				if err != nil {
					fmt.Println(err)
					continue
				}
				results <- resp
			}
		}()
	}

	// Close the results channel once every worker has finished,
	// so the consumer below can terminate.
	go func() {
		workers.Wait()
		close(results)
	}()

	// Consumer: drain responses until the channel is closed.
	var (
		conns     int64
		totalSize int64
	)
	for resp := range results {
		conns++
		totalSize += resp.ContentLength
		resp.Body.Close()
	}

	elapsed := time.Since(start)
	fmt.Printf("Connections:\t%d\nConcurrent:\t%d\nTotal size:\t%d bytes\nElapsed:\t%s\n",
		conns, concurrent, totalSize, elapsed)
}
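If you would rather keep one goroutine per request, a buffered channel can act as a counting semaphore instead of a worker pool. The following is a minimal sketch of that pattern, separate from the program above; the limit of 200, the request count, and the URL are placeholders:

package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	sem := make(chan struct{}, 200) // capacity = maximum in-flight requests
	var wg sync.WaitGroup
	for i := 0; i < 10000; i++ {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot before starting the request
		go func() {
			defer wg.Done()
			defer func() { <-sem }() // release the slot when done
			resp, err := http.Get("http://localhost/")
			if err != nil {
				fmt.Println(err)
				return
			}
			resp.Body.Close()
		}()
	}
	wg.Wait()
}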
By adjusting the concurrent variable and observing the results, you can determine the optimal concurrency level for your system, maximizing its ability to handle concurrent HTTP requests.
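Whatever value you choose for concurrent has to stay below the process's open file descriptor limit, since each in-flight connection consumes one. On Unix-like systems you can read that limit from Go; this is a sketch assuming a platform where syscall.RLIMIT_NOFILE is defined (Linux, macOS):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var lim syscall.Rlimit
	// RLIMIT_NOFILE is the per-process limit on open file descriptors.
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("soft limit: %d, hard limit: %d\n", lim.Cur, lim.Max)
}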