When experimenting with Go's performance, you may run into limits while trying to issue a large number of HTTP requests concurrently. This article examines the challenges involved and presents a solution for maximizing concurrency.
Your initial approach involved launching a huge number of goroutines to send HTTP requests in parallel, expecting them to make use of all available CPUs. However, you run into errors because of file descriptor limits: each in-flight request holds an open socket, and the operating system caps the number of descriptors a process may have open at once.
To overcome these limits, consider the following approaches:

- Bound concurrency with a fixed-size worker pool instead of launching one goroutine per request.
- Feed the workers from a dispatcher goroutine through a buffered channel, and close the channel once all requests have been queued.
- Set GOMAXPROCS so the runtime can use every available CPU.
Here is a revised version of the code that incorporates these optimizations:
package main

import (
	"fmt"
	"io"
	"net/http"
	"runtime"
	"sync"
	"time"
)

var (
	reqs       int
	concurrent int
	work       chan *http.Request
	results    chan *http.Response
)

func init() {
	reqs = 1000000
	concurrent = 200
}

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())
	work = make(chan *http.Request, concurrent)
	results = make(chan *http.Response)
	start := time.Now()

	// Dispatcher: populate the work channel.
	go func() {
		for i := 0; i < reqs; i++ {
			req, err := http.NewRequest("GET", "http://localhost/", nil)
			if err != nil {
				fmt.Println(err)
				continue
			}
			work <- req
		}
		close(work) // signal workers that no more requests are incoming
	}()

	// Worker pool: the pool size itself bounds concurrency,
	// so no separate semaphore is needed.
	var workers sync.WaitGroup
	for i := 0; i < concurrent; i++ {
		workers.Add(1)
		go func() {
			defer workers.Done()
			for req := range work {
				resp, err := http.DefaultClient.Do(req)
				if err != nil {
					fmt.Println(err)
					continue
				}
				results <- resp
			}
		}()
	}

	// Close results once every worker has finished,
	// so the consumer loop below can terminate.
	go func() {
		workers.Wait()
		close(results)
	}()

	// Consume responses from the worker pool.
	var (
		conns     int64
		totalSize int64
	)
	for resp := range results {
		conns++
		totalSize += resp.ContentLength
		io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
		resp.Body.Close()
	}

	elapsed := time.Since(start)
	fmt.Printf("Connections:\t%d\nConcurrent:\t%d\nTotal size:\t%d bytes\nElapsed:\t%s\n",
		conns, concurrent, totalSize, elapsed)
}
By adjusting the concurrent variable and observing the results, you can determine your system's optimal concurrency level and maximize its capacity for concurrent HTTP requests.
The above is the detailed content of "How to maximize concurrent HTTP requests in Go?". For more information, see other related articles on the PHP Chinese website (PHP中文網).