
Go programs slow down when the number of goroutines increases

WBOY
2024-02-09 22:10:10


A Go program can slow down as the number of goroutines grows, because goroutine scheduling and switching introduce extra overhead that reduces performance. Goroutines are excellent for concurrency, but too many of them lead to contention for threads and other resources, hurting execution efficiency. To avoid this, the number of goroutines should be managed and kept within reason so the program runs efficiently. In this article, PHP editor Youzi introduces some methods and techniques for optimizing goroutine performance to help you improve the execution efficiency of Go programs.

Question content

I'm working on a small project for my parallelism course, and I've tried buffered channels, unbuffered channels, channels that carry slices rather than pointers to slices, and so on. I've also tried to optimize it as much as possible (not in its current state), but I still get the same result: increasing the number of goroutines (even by 1) slows down the entire program. Can someone tell me what I'm doing wrong? Is it even possible to improve parallelism in this case?

This is part of the code:

func main() {

    rand.Seed(time.Now().UnixMicro())

    numAgents := 2

    fmt.Println("Please pick a number of goroutines: ")
    fmt.Scanf("%d", &numAgents)

    numFiles := 4
    fmt.Println("How many files do you want?")
    fmt.Scanf("%d", &numFiles)
    start := time.Now()

    numAssist := numFiles
    channel := make(chan []File, numAgents)
    files := make([]File, 0)

    for i := 0; i < numAgents; i++ {
        if i == numAgents-1 {
            // The last goroutine takes whatever remains after integer division.
            go generateFiles(numAssist, channel)
        } else {
            go generateFiles(numFiles/numAgents, channel)
            numAssist -= numFiles / numAgents
        }
    }

    for i := 0; i < numAgents; i++ {
        files = append(files, <-channel...)
    }

    elapsed := time.Since(start)
    fmt.Printf("function took %s\n", elapsed)
}

func generateFiles(numFiles int, channel chan []File) {
    magicNumbersMap := getMap()
    files := make([]File, 0)

    for i := 0; i < numFiles; i++ {
        content := randElementFromMap(&magicNumbersMap)

        length := rand.Intn(400) + 100
        hexSlice := getHex()

        for j := 0; j < length; j++ {
            content = content + hexSlice[rand.Intn(len(hexSlice))]
        }

        hash := getSHA1Hash([]byte(content))

        file := File{
            content: content,
            hash:    hash,
        }

        files = append(files, file)
    }

    channel <- files
}

I expected the program to run faster as goroutines are added, up to a certain number of goroutines, beyond which adding more would give the same execution time or slightly worse.

EDIT: All functions used:

import (
    "crypto/sha1"
    "encoding/base64"
    "fmt"
    "math/rand"
    "time"
)

type File struct {
    content string
    hash    string
}

func getMap() map[string]string {
    return map[string]string{
        "D4C3B2A1": "Libcap file format",
        "EDABEEDB": "RedHat Package Manager (RPM) package",
        "4C5A4950": "lzip compressed file",
    }
}

func getHex() []string {
    return []string{
        "0", "1", "2", "3", "4", "5",
        "6", "7", "8", "9", "A", "B",
        "C", "D", "E", "F",
    }
}

func randElementFromMap(m *map[string]string) string {
    x := rand.Intn(len(*m))
    for k := range *m {
        if x == 0 {
            return k
        }
        x--
    }
    return "Error"
}

func getSHA1Hash(content []byte) string {
    h := sha1.New()
    h.Write(content)
    return base64.URLEncoding.EncodeToString(h.Sum(nil))
}

Solution

In simple terms: the file-generation code is not complex enough to justify parallel execution. The context switching and the cost of moving data through channels consume all the benefits of parallel processing.

If you add something like time.Sleep(time.Millisecond * 10) inside the loop of the generateFiles function, so that it appears to be doing something more complex, you'll see what you expect: more goroutines finish faster. But again, the extra overhead of parallelism pays off only up to a certain point.

Also note that the execution time of the last part of the program:

for i := 0; i < numAgents; i++ {
    files = append(files, <-channel...)
}

depends directly on the number of goroutines. Since all the goroutines finish at roughly the same time, this loop rarely runs in parallel with your workers, and the time it takes is simply added to the total.

Next, because you append to the files slice repeatedly, it has to grow several times, copying its data to a new location each time. You can avoid this by creating the slice with enough capacity for all the result elements up front (luckily, you know exactly how many are needed).


Statement:
This article is reproduced from stackoverflow.com. If there is any infringement, please contact admin@php.cn to request deletion.