
Go's Concurrency Decoded: Goroutine Scheduling

Barbara Streisand
2025-01-14 22:08:45


I. Goroutines: A Deep Dive into Go's Concurrency Model

Goroutines are a cornerstone of Go's design, providing a powerful mechanism for concurrent programming. As lightweight coroutines managed by the Go runtime, they make concurrent task execution simple and cheap. Launching a goroutine is straightforward: prefix a function call with the go keyword and it runs asynchronously. The main program continues without waiting for the goroutine to complete.

<code class="language-go">go func() { // Launch a goroutine using the 'go' keyword
    // ... code to be executed concurrently ...
}()</code>
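
For a complete, runnable illustration of this behavior (a minimal sketch; the messages and the 100 ms pause are arbitrary choices), the program below launches a goroutine and then pauses briefly so the goroutine's output has a chance to appear before main returns:

<code class="language-go">package main

import (
    "fmt"
    "time"
)

func main() {
    go func() {
        fmt.Println("hello from the goroutine") // Runs concurrently with main
    }()

    fmt.Println("hello from main") // Printed immediately; main does not wait

    // Without this pause, main would usually return before the goroutine is
    // scheduled, and its message would never be printed.
    time.Sleep(100 * time.Millisecond)
}</code>

In real programs, replace the sleep with sync.WaitGroup or a channel; both are shown in the synchronization examples later in this article.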

II. Understanding Goroutine's Internal Mechanics

Conceptual Foundations

Concurrency vs. Parallelism

  • Concurrency: The ability to manage multiple tasks seemingly simultaneously on a single CPU. The CPU rapidly switches between tasks, creating the illusion of parallel execution: microscopically the work is sequential, but macroscopically the tasks appear to run at the same time.

  • Parallelism: True simultaneous execution of multiple tasks across multiple CPUs, so the tasks no longer compete for a single CPU's time (a timing sketch contrasting the two follows this list).
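
To make the distinction concrete, here is a rough timing sketch (not from the original text; the burn workload and its iteration count are arbitrary) that runs the same CPU-bound goroutines first on a single P and then on one P per CPU:

<code class="language-go">package main

import (
    "fmt"
    "runtime"
    "sync"
    "time"
)

// burn is an arbitrary CPU-bound workload.
func burn() {
    x := 0
    for i := 0; i < 50_000_000; i++ {
        x += i
    }
    _ = x
}

// timeRun executes n copies of burn concurrently under the given GOMAXPROCS
// value and returns the elapsed wall-clock time.
func timeRun(procs, n int) time.Duration {
    runtime.GOMAXPROCS(procs)
    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            burn()
        }()
    }
    wg.Wait()
    return time.Since(start)
}

func main() {
    n := runtime.NumCPU()
    fmt.Println("concurrent, 1 P:", timeRun(1, n))    // tasks interleave on one CPU
    fmt.Println("parallel,", n, "Ps:", timeRun(n, n)) // tasks run on separate CPUs
}</code>

On a multi-core machine the second run finishes several times faster, because the work is truly parallel rather than merely interleaved.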

Processes and Threads

  • Process: A self-contained execution environment with its own resources (memory, files, etc.). Switching between processes is resource-intensive, requiring kernel-level intervention.

  • Thread: A lightweight unit of execution within a process, sharing the process's resources. Thread switching incurs less overhead than process switching.

Coroutines

Coroutines maintain their own register context and stack. Switching between coroutines involves saving and restoring this state, allowing them to resume execution from where they left off. Unlike processes and threads, coroutine management is handled within the user program, not the operating system. Goroutines are a specific type of coroutine.
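
As a small illustration of this user-space switching (a sketch; the goroutine names and loop bounds are arbitrary), the program below pins execution to a single P and calls runtime.Gosched() to yield explicitly, so two goroutines take turns on one OS thread:

<code class="language-go">package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    runtime.GOMAXPROCS(1) // One P: both goroutines share a single OS thread
    var wg sync.WaitGroup

    say := func(name string) {
        defer wg.Done()
        for i := 0; i < 3; i++ {
            fmt.Println(name, i)
            runtime.Gosched() // Yield: save this goroutine's state and let another run
        }
    }

    wg.Add(2)
    go say("A")
    go say("B")
    wg.Wait() // Output typically interleaves A and B on the single thread
}</code>

In practice you rarely call Gosched() yourself: the runtime switches goroutines automatically at blocking points such as channel operations and system calls, and it preempts long-running goroutines on its own.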

The GPM Scheduling Model

Go's efficient concurrency relies on the GPM scheduling model, which involves four key components: M, P, G, and Sched.

  • M (Machine): A kernel-level thread. Goroutines run on Ms.

  • G (Goroutine): A single goroutine. Each G has its own stack, instruction pointer, and other scheduling-related information (e.g., channels it's waiting on).

  • P (Processor): A logical processor that schedules goroutines onto an M. It maintains a local run queue of runnable goroutines (a small introspection sketch follows this list).

  • Sched (Scheduler): The central scheduler, managing M and G queues and ensuring efficient resource allocation.
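
The runtime exposes a few of these numbers directly. In the sketch below (using only the standard runtime package), GOMAXPROCS(0) queries the number of Ps without changing it and NumGoroutine() counts the live Gs; the number of Ms is managed internally and is not exposed:

<code class="language-go">package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    fmt.Println("CPUs:", runtime.NumCPU())       // Logical CPUs on this machine
    fmt.Println("Ps:  ", runtime.GOMAXPROCS(0))  // Passing 0 queries without modifying
    fmt.Println("Gs:  ", runtime.NumGoroutine()) // Just main at this point

    for i := 0; i < 5; i++ {
        go time.Sleep(time.Second) // Park a few extra goroutines
    }
    time.Sleep(10 * time.Millisecond) // Give them time to start
    fmt.Println("Gs after launching 5:", runtime.NumGoroutine())
}</code>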

Scheduling in Action


Picture two OS threads (M), each attached to a processor (P) that is currently running one goroutine (G).

  • runtime.GOMAXPROCS controls the number of Ps, and therefore how many goroutines can execute in parallel.

  • Goroutines that are runnable but not yet executing wait in the run queue managed by their P.

  • Launching a goroutine adds it to P's run queue.


If M0 blocks (for example, in a system call), its P detaches and moves to another thread, M1, which may be taken from the thread cache or newly created.


If a P drains its run queue before the others, it steals goroutines from other Ps so that no processor sits idle.

III. Working with Goroutines

Basic Usage

Set the number of logical CPUs that may execute goroutines simultaneously (since Go 1.5 this defaults to all available CPUs, so setting it explicitly is rarely necessary):

<code class="language-go">go func() { // Launch a goroutine using the 'go' keyword
    // ... code to be executed concurrently ...
}()</code>

<code class="language-go">num := runtime.NumCPU()  // Get the number of logical CPUs
runtime.GOMAXPROCS(num)  // Set the maximum number of CPUs that can execute goroutines simultaneously</code>

Practical Examples

Example 1: Simple Goroutine Calculation

<code class="language-go">package main

import (
    "fmt"
    "runtime"
)

func cal(a, b int) {
    c := a + b
    fmt.Printf("%d + %d = %d\n", a, b, c)
}

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())
    for i := 0; i < 10; i++ {
        go cal(i, i+1)
    }
    // Note: main may return before the goroutines complete, so some or all of
    // the output may never appear. See the synchronization examples below.
}</code>

Goroutine Error Handling

An unhandled panic inside a goroutine terminates the entire program, not just that goroutine. Use recover() inside a deferred function to contain it:

<code class="language-go">package main

import (
    "fmt"
)

func addele(a []int, i int) {
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("error in addele:", r) // The out-of-range panic is caught here
        }
    }()
    a[i] = i // Panics with an index-out-of-range error when i == len(a)
    fmt.Println(a)
}

func main() {
    a := make([]int, 4)
    for i := 0; i < 5; i++ { // i == 4 is out of range for a slice of length 4
        go addele(a, i)
    }
    // ... (add synchronization here so main waits for the goroutines; see below) ...
}</code>

Synchronizing Goroutines

Because goroutines run asynchronously, the main function may return before they finish. Use sync.WaitGroup or channels to wait for them:

Example 1: Using sync.WaitGroup

<code class="language-go">package main

import (
    "fmt"
    "sync"
)

func cal(a, b int, wg *sync.WaitGroup) {
    defer wg.Done() // Mark this goroutine as finished
    c := a + b
    fmt.Printf("%d + %d = %d\n", a, b, c)
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1) // Register each goroutine before launching it
        go cal(i, i+1, &wg)
    }
    wg.Wait() // Block until every registered goroutine has called Done
}</code>

Example 2: Using Channels for Synchronization

<code class="language-go">package main

import (
    "fmt"
)

func cal(a, b int, ch chan bool) {
    c := a + b
    fmt.Printf("%d + %d = %d\n", a, b, c)
    ch <- true // Signal completion
}

func main() {
    ch := make(chan bool, 10) // Buffered so the senders do not block
    for i := 0; i < 10; i++ {
        go cal(i, i+1, ch)
    }
    for i := 0; i < 10; i++ {
        <-ch // Receive one completion signal per goroutine
    }
}</code>

Inter-Goroutine Communication

Channels are the idiomatic way for goroutines to communicate and share data. Global variables can also be used, but they require explicit locking, so channels are generally preferred.

Example: Producer-Consumer Pattern
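
A minimal sketch of the pattern (the jobs/results channel names, the three consumers, and the ten jobs are illustrative choices): one producer sends work on a channel and closes it when finished, while several consumer goroutines drain it and report results on a second channel:

<code class="language-go">package main

import (
    "fmt"
    "sync"
)

// produce sends n jobs on the jobs channel, then closes it.
func produce(jobs chan<- int, n int) {
    for i := 1; i <= n; i++ {
        jobs <- i
    }
    close(jobs) // Tells consumers that no more jobs are coming
}

// consume reads jobs until the channel is closed, sending one result per job.
func consume(id int, jobs <-chan int, results chan<- string, wg *sync.WaitGroup) {
    defer wg.Done()
    for j := range jobs {
        results <- fmt.Sprintf("consumer %d: %d * %d = %d", id, j, j, j*j)
    }
}

func main() {
    jobs := make(chan int, 5) // Buffered so the producer rarely blocks
    results := make(chan string, 5)

    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ { // Three consumers
        wg.Add(1)
        go consume(i, jobs, results, &wg)
    }

    go produce(jobs, 10) // One producer

    go func() {
        wg.Wait()      // Wait for every consumer to finish...
        close(results) // ...then close results so the loop below can end
    }()

    for r := range results {
        fmt.Println(r)
    }
}</code>

Closing jobs is what ends the consumers' range loops, and closing results only after wg.Wait() guarantees nothing ever sends on a closed channel.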

Leapcell: A Serverless Platform for Go

Leapcell is a recommended platform for deploying Go services.


Key Features:

  1. Multi-Language Support: JavaScript, Python, Go, Rust.
  2. Free Unlimited Projects: Pay-as-you-go pricing.
  3. Cost-Effective: No idle charges.
  4. Developer-Friendly: Intuitive UI, automated CI/CD, real-time metrics.
  5. Scalable and High-Performance: Auto-scaling, zero operational overhead.


Learn more in the documentation!

Leapcell Twitter: https://www.php.cn/link/7884effb9452a6d7a7a79499ef854afd

