Concurrent Go Language Foundation

1.1 Concurrent and Parallel

Concurrent: multiple tasks make progress within the same period of time, switching between them (e.g. chatting with several friends in WeChat)
Parallel: multiple tasks execute at exactly the same instant (e.g. an antivirus scan runs in Windows while you are writing code)
Concurrency in the Go language is achieved through goroutines. Goroutines are similar to threads; they are user-level threads, and we can create thousands of them to work concurrently as needed.
Goroutines are scheduled by the Go runtime, while threads are scheduled by the operating system.
The Go language also provides channels for communication between goroutines. Goroutines and channels are the foundation of the CSP (Communicating Sequential Processes) concurrency model that the Go language follows.

1.2 goroutine

When we do concurrent programming in Java or Python, we usually have to maintain a thread pool ourselves, wrap up tasks one by one, schedule threads to execute them, and manage context switches ourselves, which can be very costly.
Goroutines in the Go language are similar to threads, but they are scheduled and managed by the Go runtime, which takes care of distributing goroutines sensibly across the available CPUs. One reason Go is called a modern language is that scheduling and context switching are built in at the language level.
In Go programming you don't need to manage processes, threads, or coroutines yourself; you only need one tool: the goroutine.

1.2.1 Use goroutine

Using goroutine in the Go language is very simple. You can create a goroutine for a function by simply preceding the call with the "go" keyword.
A goroutine must correspond to a function, and multiple goroutines can be created to execute the same function.

1.2.2 Start a single goroutine

A program runs sequentially when no goroutine is used.


package main

import (
    "fmt"
)

func hello()  {
    fmt.Println("Hello Goroutine!")
}
func main() {
    hello()
    fmt.Println("main goroutine done!")
}

//Result:
Hello Goroutine!
main goroutine done!

Process finished with exit code 0

Use the go keyword

The following was tested on macOS:
package main

import (
    "fmt"
)

func hello()  {
    fmt.Println("Hello Goroutine!")
}
func main() {
    go hello()
    fmt.Println("main goroutine done!")
}

//Result 1:
main goroutine done!

Process finished with exit code 0

//Result 2:
main goroutine done!
Hello Goroutine!

Process finished with exit code 0

//Result 3:
Hello Goroutine!
main goroutine done!

Process finished with exit code 0
You may find that only "main goroutine done!" is printed. That is because the main function itself runs in a goroutine (the main goroutine), and when the main function returns, the whole program exits without waiting for any other goroutines. Whether "Hello Goroutine!" appears at all, and in what order, depends on scheduling.

1.2.3 Start multiple goroutines

Concurrency in the Go language is as simple as starting multiple goroutines.
sync.WaitGroup is used here to synchronize the goroutines.
package main

import (
    "fmt"
    "sync"
)
var wg sync.WaitGroup

func hello(i int) {
    defer wg.Done() // decrement the counter when this goroutine finishes
    fmt.Println("Hello Goroutine! i:", i)
}
func main() {
    for i := 0; i < 10; i++ {
        wg.Add(1) // increment the counter for each goroutine started
        go hello(i)
    }
    wg.Wait() // block until the counter drops back to zero
}

//Result:
Hello Goroutine! i: 9
Hello Goroutine! i: 7
Hello Goroutine! i: 2
Hello Goroutine! i: 0
Hello Goroutine! i: 3
Hello Goroutine! i: 5
Hello Goroutine! i: 1
Hello Goroutine! i: 6
Hello Goroutine! i: 4
Hello Goroutine! i: 8

Process finished with exit code 0
If you run the code above several times, you will find that the numbers print in a different order each time. That is because the 10 goroutines execute concurrently, and the order in which goroutines are scheduled is nondeterministic.

1.3 goroutine and threads

1.3.1 Growing Stack

An OS thread (operating system thread) generally has a fixed-size stack (usually 2 MB). A goroutine starts life with a very small stack (typically 2 KB), and that stack is not fixed: it can grow and shrink as needed, up to a limit of about 1 GB, although it rarely gets anywhere near that.

1.3.2 goroutine scheduling

GPM is part of the Go language runtime, a scheduling system implemented by Go itself. It is distinct from the operating system's scheduling of OS threads.

G can be understood as a goroutine; it holds the goroutine's information as well as its binding to a P.
P manages a queue of goroutines and stores the context in which the current goroutine runs (instruction pointer, stack address, address boundaries). P performs some scheduling over its own queue (for example pausing a goroutine that has used a lot of CPU time and running the next one). When its own queue is exhausted, it pulls goroutines from the global queue, and if the global queue is also empty, it steals work from another P's queue.
M (machine) is the Go runtime's abstraction over an operating system kernel thread. An M generally maps one-to-one to a kernel thread, and a goroutine is ultimately executed on an M.
P and M also generally correspond one to one. Their relationship is that a P manages a group of Gs that run on an M. When a G blocks on an M for a long time, the runtime creates a new M, and the P that owns the blocked G moves its other Gs onto the new M. When the old G finishes blocking, or is considered dead, the old M is recycled.

The number of Ps is set through runtime.GOMAXPROCS (maximum 256); since Go 1.5 it defaults to the number of logical CPU cores. Under heavy concurrency some Ps and Ms are added, but not too many, since switching too often does not pay off.

In terms of thread scheduling alone, Go's advantage over other languages is that OS threads are scheduled by the OS kernel, while goroutines are scheduled by the Go runtime's own scheduler, which uses a technique called m:n scheduling (multiplexing m goroutines onto n OS threads).
One of its main features is that goroutine scheduling happens entirely in user space and does not involve frequent switches between kernel mode and user mode. The same goes for memory allocation and release: the runtime maintains a large memory pool in user space and does not call the system's malloc directly (unless the pool needs to change), so the cost is much lower than scheduling OS threads.
At the same time, it makes full use of multi-core hardware by spreading goroutines across physical threads, which, together with the extreme lightness of goroutines themselves, guarantees the performance of Go scheduling.

1.3.3 GOMAXPROCS

The Go runtime scheduler uses the GOMAXPROCS parameter to determine how many OS threads may execute Go code simultaneously. The default value is the number of CPU cores on the machine.
For example, on an 8-core machine, the scheduler will schedule Go code onto 8 OS threads at once (GOMAXPROCS is the n in m:n scheduling).
The Go language uses the runtime.GOMAXPROCS() function to set the number of CPU logical cores the current program occupies when running concurrently.
We can achieve a parallel effect by assigning tasks to different CPU logical cores. For example:
With GOMAXPROCS=1, start two tasks in goroutines: one task runs to completion before the other starts.

package main

import (
    "fmt"
    "runtime"
    "time"
)
func a() {
    for i := 1; i < 10; i++ {
        fmt.Println("A:", i)
    }
}

func b() {
    for i := 1; i < 10; i++ {
        fmt.Println("B:", i)
    }
}

func main() {
    runtime.GOMAXPROCS(1)
    go a()
    go b()
    time.Sleep(time.Second)
}

//Result:
A: 1
A: 2
A: 3
A: 4
A: 5
A: 6
A: 7
A: 8
A: 9
B: 1
B: 2
B: 3
B: 4
B: 5
B: 6
B: 7
B: 8
B: 9

Process finished with exit code 0
With GOMAXPROCS=2, start two tasks in goroutines: the two tasks execute at the same time and their printed output interleaves. You need a multi-core machine to observe this; the test below succeeded on a Mac.
package main

import (
    "fmt"
    "runtime"
    "time"
)
func a() {
    for i := 1; i < 10; i++ {
        fmt.Println("A:", i)
    }
}

func b() {
    for i := 1; i < 10; i++ {
        fmt.Println("B:", i)
    }
}

func main() {
    runtime.GOMAXPROCS(2)
    go a()
    go b()
    time.Sleep(time.Second)
}

//Result:
A: 1
B: 1
B: 2
B: 3
B: 4
B: 5
B: 6
B: 7
B: 8
B: 9
A: 2
A: 3
A: 4
A: 5
A: 6
A: 7
A: 8
A: 9

Process finished with exit code 0
//The relationship between operating system threads and goroutines in the Go language:
1. One operating system thread corresponds to multiple user-space goroutines.
2. A Go program can use multiple operating system threads at the same time.
3. Goroutines and OS threads have a many-to-many relationship, i.e. m:n.

1.4 channel

Simply executing functions concurrently is pointless; functions need to exchange data for concurrent execution to be meaningful.

Although shared memory can be used for data exchange, it is prone to data races between goroutines. To guarantee correctness, the shared memory must be locked with mutexes, which can cause performance problems.

The concurrency model of the Go language is CSP (Communicating Sequential Processes), which advocates sharing memory by communicating rather than communicating by sharing memory.

If goroutines are the concurrent units of a Go program, channels are the connections between them. A channel is a communication mechanism that lets one goroutine send values of a specific type to another goroutine.

Channels in the Go language are a special type. Like a conveyor belt or a queue, a channel always follows the First In First Out rule, guaranteeing the order in which data is sent and received. Each channel is a conduit for one concrete type, and an element type must be specified when a channel is declared.

1.4.1 channel type

A channel is a type, a reference type. The format for declaring a channel type is as follows:
var variable chan element-type
package main

import (
    "fmt"
)

func main() {
    var ch1 chan int   // Declare a channel for passing integers
    var ch2 chan bool  // Declare a channel that passes Boolean
    var ch3 chan []int // Declare a channel for passing int slices

    fmt.Printf("v:%v type:%T\n",ch1,ch1)
    fmt.Printf("v:%v type:%T\n",ch2,ch2)
    fmt.Printf("v:%v type:%T\n",ch3,ch3)
}

//Result:
v:<nil> type:chan int
v:<nil> type:chan bool
v:<nil> type:chan []int

Process finished with exit code 0

1.4.2 Create channel

Channels are a reference type, and the zero value of a channel type is nil.
var ch chan int
fmt.Println(ch) // <nil>

Declared channels need to be initialized with the make function before they can be used.
The format for creating a channel is as follows:
make(chan element type, [buffer size])
The buffer size of the channel is optional.
package main

import (
    "fmt"
)

func main() {
    ch4 := make(chan int)
    ch5 := make(chan bool)
    ch6 := make(chan []int)

    fmt.Printf("v:%v type:%T\n",ch4,ch4)
    fmt.Printf("v:%v type:%T\n",ch5,ch5)
    fmt.Printf("v:%v type:%T\n",ch6,ch6)
}

//Result:
v:0xc000012060 type:chan int
v:0xc0000120c0 type:chan bool
v:0xc000012120 type:chan []int

Process finished with exit code 0

1.4.3 channel operation

Channels support three operations: send, receive, and close.
Send and receive both use the <- operator.
Define a channel: ch := make(chan int)

Send: send a value into the channel.
    ch <- 10 // send 10 into the channel
Receive: receive a value from the channel.
    a := <-ch // receive a value from ch and assign it to a
    <-ch      // receive a value from ch and discard the result
Close: close the channel.
    close(ch)
Be careful:
    1. A channel only needs to be closed when the receivers must be notified that no more data will be sent.
    2. Channels can be reclaimed by the garbage collector. Unlike files, which must be closed after use, leaving a channel unclosed is fine.
A closed channel has the following characteristics:
    1. Sending a value to a closed channel causes a panic.
    2. Receiving from a closed channel keeps succeeding until the channel is drained.
    3. Receiving from a closed channel that is already empty yields the zero value of the element type.
    4. Closing an already closed channel causes a panic.

1.4.4 Unbuffered Channels

An unbuffered channel is also called a blocking channel. A send on an unbuffered channel blocks until some goroutine receives the value; if nothing ever receives, the program deadlocks.
//Unbuffered channel: sending a value with no receiver causes a deadlock error.

package main

import "fmt"

func main() {
    ch := make(chan int)
    ch <- 10
    fmt.Printf("Send successfully!")
}

//Result:
fatal error: all goroutines are asleep - deadlock!

goroutine 1 [chan send]:
main.main()
        /Users/tongchao/Desktop/gopath/src/test/test.go:7 +0x54

Process finished with exit code 2

//Because ch := make(chan int) creates an unbuffered channel, a value can only be sent on it if someone is receiving.
//With no receiver, the send ch <- 10 blocks forever, which produces the deadlock.
Solution: use a goroutine to receive the value
package main

import (
    "fmt"
    "sync"
)
var wg sync.WaitGroup

func recv(ch chan int) {
    defer wg.Done()
    i := <-ch
    fmt.Println("Received value is:", i)
}
func main() {
    ch := make(chan int)
    wg.Add(1)
    go recv(ch)

    ch <- 10
    wg.Wait()

    fmt.Println("Send successfully!")
}

//Result:
Received value is:10
Send successfully!

Process finished with exit code 0

A send on an unbuffered channel blocks until another goroutine performs a receive on that channel; only then do both goroutines continue.
Conversely, if the receive happens first, the receiving goroutine blocks until another goroutine sends a value on the channel.
//Communicating over an unbuffered channel therefore synchronizes the sending and receiving goroutines, which is why an unbuffered channel is also called a synchronous channel.

1.4.5 Buffered Channels

Another way to solve the problem above is to use a buffered channel. We can set a channel's capacity when initializing it with the make function; as long as the capacity is greater than zero, it is a buffered channel. The capacity is the number of elements the channel can hold.
You can use the len() function to get the number of elements currently in a channel and the cap() function to get its capacity.
package main

import (
    "fmt"
)

func main() {
    ch := make(chan int,1) //Create a buffered channel with a capacity of 1

    ch <- 10

    fmt.Printf("Send successfully!\n")
    fmt.Println("len(ch):",len(ch))
    fmt.Println("cap(ch)",cap(ch))
}

//Result:
//Send successfully!
len(ch): 1
cap(ch) 1

Process finished with exit code 0

1.4.6 Looping over values from a channel with for range

When we have finished sending data into a channel, we can close it with the close function.
Once a channel is closed, sending a value to it causes a panic, and receives from it yield the element type's zero value once it is drained. So how can you tell whether a channel has been closed?
Method 1:
    i, ok := <-ch1 // after the channel is closed and drained, ok == false
Method 2:
    for range iterates over the channel and exits the loop when the channel is closed.
package main

import "fmt"

func main() {
    ch1 := make(chan int)
    ch2 := make(chan int)
    //Open goroutine to send 0-100 numbers to ch1
    go func() {
        for i:=0;i<101;i++{
            ch1 <- i
        }
        close(ch1)
    }()

    //Open goroutine to receive a value from ch1 and send its square to ch2
    go func() {
        for{
            i,ok := <- ch1 //Value ok=false after channel is closed
            if !ok{
                break
            }
            ch2 <- i*i
        }
        close(ch2)
    }()

    //Receive value printing from ch2 in the main goroutine
    for i:= range ch2{//Exit for range loop after channel closes
        fmt.Println(i)

    }
}

//Result:
0
1
4
9
16
25
...
9604
9801
10000

Process finished with exit code 0

1.4.7 One-way Channel

Sometimes we pass channels as parameters between several task functions, and often we want to restrict how a channel may be used in a given function, for example allowing it only to send or only to receive.
chan<- int is a send-only channel (only int values can be written into it); sends are allowed but receives are not.
<-chan int is a receive-only channel (int values can only be read from it); receives are allowed but sends are not.
In function parameters and in any assignment, a bidirectional channel can be converted to a one-way channel, but the reverse is not possible.
package main

import "fmt"

func counter(out chan<- int) {
    for i := 0; i < 101; i++ {
        out <- i
    }
    close(out)
}
func squarer(out chan<- int, in <-chan int) {
    for i := range in {
        out <- i * i
    }
    close(out)
}
func printer(in <-chan int) {
    for i := range in {
        fmt.Println(i)
    }
}
func main() {
    ch1 := make(chan int)
    ch2 := make(chan int)
    go counter(ch1)
    go squarer(ch2, ch1)
    printer(ch2)
}

//Result:
0
1
4
9
16
25
...
9604
9801
10000

Process finished with exit code 0

1.4.8 Channel Summary

The behavior of channel operations depends on the channel's state:
    nil channel: send and receive block forever; close panics.
    open, buffer not full: send succeeds; otherwise send blocks.
    open, not empty: receive succeeds; otherwise receive blocks.
    closed, not empty: receive returns the remaining buffered values.
    closed, empty: receive returns the element type's zero value immediately.
    closed: send panics; closing again panics.

1.5 worker pool (goroutine pool)

In real work we usually use the worker pool pattern, starting a specified number of goroutines, which lets us control how many goroutines run and prevents goroutine leaks and explosions.
A simple worker pool example follows:
package main

import (
    "fmt"
    "time"
)

func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        fmt.Printf("worker:%d start job:%d\n", id, j)
        time.Sleep(time.Second)
        fmt.Printf("worker:%d end job:%d\n", id, j)
        results <- j * 2
    }
}
func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)

    // Start 3 worker goroutines
    for w := 1; w <= 3; w++ {
        go worker(w, jobs, results)
    }
    // Send 5 jobs
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    close(jobs)
    // Collect the results
    for a := 1; a <= 5; a++ {
        <-results
    }
}

//Result:
worker:1 start job:2
worker:3 start job:1
worker:2 start job:3
worker:1 end job:2
worker:3 end job:1
worker:3 start job:4
worker:2 end job:3
worker:1 start job:5
worker:1 end job:5
worker:3 end job:4

Process finished with exit code 0

1.6 select multiplexing

In some scenarios we need to receive data from several channels at once. A receive on a channel blocks when there is no data to receive.
//One attempt is to read from each channel in turn inside a loop:
package main

import (
    "fmt"
)

var ch1 chan int
var ch2 chan int

func main() {
    ch1 = make(chan int, 100)
    ch2 = make(chan int, 100)

    go func() {
        ch1 <- 10
        close(ch1)
    }()
    go func() {
        ch2 <- 11
        close(ch2)
    }()

    for {
        //Receive a value from ch1
        c1, ok := <-ch1
        if !ok {
            fmt.Println("ch1 Finished data")
        }
        if c1 != 0 {
            fmt.Println(c1)
        }

        //Receive a value from ch2
        c2, ok := <-ch2
        if !ok {
            fmt.Println("ch2 Finished data")
            break
        }
        fmt.Println(c2)
    }
    fmt.Println("Operation complete!")
}

//Result:
10
11
ch1 Finished data
ch2 Finished data
//Operation complete!

Process finished with exit code 0
//You can also use goroutines to receive data from multiple channels simultaneously:
package main

import (
    "fmt"
    "sync"
)
var wg sync.WaitGroup
var ch1 chan int
var ch2 chan int

func getFromCh1()  {
    defer wg.Done()
    c1 := <- ch1
    fmt.Println(c1)
}
func getFromCh2()  {
    defer wg.Done()
    c2 := <- ch2
    fmt.Println(c2)
}
func main() {
    ch1 = make(chan int, 100)
    ch2 = make(chan int, 100)
    wg.Add(2)
    go getFromCh1()
    go getFromCh2()
    go func() {
        ch1 <- 10
    }()
    go func() {
        ch2 <- 11
    }()

    wg.Wait()
    fmt.Println("Operation complete!")
}

//Result:
11
10
//Operation complete!

Process finished with exit code 0
Use the select keyword to fulfill the need for multiple channels to receive values.
The use of select is similar to a switch statement in that it has a series of case branches and a default branch.Each case corresponds to a communication (receive or send) process for a channel.Select waits until a case's communication operation is complete, and the case's corresponding statement is executed.
The format is as follows:
select{
    case <-ch1:
        ...
    case data := <-ch2:
        ...
    case ch3<-data:
        ...
    default:
        Default action
}

The select statement improves the readability of the code.
1. It can handle one or more channel send/receive operations.
2. If multiple cases are ready at the same time, select picks one of them at random.
3. A select with no cases blocks forever and can be used to block the main function.
package main

import "fmt"

func main() {
    ch1 := make(chan int, 1)
    ch2 := make(chan int, 1)
    for i := 0; i < 10; i++ {
        select {
        case x1 := <-ch1:
            fmt.Printf("Loop %d, ch1 received %d\n", i, x1)
        case ch1 <- i:
            fmt.Printf("Loop %d, ch1 sent %d\n", i, i)
        case x2 := <-ch2:
            fmt.Printf("Loop %d, ch2 received %d\n", i, x2)
        case ch2 <- i:
            fmt.Printf("Loop %d, ch2 sent %d\n", i, i)
        }
    }
}

//Result (one possible run):
Loop 0, ch1 sent 0
Loop 1, ch1 received 0
Loop 2, ch1 sent 2
Loop 3, ch1 received 2
Loop 4, ch1 sent 4
Loop 5, ch1 received 4
Loop 6, ch1 sent 6
Loop 7, ch2 sent 7
Loop 8, ch2 received 7
Loop 9, ch1 received 6

Process finished with exit code 0

//The run above shows that when multiple cases are ready at the same time, select picks one of them at random.

1.7 Concurrent Security and Locks

Sometimes multiple goroutines in Go code operate on the same resource (a critical section) at the same time, which can cause a data race.
package main

import (
    "fmt"
    "sync"
)

var wg sync.WaitGroup
var x int64
func add() {
    for i := 0; i < 5000; i++ {
        x = x + 1
    }
    wg.Done()
}
func main() {
    wg.Add(2)
    go add()
    go add()
    wg.Wait()
    fmt.Println(x)

}

//Result 1:
7281

Process finished with exit code 0

//Result 2:
10000

Process finished with exit code 0

//In the code above, we start two goroutines that each increment the variable x. The two goroutines race when reading and modifying x, so the result differs from the expected 10000.

1.8 Mutex

Mutex is a common method of controlling access to shared resources, which ensures that only one goroutine can access the shared resources at the same time.
The Mutex type of the sync package is used in the Go language to implement mutex.Use mutex to fix the problem with the code above:
package main

import (
    "fmt"
    "sync"
)

var wg sync.WaitGroup
var lock sync.Mutex
var x int64
func add() {
    for i := 0; i < 5000; i++ {
        lock.Lock() // acquire the lock
        x = x + 1
        lock.Unlock() // release the lock
    }
    wg.Done()
}
}
func main() {
    wg.Add(2)
    go add()
    go add()
    wg.Wait()
    fmt.Println(x)

}

//Result:
10000

Process finished with exit code 0

//Using a mutex ensures that only one goroutine is in the critical section at a time; all other goroutines wait for the lock.
//When multiple goroutines are waiting for the same lock, the wake-up order is random.

1.9 Read-Write Mutex

A mutex is completely exclusive, but in many real scenarios reads greatly outnumber writes, and there is no need for reads to exclude each other when the resource is not being modified.
In such scenarios a read-write lock is the better choice.

A read-write lock has two modes: read lock and write lock.
When one goroutine holds a read lock, other goroutines may still acquire read locks, while attempts to acquire a write lock wait.
When one goroutine holds a write lock, all other attempts to acquire either a read lock or a write lock wait.
Read-write locks suit read-heavy scenarios; if reads and writes occur in similar numbers, the advantage of a read-write lock disappears.
package main

import (
    "fmt"
    "sync"
    "time"
)

var (
    x      int64
    wg     sync.WaitGroup
    lock   sync.Mutex
    rwlock sync.RWMutex
)

func write() {
    lock.Lock() // mutex
    //rwlock.Lock() // write lock
    x = x + 1
    time.Sleep(10 * time.Millisecond) // assume a write takes 10 milliseconds
    //rwlock.Unlock() // release the write lock
    lock.Unlock() // release the mutex
    wg.Done()
}

func read() {
    lock.Lock() // mutex
    //rwlock.RLock() // read lock
    time.Sleep(time.Millisecond) // assume a read takes 1 millisecond
    //rwlock.RUnlock() // release the read lock
    lock.Unlock() // release the mutex
    wg.Done()
}

func main() {
    start := time.Now()
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go write()
    }

    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go read()
    }

    wg.Wait()
    end := time.Now()
    fmt.Println(end.Sub(start))
}

//Mutex lock time:
1.404974744s

Process finished with exit code 0

//Read and write mutex time:
109.371376ms

Process finished with exit code 0

1.10 sync.WaitGroup

Hard-coding time.Sleep into code is inappropriate; in the Go language, sync.WaitGroup can be used to synchronize concurrent tasks.
sync.WaitGroup has the following methods:
    (wg *WaitGroup) Add(delta int)  adds delta to the counter
    (wg *WaitGroup) Done()          decrements the counter by one
    (wg *WaitGroup) Wait()          blocks until the counter is zero

sync.WaitGroup maintains an internal counter whose value can be increased or decreased.
For example:
When we start N concurrent tasks, we call Add(N) to increase the counter by N.
As each task completes, it calls Done(), which decrements the counter by one.
Call Wait() to wait for all the concurrent tasks to finish executing.
A counter value of 0 means all concurrent tasks are complete.

sync.WaitGroup is a struct; pass a pointer to it when passing it around.
var wg sync.WaitGroup

func hello() {
    defer wg.Done()
    fmt.Println("Hello Goroutine!")
}
func main() {
    wg.Add(1)
    go hello() // Start another goroutine to execute the hello function
    fmt.Println("main goroutine done!")
    wg.Wait()
}

1.11 sync.Once

In many programming scenarios we need to guarantee that certain operations run only once even under high concurrency, such as loading a configuration file exactly once or closing a channel exactly once.
The sync package in the Go language provides a solution for run-exactly-once scenarios: sync.Once.
sync.Once has only one method, Do:
func (o *Once) Do(f func()) {}

If the function f to be executed needs parameters, use it together with a closure.

1.11.1 Load Profile Example

Delaying an expensive initialization until it is actually needed is good practice.
Initializing a variable up front (for example in an init function) increases the program's startup time, and if the variable is never used during execution, that initialization was wasted.
Take the following example:
var icons map[string]image.Image

func loadIcons() {
    icons = map[string]image.Image{
        "left":  loadIcon("left.png"),
        "up":    loadIcon("up.png"),
        "right": loadIcon("right.png"),
        "down":  loadIcon("down.png"),
    }
}

// Icon is not concurrency-safe when called from multiple goroutines
func Icon(name string) image.Image {
    if icons == nil {
        loadIcons()
    }
    return icons[name]
}

//Calling the Icon function from multiple goroutines is not concurrency-safe. Modern compilers and CPUs are free to reorder memory accesses as long as each goroutine individually observes serial consistency.
The loadIcons function may be reordered into the following:
func loadIcons() {
    icons = make(map[string]image.Image)
    icons["left"] = loadIcon("left.png")
    icons["up"] = loadIcon("up.png")
    icons["right"] = loadIcon("right.png")
    icons["down"] = loadIcon("down.png")
}
//In this case, even if it is determined that icons are not nil, it does not mean that the variable initialization is complete.
//Given this situation, one way to think about it is to add a mutex; the other is to use sync.Once.
var icons map[string]image.Image

var loadIconsOnce sync.Once

func loadIcons() {
    icons = map[string]image.Image{
        "left":  loadIcon("left.png"),
        "up":    loadIcon("up.png"),
        "right": loadIcon("right.png"),
        "down":  loadIcon("down.png"),
    }
}

// Icon is concurrently secure
func Icon(name string) image.Image {
    loadIconsOnce.Do(loadIcons)
    return icons[name]
}

1.11.2 Singleton mode for concurrent security

package singleton

import (
    "sync"
)

type singleton struct {}

var instance *singleton
var once sync.Once

func GetInstance() *singleton {
    once.Do(func() {
        instance = &singleton{}
    })
    return instance
}
sync.Once actually contains a mutex and a boolean value internally. The mutex guarantees the safety of the boolean and the data, while the boolean records whether initialization has completed.
This design ensures that the initialization operation is concurrently secure and that the initialization operation will not be performed multiple times.

1.12 sync.Map

The built-in map in the Go language is not concurrency-safe.

package main

import (
    "fmt"
    "strconv"
    "sync"
)

var m = make(map[string]int)

func get(key string) int {
    return m[key]
}
func set(key string, value int) {
    m[key] = value
}
func main() {
    wg := sync.WaitGroup{}
    for i := 0; i < 20; i++ {
        wg.Add(1)
        go func(n int) {
            key := strconv.Itoa(n) // use the parameter n, not the loop variable
            set(key, n)
            fmt.Printf("k=:%v,v:=%v\n", key, get(key))
            wg.Done()
        }(i)
    }
    wg.Wait()
}

//Result:
fatal error: concurrent map writes

goroutine 6 [running]:

        /usr/local/go/src/runtime/panic.go:617 +0x72 fp=0xc0000326b8 sp=0xc000032688 pc=0x1028282
runtime.mapassign_faststr(0x10aca40, 0xc000060180, 0x10cd3a2, 0x1, 0x0)
        /usr/local/go/src/runtime/map_faststr.go:211 +0x42a fp=0xc000032720 sp=0xc0000326b8 pc=0x101031a
main.set(...)
        /Users/tongchao/Desktop/gopath/src/test/test.go:15
main.main.func1(0xc000014080, 0xc000014070, 0x2)
        /Users/tongchao/Desktop/gopath/src/test/test.go:23 +0x8e fp=0xc0000327c8 sp=0xc000032720 pc=0x1094fee
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc0000327d0 sp=0xc0000327c8 pc=0x1051451
created by main.main
        /Users/tongchao/Desktop/gopath/src/test/test.go:21 +0xa2

goroutine 1 [runnable]:
sync.(*WaitGroup).Add(0xc000014070, 0x1)
        /usr/local/go/src/sync/waitgroup.go:53 +0x13c
main.main()
        /Users/tongchao/Desktop/gopath/src/test/test.go:20 +0x6e

goroutine 4 [runnable]:
main.get(...)
        /Users/tongchao/Desktop/gopath/src/test/test.go:12
main.main.func1(0xc000014080, 0xc000014070, 0x0)
        /Users/tongchao/Desktop/gopath/src/test/test.go:24 +0xcc
created by main.main
        /Users/tongchao/Desktop/gopath/src/test/test.go:21 +0xa2

... (similar traces for goroutines 5 and 7 through 16 omitted)

Process finished with exit code 2
The Go language's sync package provides sync.Map, an out-of-the-box, concurrency-safe map.
Out-of-the-box means it can be used directly, without being initialized with the make function like the built-in map.
sync.Map has built-in operations such as Store, Load, LoadOrStore, Delete, and Range.
package main

import (
    "fmt"
    "strconv"
    "sync"
)

var m = sync.Map{}

func main() {
    wg := sync.WaitGroup{}
    for i := 0; i < 20; i++ {
        wg.Add(1)
        go func(n int) { // pass i in as an argument so each goroutine gets its own copy
            key := strconv.Itoa(n)
            m.Store(key, n)
            value, _ := m.Load(key)
            fmt.Printf("k=:%v,v:=%v\n", key, value)
            wg.Done()
        }(i)
    }
    wg.Wait()
}

//Result: keys 0 through 19 are each printed with their matching value, in nondeterministic order.

Process finished with exit code 0

1.13 Atomic Operations

Locking in code can be time-consuming and costly, because acquiring a lock may involve context switches into kernel mode.
For basic data types, we can instead use atomic operations to ensure concurrency safety; atomic operations are provided by the Go language and complete in user mode, so they perform better than locking.
Atomic operations in the Go language are provided by the standard library package sync/atomic.
The atomic package provides low-level atomic memory operations and is useful for implementing synchronization algorithms. These functions must be used with care; except in special low-level applications, it is better to synchronize using channels or the functions/types of the sync package.


The following example compares the performance of the mutex and atomic-operation approaches.

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
    "time"
)

type Counter interface {
    Inc()
    Load() int64
}

// Normal Edition
type CommonCounter struct {
    counter int64
}

// Note: the value receiver means Inc increments a copy of the struct,
// so the counter never actually changes (hence the 0 in the results);
// this version is neither correct nor concurrency-safe.
func (c CommonCounter) Inc() {
    c.counter++
}

func (c CommonCounter) Load() int64 {
    return c.counter
}

// Mutex Lock Edition
type MutexCounter struct {
    counter int64
    lock    sync.Mutex
}

func (m *MutexCounter) Inc() {
    m.lock.Lock()
    defer m.lock.Unlock()
    m.counter++
}

func (m *MutexCounter) Load() int64 {
    m.lock.Lock()
    defer m.lock.Unlock()
    return m.counter
}

// Atomic Operations Edition
type AtomicCounter struct {
    counter int64
}

func (a *AtomicCounter) Inc() {
    atomic.AddInt64(&a.counter, 1)
}

func (a *AtomicCounter) Load() int64 {
    return atomic.LoadInt64(&a.counter)
}

func test(c Counter) {
    var wg sync.WaitGroup
    start := time.Now()
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            c.Inc()
            wg.Done()
        }()
    }
    wg.Wait()
    end := time.Now()
    fmt.Println(c.Load(), end.Sub(start))
}

func main() {
    c1 := CommonCounter{} // Non-concurrent security
    test(c1)
    c2 := MutexCounter{} // Use mutex for concurrent security
    test(&c2)
    c3 := AtomicCounter{} // Concurrency is secure and more efficient than mutex
    test(&c3)
}

//Result:
0 1.099595ms
1000 907.118µs
1000 456.326µs

Process finished with exit code 0


Added by Thethug on Sat, 08 Feb 2020 18:34:44 +0200