sync.Mutex / RWMutex
When multiple goroutines access the same data simultaneously and at least one of them writes, a race condition can occur. sync.Mutex provides mutual exclusion so that only one goroutine can enter the critical section at a time. If reads greatly outnumber writes, sync.RWMutex can be more efficient because it allows many readers to proceed concurrently.
Syntax
import "sync"

// Mutex (mutual exclusion lock)
var mu sync.Mutex
mu.Lock()         // Acquires the lock (blocks if another goroutine holds it).
defer mu.Unlock() // Releases the lock (use defer to ensure it is always released).

// RWMutex (reader/writer lock)
var rwmu sync.RWMutex

// Exclusive lock for writes (blocks all reads and writes)
rwmu.Lock()
defer rwmu.Unlock()

// Shared lock for reads (allows concurrent reads, blocks writes)
rwmu.RLock()
defer rwmu.RUnlock()
Method List
| Method | Description |
|---|---|
| Lock() | Locks the mutex. Blocks if another goroutine already holds the lock. |
| Unlock() | Unlocks the mutex. It is a run-time error to unlock a mutex that is not locked. Note that a locked Mutex is not tied to a particular goroutine: one goroutine may lock it and another may unlock it. |
| TryLock() | Attempts to acquire the lock (Go 1.18+). Returns false if the lock is unavailable. |
| RLock() | Acquires a read lock (RWMutex only). Multiple goroutines can hold a read lock simultaneously. |
| RUnlock() | Releases a read lock (RWMutex only). |
Sample Code
package main
import (
"fmt"
"sync"
)
// A counter protected by a Mutex.
type SafeCounter struct {
mu sync.Mutex
count int
}
func (c *SafeCounter) Increment() {
c.mu.Lock()
defer c.mu.Unlock() // Ensures the lock is always released.
c.count++
}
func (c *SafeCounter) Value() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.count
}
// A cache protected by an RWMutex (efficient when reads are frequent).
type Cache struct {
rwmu sync.RWMutex
data map[string]string
}
func (c *Cache) Set(key, value string) {
c.rwmu.Lock() // Write operations use an exclusive lock.
defer c.rwmu.Unlock()
c.data[key] = value
}
func (c *Cache) Get(key string) (string, bool) {
c.rwmu.RLock() // Read operations use a shared lock (multiple goroutines can read simultaneously).
defer c.rwmu.RUnlock()
v, ok := c.data[key]
return v, ok
}
func main() {
// Concurrent updates to SafeCounter
counter := &SafeCounter{}
var wg sync.WaitGroup
for i := 0; i < 1000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
counter.Increment()
}()
}
wg.Wait()
fmt.Println("Counter (should be exactly 1000):", counter.Value())
// Concurrent reads and writes to Cache
cache := &Cache{data: make(map[string]string)}
cache.Set("language", "Go")
cache.Set("version", "1.22")
if v, ok := cache.Get("language"); ok {
fmt.Println("Cache hit:", v)
}
}
Notes
Race conditions can be detected with Go's built-in race detector via the -race flag (for example, go test -race or go run -race). Because locks add contention overhead and couple goroutines through shared state, the idiomatic Go approach is often to avoid shared state altogether and pass data through channels instead. This reflects the Go proverb: "Do not communicate by sharing memory; instead, share memory by communicating."
Never copy a Mutex after first use. Always pass structs that contain a Mutex or RWMutex by pointer; go vet's copylocks check reports accidental copies. Also, forgetting to call Unlock() after Lock() will deadlock every goroutine that later tries to acquire the lock, so it is strongly recommended to use the defer mu.Unlock() pattern to guarantee the lock is always released, even if the function panics or returns early.
For the basics of goroutines, see 'goroutine'. To wait for goroutines to finish, see 'sync.WaitGroup'.