Go Concurrency Patterns (Deep Dive)
Lesson, slides, and applied problem sets.
This module focuses on structured concurrency, backpressure, safe publication, and contention control. The goal is to build systems that stay correct and fast under real load, not just pass toy examples.
Lesson 1: Structured Concurrency and Ownership
Rule: Every goroutine must have an owner and a shutdown path.
Why it matters:
- Leaked goroutines are a slow memory leak.
- Unbounded background work becomes invisible load.
Pattern (errgroup):
ctx, cancel := context.WithCancel(parent)
defer cancel()
g, ctx := errgroup.WithContext(ctx)
g.Go(func() error { return workerA(ctx) })
g.Go(func() error { return workerB(ctx) })
if err := g.Wait(); err != nil {
return err
}
Anti‑pattern (fire‑and‑forget):
func Handle(req *Request) {
go doWork(req) // no owner, no cancellation, no wait
}
Fix: return a result, or attach it to a context or a parent goroutine.
Lesson 2: Cancellation that actually stops work
Cancellation only works if goroutines observe it.
Checklist:
- Check ctx.Done() in loops.
- On a blocking send or receive, include ctx.Done() in the select.
- When fan-out stops, always drain or stop downstream stages.
Pattern (send with cancellation):
select {
case out <- item:
case <-ctx.Done():
return ctx.Err()
}
Lesson 3: Backpressure and Load Shedding
Backpressure is how you keep latency stable under load.
Techniques:
- Bounded queues: make(chan Job, N)
- Semaphore channel: limit in-flight work
- Drop or shed when full (explicitly!)
Bounded queue with drop:
select {
case jobs <- j:
// accepted
default:
// shed load (count it)
}
Semaphore (limit concurrency):
sem := make(chan struct{}, max)
acquire := func() { sem <- struct{}{} }
release := func() { <-sem }
Lesson 4: Worker Pools that don’t leak
Pools should be bounded and shut down cleanly.
Pattern:
jobs := make(chan Job, 1024)
results := make(chan Result, 1024)
wg := sync.WaitGroup{}
for i := 0; i < workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for j := range jobs {
results <- handle(j)
}
}()
}
go func() {
wg.Wait()
close(results)
}()
Key: the producer closes jobs, and exactly one goroutine ever closes a given channel; here, only the waiter goroutine closes results.
Lesson 5: Fan‑out / Fan‑in with ordering
If you need ordering, encode it explicitly.
Pattern:
type item struct { idx int; val int }
// workers send item{idx, result}
// aggregator collects and sorts or places by idx
Rule: a single aggregator goroutine is the only writer to the output slice.
Lesson 6: Safe Publication and Memory Visibility
The Go memory model requires happens‑before for visibility.
Safe publication patterns:
- Close a channel after init
- Protect with a mutex
- Use atomic.Value for immutable snapshots
Example (atomic swap):
var cfg atomic.Value
cfg.Store(loadConfig())
// readers:
current := cfg.Load().(Config)
Lesson 7: Contention Hotspots
Contended locks kill scalability.
Strategies:
- Shard state: N maps + N locks
- Batch updates: reduce lock frequency
- Immutable + swap: update a copy, then atomically replace
Lesson 8: Timers and Timeouts (leak traps)
time.After allocates a fresh timer on every call; in a loop those timers pile up, and before Go 1.23 they could not be collected until they fired.
Prefer:
t := time.NewTimer(d)
defer t.Stop()
for {
select {
case <-t.C:
// timer fired and its channel is drained: Reset is safe here
case <-ctx.Done():
return
}
t.Reset(d)
}
Lesson 9: sync.Pool and buffer reuse
sync.Pool is not a cache. The GC can drop pooled items at any time.
Guidelines:
- Use it for short‑lived, alloc‑heavy objects.
- Keep buffers reasonably sized (don’t pool huge slices).
Lesson 10: When not to use concurrency
- CPU‑bound and small: concurrency can slow you down.
- Heavy synchronization: serial may be faster and simpler.
- Complexity risk: correctness beats parallelism.
Lesson 11: Testing concurrency
- Always run with -race.
- Stress with -count=100 and randomized delays.
- Add cancellation tests to ensure goroutines exit.