Synchronization and Deadlocks
Lesson, slides, and applied problem sets.
Why this module exists
Concurrency is where reasoning about operating systems tends to break down. The hard part is not the mutex API; it is understanding who is waiting on whom, and whether progress is still possible. This module focuses on modeling waiting explicitly and on detecting deadlock cycles.
1) Semaphores as a primitive
A counting semaphore has two operations:
- P (wait): if the count is greater than zero, decrement it and continue; otherwise block
- V (signal): increment the count and wake one waiter, if any
Semaphores are powerful but easy to misuse: correct behavior depends on the queueing rules for blocked processes and on the order in which waiters are woken.
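As a starting point, here is a minimal sketch of the kind of simulator this module builds: a single-threaded model (the class and method names are illustrative, not a fixed API) that treats the semaphore as explicit state, a count plus a FIFO queue of blocked process ids, rather than as real threads. The V operation here uses hand-off semantics: the woken process consumes the signal directly.

```python
from collections import deque

class SimSemaphore:
    """Counting semaphore modeled as explicit state: a count plus a FIFO wait queue."""

    def __init__(self, count=1):
        self.count = count
        self.wait_queue = deque()     # ids of blocked processes, oldest first

    def P(self, pid):
        """wait: decrement the count if positive, otherwise block the caller."""
        if self.count > 0:
            self.count -= 1
            return "running"          # pid keeps running
        self.wait_queue.append(pid)   # pid transitions to blocked
        return "blocked"

    def V(self):
        """signal: increment the count and wake the oldest waiter, if any."""
        self.count += 1
        if self.wait_queue:
            self.count -= 1                   # the woken process consumes the signal
            return self.wait_queue.popleft()  # this pid transitions back to ready
        return None
```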
2) FIFO vs LIFO wakeup
The wakeup order (FIFO vs LIFO) changes fairness. Many OS designs choose FIFO for predictability, but this is a policy choice that affects latency.
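To see the policy difference, assume the SimSemaphore sketch above: three processes block on an empty semaphore, then three signals arrive.

```python
# Three processes block on a zero-count semaphore, then three signals arrive.
sem = SimSemaphore(count=0)
for pid in ("A", "B", "C"):
    sem.P(pid)                          # all three transition to blocked

print([sem.V() for _ in range(3)])      # FIFO wakeup: ['A', 'B', 'C']
# A LIFO policy (pop() instead of popleft()) would wake ['C', 'B', 'A'],
# which can starve early arrivals under a steady stream of new waiters.
```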
3) Deadlocks
A deadlock happens when there is a cycle in the wait-for graph:
- each process waits on a resource held by another
- no one can make progress
Detection requires explicit analysis of the wait-for graph, not just inspection of current semaphore counts.
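A minimal detector sketch, assuming the wait-for graph is given as an adjacency map from each process to the processes it waits on (the function name and graph encoding are illustrative):

```python
def find_cycle(wait_for):
    """Return a list of processes forming a cycle in the wait-for graph, or None.

    `wait_for` maps each process to the processes it is waiting on, e.g.
    {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}.  Standard depth-first search
    with three states: unvisited, on the current path, fully explored.
    """
    state = {}                                    # missing key means unvisited

    def dfs(p, path):
        state[p] = "on_path"
        path.append(p)
        for q in wait_for.get(p, []):
            if state.get(q) == "on_path":         # back edge: cycle found
                return path[path.index(q):]
            if q not in state:
                cycle = dfs(q, path)
                if cycle:
                    return cycle
        path.pop()
        state[p] = "done"
        return None

    for p in list(wait_for):
        if p not in state:
            cycle = dfs(p, [])
            if cycle:
                return cycle
    return None

# P1 waits on P2, P2 on P3, P3 on P1: a deadlock cycle.
print(find_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # ['P1', 'P2', 'P3']
print(find_cycle({"P1": ["P2"], "P2": []}))                     # None
```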
4) Prevention vs detection
- Prevention: impose a global ordering on resource acquisition so that cycles cannot form
- Detection: allow waiting to proceed freely, then detect cycles and break them
We focus on detection to make the graph structure explicit.
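For contrast, prevention by ordering can be sketched in a few lines (the lock array and helper functions are hypothetical): if every process acquires locks in ascending index order, no cycle of "holds A, waits for B" edges can ever form.

```python
import threading

# Hypothetical shared resources, identified by index.
locks = [threading.Lock() for _ in range(3)]

def acquire_in_order(needed):
    """Acquire the locks at the given indices, always lowest index first."""
    order = sorted(needed)
    for i in order:
        locks[i].acquire()
    return order                      # release in reverse of this order when done

def release(order):
    for i in reversed(order):
        locks[i].release()
```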
What you will build
- A semaphore simulator with FIFO wakeup
- A deadlock detector for wait-for graphs
Key takeaways
- Blocking and wakeup are state transitions, not just API calls.
- Deadlocks are graph cycles.
- Fairness is a policy, not a guarantee.