CPU Scheduling

Lesson, slides, and applied problem sets.


Lesson


Why this module exists

Scheduling decides who runs next. It is where fairness, latency, and throughput trade off against each other. A correct scheduler is not just a policy choice; it must also obey precise invariants and handle edge cases like arrival order, time slices, and context switch overhead.


1) Metrics

Common metrics for schedulers:

  • Turnaround time: completion time - arrival time
  • Waiting time: time in READY state
  • Response time: time to first CPU execution
  • CPU utilization: fraction of time doing useful work

You cannot optimize all four at once. Different workloads prefer different tradeoffs: interactive workloads favor response time, while batch workloads favor throughput and turnaround.
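The metric definitions above can be sketched in code. This is a minimal illustration, assuming a run-to-completion (FCFS) schedule; the `Task` fields and function name are illustrative, not part of the lesson:

```python
# Sketch: computing turnaround, waiting, and response time for a simple
# run-to-completion schedule. Task fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    arrival: int   # when the task enters the READY state
    burst: int     # CPU time the task needs

def fcfs_metrics(tasks):
    """Run tasks to completion in arrival order; return per-task metrics."""
    time = 0
    results = []
    for t in sorted(tasks, key=lambda t: t.arrival):
        time = max(time, t.arrival)   # CPU may sit idle until the task arrives
        start = time                  # first time this task gets the CPU
        time += t.burst               # runs to completion, no preemption
        results.append({
            "turnaround": time - t.arrival,   # completion time - arrival time
            "waiting": start - t.arrival,     # time spent in READY
            "response": start - t.arrival,    # equals waiting under FCFS
        })
    return results

print(fcfs_metrics([Task(0, 5), Task(1, 3)]))
```

Note that waiting and response time coincide here only because FCFS never preempts; under round robin they diverge.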


2) Round robin (RR)

Round robin cycles through ready tasks with a fixed quantum. Compared to running each task to completion:

  • fairness improves: every ready task gets CPU time at regular intervals
  • response time improves: no task waits behind another task's full burst
  • context switch overhead increases: every expired quantum costs a preemption

RR is the simplest preemptive scheduler and a baseline for understanding time-slicing.
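The core loop can be sketched in a few lines. This is a simplified model, assuming all tasks arrive at time 0 and context switches are free; the function name and interface are illustrative:

```python
# Sketch of a round robin loop with a fixed quantum. Assumptions: all
# tasks are ready at time 0 and context switches cost nothing.
from collections import deque

def round_robin(bursts, quantum):
    """Return the completion time of each task, indexed like `bursts`."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))   # task indices in FIFO order
    time = 0
    completion = [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # partial slice if nearly done
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                # preempted: back of the queue
        else:
            completion[i] = time           # finished within this slice
    return completion

print(round_robin([5, 3, 2], quantum=2))   # → [10, 9, 6]
```

Even this toy version shows the time-slicing effect: the shortest task finishes first despite being last in the queue, which is exactly what improves response time.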


3) Context switch overhead

Every preemption costs time that does no useful work. If the quantum is too small, overhead dominates and utilization drops. If it is too large, latency increases.


4) Arrival and idle time

Schedulers must handle tasks arriving while the CPU is idle, or while another task is running. These details change completion times and are a common source of bugs.
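One such detail can be sketched directly: when the ready queue is empty but tasks are still due to arrive, the simulation clock must jump forward to the next arrival rather than dispatch nothing. The function and variable names here are illustrative:

```python
# Sketch of the idle-CPU bookkeeping described above: with an empty
# ready queue, the clock jumps to the next arrival. Names are illustrative.
def next_dispatch_time(time, ready, future_arrivals):
    """Advance the clock past an idle gap when no task is READY."""
    if not ready and future_arrivals:
        return max(time, min(future_arrivals))   # CPU idles until next arrival
    return time                                   # a task is ready: no gap

print(next_dispatch_time(4, [], [7, 9]))    # idle gap: clock jumps to 7
print(next_dispatch_time(4, ["t1"], [7]))   # task ready: clock stays at 4
```

Forgetting this jump, or applying it while a task is still ready, silently shifts every subsequent completion time, which is why this is such a common source of bugs.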


What you will build

  • A round robin simulator that computes completion times and total elapsed time.

Key takeaways

  • Scheduling is about explicit tradeoffs, not just fairness.
  • Context switch costs matter in real systems.
  • Correctness requires precise handling of arrival order and idle CPU time.
