Semaphores: The Traffic Light of Concurrency
March 18, 2026 · 5 min read
What a semaphore actually is, when to use it over a mutex, and how it prevents resource exhaustion in real systems.
A semaphore is a concurrency primitive that controls access to a shared resource by maintaining a counter. When the counter hits zero, any goroutine or thread trying to acquire it blocks until another releases it. Simple idea — surprisingly powerful in practice.
Counting vs Binary Semaphores
A binary semaphore (value 0 or 1) behaves like a mutex. A counting semaphore can be initialized to any N — meaning up to N concurrent holders. This is the key distinction: a mutex is ownership-based (exactly one thread owns it at a time), while a semaphore is signal-based (no concept of ownership).
The Classic Use Case: Connection Pools
Say your database accepts 20 connections. You initialize a semaphore to 20. Every time a worker wants a connection, it acquires the semaphore (decrement). When done, it releases (increment). The 21st worker simply waits. No crashes, no rejected connections.
use tokio::sync::Semaphore;
use std::sync::Arc;

let sem = Arc::new(Semaphore::new(20)); // max 20 concurrent DB connections

// inside an async worker task:
let permit = sem.acquire().await.unwrap(); // decrements the count; blocks at zero
// do work with the connection
drop(permit); // release — next waiter unblocks
Semaphore vs Mutex: When to Use Which
- Use a mutex when protecting shared mutable state — exactly one owner at a time
- Use a semaphore when rate-limiting access — N concurrent holders is fine
- Semaphores are great for throttling outbound API calls or limiting parallelism
- Mutexes carry ownership semantics; semaphores are purely about signaling
- A semaphore can be released by a different thread than the one that acquired it — a mutex cannot
Producer-Consumer Coordination
Two semaphores can elegantly solve the producer-consumer problem. One tracks empty slots (starts at buffer size), one tracks filled slots (starts at 0). Producers wait on empty, signal filled. Consumers do the opposite. No busy-waiting, no missed signals.