UNIT-III
Synchronization Tools: The Critical
Section Problem, Peterson’s Solution, Mutex Locks, Semaphores, Monitors,
Classic Problems of Synchronization. Deadlocks: System Model, Deadlock
Characterization, Methods for Handling Deadlocks, Deadlock Prevention, Deadlock
Avoidance, Deadlock Detection, Recovery from Deadlock.
SYNCHRONIZATION TOOLS:
Process Synchronization
is a mechanism in operating systems used to manage the execution of multiple
processes that access shared resources. Its main purpose is to ensure data
consistency, prevent race conditions and avoid deadlocks in a multi-process
environment.
On the basis of
synchronization, processes are categorized as one of the following
two types:
- Independent Process: The execution of one
process does not affect the execution of other processes.
- Cooperative Process: A process that can
affect or be affected by other processes executing in the system.
Process Synchronization
is the coordination of multiple cooperating processes in a system to ensure
controlled access to shared resources, thereby preventing race conditions and
other synchronization problems.
Improper synchronization in an Inter-Process Communication (IPC) environment leads to the following problems:
- Inconsistency: Occurs when two or more processes access shared data at the same time without proper synchronization. This can lead to conflicting changes, where one process's update is overwritten by another, making the data unreliable and incorrect.
- Loss of Data: Occurs when multiple processes try to write to or modify the same shared resource without coordination. If one process overwrites the data before another process finishes, important information can be lost, leaving the data incomplete or corrupted.
- Deadlock: Without proper synchronization, two or more processes can get stuck, each waiting for the other to release a resource. Because none of the processes can continue, the system becomes unresponsive and none of them can complete their tasks.
Role of Synchronization
in IPC
- Preventing Race Conditions: Ensures
processes don’t access shared data at the same time, avoiding inconsistent
results.
- Mutual Exclusion: Allows only one
process in the critical section at a time.
- Process Coordination: Lets processes
wait for specific conditions (e.g., producer-consumer).
- Deadlock Prevention: Avoids circular
waits and indefinite blocking by using proper resource handling.
- Safe Communication: Ensures
data/messages between processes are sent, received and processed in order.
- Fairness: Prevents starvation by
giving all processes fair access to resources.
Types of Process
Synchronization
The two primary types of process synchronization in an operating system are:
- Competitive: Two or more processes are in competitive synchronization if they compete for access to a shared resource.
Lack of proper synchronization among competing processes may lead to either inconsistency or data loss.
- Cooperative: Two or more processes are in cooperative synchronization if they affect each other, i.e., the execution of one process affects the other.
Lack of proper synchronization among cooperating processes may lead to deadlock.
A critical section is a part of a program where shared
resources (like memory, files, or variables) are accessed by multiple processes
or threads. To avoid problems such as race conditions and data inconsistency,
only one process/thread should execute the critical section at a time using
synchronization techniques. This ensures that operations on shared resources
are performed safely and predictably.
Structure of a Critical Section
1. Entry Section
- The
process requests permission to enter the critical section.
- Synchronization
tools (e.g., mutex, semaphore) are used to control access.
2. Critical Section: The actual code where shared resources are accessed or
modified.
3. Exit
Section: The process releases the lock or semaphore, allowing other
processes to enter the critical section.
4. Remainder Section: The rest of the
program that does not involve shared resource access.
Critical Section Problem
Shared Resources and Race Conditions
- Shared
resources include memory, global variables, files, and databases.
- A race
condition occurs when two or more processes attempt to update
shared data at the same time, leading to unexpected results. Example: Two
bank transactions modifying the same account balance simultaneously without
synchronization may lead to incorrect final balance.
It could be visualized using the naive lock-variable pseudo-code below (note that testing and setting the flag is not atomic, so this simple attempt alone does not guarantee mutual exclusion):
do {
    while (flag == 1);   // entry section: busy-wait until the flag is clear
    flag = 1;            // claim the critical section
    // critical section
    flag = 0;            // exit section: release the flag
    // remainder section
} while (true);
Requirements of a Solution
A good critical section solution must ensure:
- Correctness - Shared data should remain consistent.
- Efficiency - Should minimize waiting and maximize CPU
utilization.
- Fairness - No process should be unfairly delayed or
starved.
Requirements of Critical Section Solutions
1. Mutual Exclusion
- At
most one process can be inside the critical section at a time.
- Prevents
conflicts by ensuring no two processes update the shared resource
simultaneously.
2. Progress
- If
no process is in the critical section, and some processes want to enter,
the choice of who enters next should not be postponed indefinitely.
- Ensures
that the system continues to make progress rather than getting stuck.
3. Bounded Waiting
- There
must be a limit on how long a process waits before it gets a chance to
enter the critical section.
- Prevents starvation,
where one process is repeatedly bypassed while others get to execute.
Example Use Case: Older
operating systems or embedded systems where simplicity and reliability outweigh
responsiveness.
Solution to the Critical Section Problem:
A simple solution to the critical section problem can be sketched as shown below:
acquireLock();
// process the critical section
releaseLock();
A thread must acquire the lock prior to executing a critical section, and the lock can be held by only one thread at a time. There are various ways to implement the lock used in the above pseudo-code; one of them is sketched next.
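A common implementation is a spinlock built on an atomic test-and-set operation. Below is a minimal sketch using C11 atomics; the names acquireLock and releaseLock simply mirror the pseudo-code above and are otherwise illustrative.

#include <stdatomic.h>

/* A minimal spinlock sketch (illustrative, C11 atomics). */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquireLock(void)
{
    /* Atomic test-and-set: keep spinning until the flag was previously clear. */
    while (atomic_flag_test_and_set(&lock_flag))
        ;   /* busy-wait */
}

void releaseLock(void)
{
    atomic_flag_clear(&lock_flag);   /* mark the lock as free again */
}

A spinlock busy-waits, so it suits only very short critical sections; operating-system mutexes instead put the waiting thread to sleep, as described in the Mutex Locks section below.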
Examples of critical sections in real-world
applications
Banking System (ATM or Online Banking)
- Critical
Section: Updating an
account balance during a deposit or withdrawal.
- Issue
if not handled: Two
simultaneous withdrawals could result in an incorrect final balance due to
race conditions.
Ticket Booking System (Airlines, Movies, Trains)
- Critical
Section: Reserving
the last available seat.
- Issue
if not handled: Two users
may be shown the same available seat and both may book it, leading to
overbooking.
Print Spooler in a Networked Printer
- Critical
Section: Sending
print jobs to the printer queue.
- Issue
if not handled: Print jobs
may get mixed up or skipped if multiple users send jobs simultaneously.
File Editing in Shared Documents (e.g., Google Docs,
MS Word with shared access)
- Critical
Section: Saving or
writing to the shared document.
- Issue
if not handled: Simultaneous
edits could lead to conflicting versions or data loss.
PETERSON’S SOLUTION
Peterson’s Algorithm is a classic software-based
solution for the critical section problem in operating systems. It ensures
mutual exclusion between two processes, meaning only one process can access a
shared resource at a time, thus preventing race conditions.
The algorithm uses two shared variables:
- flag[i]: shows
whether process i wants to enter the critical section.
- turn: indicates whose turn
it is to enter if both processes want to access the critical section at
the same time.
The Algorithm
For process Pi:
do {
flag[i] = true; // Pi wants to enter
turn = j; // Give turn to Pj
while (flag[j] && turn == j); // Wait if Pj also wants to enter
// Critical Section
flag[i] = false; // Pi leaves critical section
// Remainder Section
} while (true);
For process Pj:
do {
flag[j] = true;
turn = i;
while (flag[i] && turn == i);
// Critical Section
flag[j] = false;
// Remainder Section
} while (true);
Step-by-Step Explanation
- Intent to Enter: A process sets its flag to true when it wants to enter the
critical section.
- Turn Assignment: It sets the turn variable to the other process, giving the
other process the chance to enter first if it also wants to.
- Waiting Condition: A process waits if the other process also wants to enter and
it is the other’s turn.
- Critical Section: Once the condition is false, the process enters the critical
section safely.
- Exit: On leaving, the
process resets its flag to false, allowing the other process to proceed.
The algorithm guarantees:
- Mutual Exclusion: Only one process enters the critical section at a time.
- Progress: A process will eventually enter the critical section if no other process is inside.
- Bounded Waiting: No process waits indefinitely.
Example Use Cases
Peterson’s Algorithm can be used (theoretically)
in:
- Accessing a shared printer: Peterson's solution ensures that only one process can access the
printer at a time when two processes are trying to print documents.
- Reading and writing to a shared file: It can be used when two processes need to read from and write to
the same file, preventing concurrent access issues.
- Competing for a shared resource: When two processes are competing for a limited resource, such as
a network connection or critical hardware, Peterson’s solution ensures
mutual exclusion to avoid conflicts.
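The pseudocode above can be turned into a compilable two-thread demonstration. The sketch below uses C11 atomics for the shared flag and turn variables, because on modern hardware plain variables can be reordered by the compiler or CPU and break mutual exclusion; the shared counter and thread setup are only illustrative.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool flag[2];   /* flag[i]: process i wants to enter */
atomic_int  turn;      /* whose turn it is when both want to enter */
long counter = 0;      /* shared data protected by the algorithm */

void *worker(void *arg)
{
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);        /* Pi wants to enter */
        atomic_store(&turn, j);              /* give the turn to Pj */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                /* busy-wait */
        counter++;                           /* critical section */
        atomic_store(&flag[i], false);       /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}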
MUTEX LOCKS
A Mutex (short
for Mutual Exclusion) lock is a fundamental synchronization primitive
used in operating systems to manage access to shared resources by multiple
threads or processes. It ensures that only one thread can access a specific
"critical section" of code at a time, preventing race conditions and
data corruption.
Key Characteristics
- Ownership: A mutex has a strict concept of ownership. Only the thread that successfully "locks" the mutex is permitted to "unlock" it.
- Binary State: It operates like a Boolean flag with two states: locked (unavailable) and unlocked (available).
- Blocking: If a thread attempts to acquire a mutex that is already held by another thread, it is typically put to sleep (blocked) by the OS until the lock becomes available.
How
Mutex Locks Work
The
typical lifecycle of a mutex involves four main steps:
- Initialization: The mutex is
created and set to an initial "unlocked" state.
- Acquisition
(Lock): Before
entering a critical section, a thread calls a lock function (e.g., pthread_mutex_lock).
If available, the thread gains ownership; if not, it waits.
- Critical Section: The owning
thread performs its operations on the shared resource exclusively.
- Release (Unlock): Once
finished, the thread calls an unlock function (e.g., pthread_mutex_unlock),
which may wake up one of the waiting threads to take over.
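These four steps map directly onto the POSIX threads API. The sketch below is a minimal illustration; the shared counter it protects is assumed only for demonstration.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* 1. initialization */
long shared_counter = 0;

void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* 2. acquisition: blocks if already held */
        shared_counter++;              /* 3. critical section */
        pthread_mutex_unlock(&lock);   /* 4. release: may wake a waiting thread */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %ld\n", shared_counter);  /* always 200000 */
    pthread_mutex_destroy(&lock);
    return 0;
}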
Types
of Mutexes
Different
scenarios require specific mutex behaviors:
- Recursive Mutex: Allows the
same thread to acquire the same lock multiple times without deadlocking
itself.
- Error-Checking
Mutex: Returns
an error if a thread tries to lock a mutex it already owns or unlock one
it doesn't own.
- Timed Mutex: Allows a
thread to try acquiring a lock for a specific duration before giving up.
- Priority
Inheritance Mutex: Temporarily raises a low-priority thread's priority if it
holds a lock needed by a high-priority thread to prevent "priority
inversion".
Mutex
vs. Semaphore
While
both are synchronization tools, they serve different primary purposes:
- Mechanism: Mutex is
a locking mechanism; Semaphore is a signaling mechanism.
- Ownership: Mutexes have
owners; Semaphores do not (any thread can signal a semaphore).
- Capacity: A mutex
allows only one thread at a time; a counting semaphore
can allow multiple threads (up to a defined limit).
Common
Issues
- Deadlock: Occurs when two threads are each waiting for a lock held by the other, causing both to be blocked indefinitely (a minimal sketch appears after this list).
- Starvation: Happens when
a thread is perpetually denied access to a lock because other threads keep
jumping ahead or holding it.
- Priority
Inversion: A high-priority thread is forced to wait for a low-priority
thread that holds a required lock.
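The deadlock case described above can be reproduced with two mutexes acquired in opposite orders. The sketch below is illustrative; the lock and thread names are assumptions.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

/* Thread 1 takes A then B; thread 2 takes B then A.
 * If each grabs its first lock before the other releases,
 * both block forever: a classic deadlock. */
void *thread1(void *arg)
{
    pthread_mutex_lock(&lockA);
    pthread_mutex_lock(&lockB);   /* may wait forever for thread 2 */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return NULL;
}

void *thread2(void *arg)
{
    pthread_mutex_lock(&lockB);
    pthread_mutex_lock(&lockA);   /* may wait forever for thread 1 */
    pthread_mutex_unlock(&lockA);
    pthread_mutex_unlock(&lockB);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);   /* the program may hang here */
    pthread_join(t2, NULL);
    printf("no deadlock this run\n");
    return 0;
}

The standard fix is to make every thread acquire the locks in the same global order (here, always lockA before lockB), which removes the circular wait.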
SEMAPHORES
A semaphore is a
synchronization primitive used to manage concurrent access to shared resources
by multiple processes. It is essentially a protected integer variable that acts
as a signal to coordinate process execution and prevent race conditions.
Core Operations
Semaphores are accessed only through two standard
atomic operations, originally proposed by Edsger Dijkstra:
- Wait (P Operation): Also known as down(). This operation decrements the
semaphore value. If the resulting value is negative, the process is blocked
until the resource becomes available; otherwise the process continues.
- Signal (V Operation): Also known as up(). This operation increments the
semaphore value. If there are processes blocked and waiting on that
semaphore, one of them is unblocked.
Types of Semaphores
- Binary Semaphores: These can only take the values 0 or 1.
They are primarily used to provide mutual exclusion (mutex), ensuring only
one process can enter a critical section at a time.
- Counting Semaphores: These can take any non-negative integer value. They are used
to manage access to a resource pool with multiple identical instances
(e.g., a set of available printers). The initial value usually represents
the total number of available resources.
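As a concrete illustration of wait()/signal() on a counting semaphore, the sketch below uses the POSIX semaphore API (sem_wait corresponds to P/wait and sem_post to V/signal) to let at most three threads use a hypothetical pool of three printers at a time; the thread count and messages are only illustrative.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t printers;        /* counting semaphore guarding a pool of 3 printers */

void *use_printer(void *arg)
{
    sem_wait(&printers);   /* P / wait(): decrement, block if no printer free */
    printf("thread %ld is printing\n", (long)arg);
    sem_post(&printers);   /* V / signal(): increment, wake a waiting thread */
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    sem_init(&printers, 0, 3);               /* 3 identical resources available */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, use_printer, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&printers);
    return 0;
}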
Key Benefits
- Mutual Exclusion: Prevents multiple processes from accessing
the same critical section simultaneously.
- Resource Management: Efficiently tracks the number of
available units for shared resources.
- Process Synchronization: Coordinates the order of execution
between dependent processes to ensure data consistency.
- No Busy Waiting: Modern implementations (using block and
wakeup) avoid wasting CPU cycles by putting waiting processes into a sleep
state.
MONITORS
Monitor in an operating
system is one method for achieving process synchronization. Programming
languages help the monitor to accomplish mutual exclusion between different
activities in a system. wait() and notify() constructs are
synchronization functions that are available in the Java programming language.
What is a Monitor in OS?
Monitors are a programming language component that
aids in the regulation of shared data access. The Monitor is a
package that contains shared data structures, operations, and
synchronization between concurrent procedure calls. Therefore, a monitor is
also known as a synchronization tool. Java, C#, Visual Basic, Ada, and concurrent Euclid are among
some of the languages that allow the use of monitors. Processes operating
outside the monitor can't access the monitor's internal variables, but they can
call the monitor's procedures.
For example, synchronization methods like
the wait() and notify() constructs are available in the
Java programming language.
Syntax of Monitor in OS
Monitor in OS has a simple syntax similar to how
we define a class, it is as follows:
Monitor monitorName{
variables_declaration;
condition_variables;
procedure p1{ ... };
procedure p2{ ... };
...
procedure pn{ ... };
{
initializing_code;
}
}
Monitor in an operating system is simply a class containing
variable_declarations, condition_variables, various procedures (functions), and
an initializing_code block
that is used for process synchronization.
Characteristics of Monitors in OS
A monitor in OS has the following characteristics:
- We can only run one program
at a time inside the monitor.
- Monitors in an operating
system are defined as a group of methods and fields that are combined with
a special type of package in the OS.
- A program cannot access the
monitor's internal variable if it is running outside the monitor. However,
a program can call the monitor's functions.
- Monitors were created to
make synchronization problems less complicated.
- Monitors provide a high
level of synchronization between processes.
Components of Monitor in an Operating System
The monitor is made up of four primary parts:
- Initialization: The code for initialization is included in the package, and
we just need it once when creating the monitors.
- Private Data: It is a feature of the monitor in an operating system to make
the data private. It holds all of the monitor's secret data, which
includes private functions that may only be utilized within the monitor.
As a result, private fields and functions are not visible outside of the
monitor.
- Monitor Procedure: Procedures or functions that can be invoked from outside of
the monitor are known as monitor procedures.
- Monitor Entry Queue: Another important component of the monitor is the monitor
entry queue. It holds the threads (callers of the monitor procedures) that are
waiting for the monitor to become free.
Condition Variables
There are two sorts of operations we can perform on the monitor's
condition variables:
- Wait
- Signal
Consider a condition variable (y) is declared in the monitor:
y.wait(): The activity/process that
applies the wait operation on a condition variable will be suspended, and the
suspended process is located in the condition variable's block queue.
y.signal(): If an activity/process
applies the signal action on the condition variable, then one of the blocked
activity/processes in the monitor is given a chance to execute.
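C has no built-in monitor construct, but the same discipline (at most one procedure active inside the monitor, plus wait/signal on a condition variable) is commonly approximated with a pthread mutex and condition variable. The sketch below is a hedged illustration; the CounterMonitor type and its procedures are assumptions, not a standard API.

#include <pthread.h>

/* A monitor-style object: the mutex makes at most one "procedure" active
 * inside it, and the condition variable plays the role of y with
 * y.wait() / y.signal(). */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  not_zero;   /* condition variable */
    int value;
} CounterMonitor;

void counter_init(CounterMonitor *m)            /* initializing_code */
{
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->not_zero, NULL);
    m->value = 0;
}

void counter_increment(CounterMonitor *m)       /* monitor procedure */
{
    pthread_mutex_lock(&m->lock);               /* enter the monitor */
    m->value++;
    pthread_cond_signal(&m->not_zero);          /* y.signal() */
    pthread_mutex_unlock(&m->lock);             /* leave the monitor */
}

void counter_decrement(CounterMonitor *m)       /* monitor procedure */
{
    pthread_mutex_lock(&m->lock);
    while (m->value == 0)
        pthread_cond_wait(&m->not_zero, &m->lock);   /* y.wait() */
    m->value--;
    pthread_mutex_unlock(&m->lock);
}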
CLASSIC PROBLEMS OF
SYNCHRONIZATION.
Classic synchronization problems are standard
problems used to:
✔ understand
process synchronization
✔ test synchronization tools like
semaphores, mutex, monitors
✔ avoid race conditions,
deadlocks, starvation
The four
major classic problems are:
- Producer–Consumer Problem
- Dining Philosophers Problem
- Readers–Writers Problem
- Sleeping Barber Problem
1. PRODUCER–CONSUMER PROBLEM
(Also called Bounded-Buffer Problem)
✔ Concept
- Producer creates items and places them in a buffer.
- Consumer removes items from the buffer.
- If the buffer is full,
producer must wait.
- If the buffer is empty,
consumer must wait.
✔ Shared Resources
- Finite-size buffer (N items) → Critical Section
- Must avoid overwriting or consuming empty buffer
✔ Semaphores Used
| Semaphore        | Purpose                   |
|------------------|---------------------------|
| mutex (binary)   | To enter critical section |
| empty (counting) | Number of empty slots     |
| full (counting)  | Number of filled slots    |
⭐ Solution Using Semaphores
Producer
do {
produce_item();
wait(empty);
wait(mutex);
insert_item(); // critical section
signal(mutex);
signal(full);
} while(true);
Consumer
do {
wait(full);
wait(mutex);
remove_item(); // critical section
signal(mutex);
signal(empty);
consume_item();
} while(true);
Issues Avoided
- Deadlock
- Race condition
- Overwriting
- Consuming from empty buffer
The producer-consumer problem arises when multiple
threads or processes attempt to share a common buffer or data structure.
Producers produce items and place them in the buffer, while consumers retrieve
items from the buffer and process them.
The challenge lies in coordinating the producers
and consumers efficiently to avoid problems like data inconsistency. This buffer
is known as the critical section. So we need a synchronization method so that
when the producer is producing the data, the consumer shouldn't interfere in
between or vice versa.
The other challenge that can arise is the producer
should not insert data when the buffer is full i.e., buffer overflow condition.
Similarly, the consumer must not remove data when the buffer is empty i.e.,
buffer underflow. As we have been given limited slots buffer where we need to
synchronize, it is also known as the bounded buffer problem.
We use both types of semaphores i.e., binary
semaphore and counting semaphore to get to the solution. So, we take mutex as
m. A mutex is a binary semaphore used to acquire the lock on the buffer.
Next, we choose an empty semaphore which is a
counting semaphore whose initial value is n. It is used to track empty slots.
We take full as another counting semaphore that tracks filled slots. Its
initial value would be 0 at time = 0.
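The semaphore pseudocode above translates almost directly into POSIX threads and semaphores. The sketch below assumes a buffer of N = 5 integer slots and a fixed number of items; those details are only illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                      /* buffer size */

int buffer[N];
int in = 0, out = 0;             /* next slot to fill / empty */

sem_t empty_slots;               /* counts empty slots, starts at N */
sem_t full_slots;                /* counts filled slots, starts at 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;   /* the binary "mutex" */

void *producer(void *arg)
{
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);              /* wait(empty)  */
        pthread_mutex_lock(&mutex);          /* wait(mutex)  */
        buffer[in] = item;                   /* insert_item() -- critical section */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);        /* signal(mutex) */
        sem_post(&full_slots);               /* signal(full)  */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int k = 0; k < 10; k++) {
        sem_wait(&full_slots);               /* wait(full)   */
        pthread_mutex_lock(&mutex);          /* wait(mutex)  */
        int item = buffer[out];              /* remove_item() -- critical section */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);        /* signal(mutex) */
        sem_post(&empty_slots);              /* signal(empty) */
        printf("consumed %d\n", item);       /* consume_item() */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}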
Example in Real World
An example of the producer-consumer problem in
real life can be found in a restaurant scenario, particularly in the context of
order processing:
- Producers: In this case, the producers are the restaurant's
kitchen staff. They are responsible for preparing dishes (items) that
customers have ordered.
- Shared Buffer: The shared buffer is represented by the
restaurant's order queue or order ticket system. When a customer places an
order, it is added to this queue.
- Consumers: The consumers are the restaurant's waitstaff or
servers. They retrieve orders from the queue and deliver them to the
respective tables.
Solution Approach
Here is the step-by-step approach to solving the
producer-consumer problem:
- The mutex here will help us to create mutually exclusive relations
between producer and consumer.
- Whenever the consumer tries to retrieve data while the buffer is empty, the full semaphore (whose value is 0) cannot be decremented, so the consumer goes into the waiting stage.
- Then, the producer thread checks whether any slot is empty; it calls the wait() function and the value of empty decreases by 1.
- The producer then reaches the critical section, so initially we are ensuring that only the producer is accessing the slots.
- After inserting the item, it increases the value of the full semaphore by calling the signal() function.
- Due to this, the consumer is unblocked from the waiting stage and is able to access the slots.
- If all the slots get filled, the empty semaphore becomes 0, and hence the producer goes to the waiting stage.
2. DINING PHILOSOPHERS PROBLEM
✔ Concept
Five philosophers sit at a round table:
- Think
- Eat
- To eat → need two chopsticks (left + right)
Chopsticks are shared resources.
✔ Problem
If all philosophers pick up their left chopstick
at the same time →
deadlock!
No philosopher can pick the right chopstick → all wait forever.
✔ Goal
Avoid:
- Deadlock
- Starvation
- Livelock
✔
Solutions
1. Semaphore-Based Solution
Each chopstick is a semaphore (value = 1).
But careful ordering is needed.
2. Allow Only 4 Philosophers at a Time
wait(room); // room = 4
wait(chopstick[i]);
wait(chopstick[(i+1)%5]);
eat();
signal(chopstick[i]);
signal(chopstick[(i+1)%5]);
signal(room);
3. Odd philosopher picks left first, even
picks right first
Breaks circular wait → no deadlock.
Consider two processes P1 and P2 executing
simultaneously, while trying to access the same resource R1, this raises
the question of who will get the resource and when. This problem is solved
using process synchronization.
The act of synchronizing process execution so that no two processes
simultaneously access the same shared data and resources is referred to as
process synchronization in operating systems.
It's particularly critical in a multi-process
system where multiple processes are executing at the same time and trying to
access the very same shared resource or data.
This could lead to discrepancies in data sharing.
As a result, modifications implemented by one process may or may not be
reflected when the other processes access the same shared data. The processes must
be synchronized with one another to avoid data inconsistency.
And the Dining Philosophers Problem is a
typical example of limitations in process synchronization in systems with
multiple processes and limited resources. According to the Dining
Philosopher Problem, assume there are K philosophers seated around a
circular table, each with one chopstick between them. This means, that a
philosopher can eat only if he/she can pick up both chopsticks next to him/her.
One of the adjacent philosophers may pick up one of the chopsticks, but not both.
For example, let’s
consider P0, P1, P2, P3, and P4 as the
philosophers or processes and C0, C1, C2, C3,
and C4 as the 5 chopsticks or resources between each
philosopher. Now if P0 wants to eat, both resources/chopsticks C0 and C1 must
be free, which would leave P1 and P4 void of the resource
and the process wouldn't be executed, which indicates there are limited
resources(C0,C1..) for multiple processes(P0, P1..), and this problem is
known as the Dining Philosopher Problem.
The Solution of the Dining
Philosophers Problem
The solution to this process synchronization problem is semaphores. A semaphore
is an integer variable used in solving the critical section problem.
The critical section is a segment of the
program that allows you to access the shared variables or resources. In a
critical section, an atomic action (independently running process) is needed,
which means that only a single process can run in that section at a
time.
A semaphore has two atomic operations: wait() and signal(). If the value of its
argument S is positive, wait() decrements it; wait() is used to acquire the
resource on entry, and the caller must wait as long as S is not positive.
signal() increments the value of its argument S; it is used to release the
resource once the critical section has been executed, at exit.
Here's a simple explanation of the solution:
void philosopher(int x)
{
    while (1)
    {
        // Section where the philosopher is thinking

        // Pick up both chopsticks (shared resources)
        wait(chopstick[x]);
        wait(chopstick[(x + 1) % 5]);

        // Section where the philosopher is eating (critical section)

        // Put down both chopsticks
        signal(chopstick[x]);
        signal(chopstick[(x + 1) % 5]);
    }
}
Explanation:
- The wait() operations are executed when the philosopher wants to eat: the semaphores chopstick[x] and chopstick[(x + 1) % 5] are acquired, so the two shared chopsticks are in use.
- After eating, the signal() operations indicate that the philosopher is no longer using the resources and is thinking again: the semaphores chopstick[x] and chopstick[(x + 1) % 5] are released.
To model the Dining Philosophers
Problem in a C program we will create an array of philosophers
(processes) and an array of chopsticks (resources). We will initialize the
array of chopsticks with locks to ensure mutual exclusion is satisfied inside
the critical section.
We will run the array of philosophers in parallel
to execute the critical section (dine ()), the critical section consists
of thinking, acquiring two chopsticks, eating and then releasing the
chopsticks.
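A hedged sketch of such a program is shown below. It uses one pthread mutex per chopstick and the asymmetric ordering from solution 3 (odd philosophers pick up the left chopstick first, even philosophers the right one) to break the circular wait; the round count and printed messages are only illustrative.

#include <pthread.h>
#include <stdio.h>

#define PHILOSOPHERS 5

pthread_mutex_t chopstick[PHILOSOPHERS];   /* one lock per shared chopstick */

void *dine(void *arg)
{
    int i = *(int *)arg;
    int left = i, right = (i + 1) % PHILOSOPHERS;
    /* Asymmetric ordering breaks the circular wait (no deadlock). */
    int first  = (i % 2 == 0) ? right : left;
    int second = (i % 2 == 0) ? left  : right;

    for (int round = 0; round < 3; round++) {
        /* think */
        pthread_mutex_lock(&chopstick[first]);     /* acquire first chopstick  */
        pthread_mutex_lock(&chopstick[second]);    /* acquire second chopstick */
        printf("philosopher %d is eating\n", i);   /* eat -- critical section  */
        pthread_mutex_unlock(&chopstick[second]);  /* release chopsticks       */
        pthread_mutex_unlock(&chopstick[first]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[PHILOSOPHERS];
    int id[PHILOSOPHERS];
    for (int i = 0; i < PHILOSOPHERS; i++)
        pthread_mutex_init(&chopstick[i], NULL);
    for (int i = 0; i < PHILOSOPHERS; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, dine, &id[i]);
    }
    for (int i = 0; i < PHILOSOPHERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}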
3. READERS–WRITERS PROBLEM
✔ Concept
A shared database/file
is accessed by:
✔ Readers → Only read, no modification
✔ Writers → Update (write) data
Rules
- Multiple readers can read at the same time
- Writers need exclusive
access
- No reader should read while a writer is writing
Two Types of
Readers–Writers Problems
⭐ First Readers-Writers Problem
“Readers
are given priority”
- No reader should wait unless a writer is writing.
- Writer may starve if readers keep coming.
⭐ Second Readers-Writers Problem
“Writers
are given priority”
- Once a writer is ready, no new readers should start.
- Prevents writer starvation.
⭐ Semaphores
| Semaphore  | Purpose                   |
|------------|---------------------------|
| mutex      | Protect read_count        |
| rw_mutex   | Lock database for writers |
| read_count | Number of readers reading |
⭐ First Readers–Writers Solution
Reader
wait(mutex);
read_count++;
if (read_count == 1)
wait(rw_mutex);
signal(mutex);
read database;
wait(mutex);
read_count--;
if (read_count == 0)
signal(rw_mutex);
signal(mutex);
Writer
wait(rw_mutex);
write database;
signal(rw_mutex);
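The first readers–writers solution above can be compiled with POSIX primitives. In the sketch below, rw_mutex is a POSIX semaphore, read_count is protected by a pthread mutex, and the shared database is represented by a single integer; the thread counts are only illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t rw_mutex;                                      /* writers' exclusive lock */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;   /* protects read_count     */
int read_count = 0;
int database = 0;                                    /* the shared data */

void *reader(void *arg)
{
    pthread_mutex_lock(&mutex);
    read_count++;
    if (read_count == 1)
        sem_wait(&rw_mutex);        /* first reader locks out writers */
    pthread_mutex_unlock(&mutex);

    printf("reader sees %d\n", database);   /* read database */

    pthread_mutex_lock(&mutex);
    read_count--;
    if (read_count == 0)
        sem_post(&rw_mutex);        /* last reader lets writers in */
    pthread_mutex_unlock(&mutex);
    return NULL;
}

void *writer(void *arg)
{
    sem_wait(&rw_mutex);            /* exclusive access */
    database++;                     /* write database   */
    sem_post(&rw_mutex);
    return NULL;
}

int main(void)
{
    pthread_t r[3], w;
    sem_init(&rw_mutex, 0, 1);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++)
        pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(r[i], NULL);
    return 0;
}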
4. SLEEPING BARBER PROBLEM
✔ Concept
A barbershop has:
- 1 barber
- 1 barber chair
- N waiting chairs
✔ Rules
- If no customers → barber sleeps
- If customer arrives:
- If barber is sleeping → wake up
- If waiting chairs are free → sit and wait
- If no chairs → customer leaves
- Barber serves customers one by one
✔ Semaphores Used
| Semaphore | Meaning                                |
|-----------|----------------------------------------|
| customers | Number of customers waiting            |
| barber    | Barber is ready                        |
| mutex     | Protect shared variables, chairs count |
⭐ Solution
Customer
wait(mutex);
if(chairs > 0) {
chairs--;
signal(customers);
signal(mutex);
wait(barber);
get_haircut();
}
else {
signal(mutex);
leave_shop();
}
Barber
while(true) {
wait(customers);
wait(mutex);
chairs++;
signal(barber);
signal(mutex);
cut_hair();
}
The Sleeping Barber Problem is a classic process synchronization problem
that demonstrates how to manage concurrent processes. It models a barbershop
scenario with a barber chair and a waiting room, where synchronization
mechanisms like semaphores are used to manage access. The rules ensure that
only one customer occupies the barber chair at a time, the barber works
whenever a customer is present, and sleeps when no customers are waiting.
Problem statement
1. One barber, one barber chair, and N waiting chairs.
2. If no customer, the barber sleeps in his chair.
3. A customer who arrives:
- Wakes the barber if he is
sleeping.
- Waits if there are free chairs.
- Leaves if no chairs are
available.
The challenge is to safely synchronize customers and the barber so that:
- Only one customer is in the
barber’s chair at a time.
- Customers either wait or
leave when seats are full.
- The barber works when
customers are available and sleeps otherwise.
We use two semaphores and a mutex:
- Customers counts waiting
customers.
- Barber signals when the
barber is ready.
- Seats (a mutex) ensures mutual
exclusion when accessing the free-seat count.
Algorithm
Customer
- Tries to sit (if free seats
> 0).
- If seated, takes a chair and
signals the barber.
- If no seat is free, leaves.
Barber
- Sleeps if no customers.
- Wakes up when signaled.
- Serves one customer at a
time.
Pseudocode
Semaphore Customers = 0;
Semaphore Barber = 0;
Mutex Seats = 1;
int FreeSeats = N;
Barber {
while(true) {
down(Customers); // wait for customer
down(Seats);
FreeSeats++; // one waiting chair becomes free
up(Barber); // barber ready
up(Seats);
// cut hair
}
}
Customer {
while(true) {
down(Seats);
if(FreeSeats > 0) {
FreeSeats--; // take a seat
up(Customers); // notify barber
up(Seats);
down(Barber); // wait for barber
// get haircut
} else {
up(Seats); // leave if no seat
}
}
}
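The pseudocode can be compiled almost verbatim with POSIX threads and semaphores. In the sketch below, the number of waiting chairs, the number of customer threads, and the printf placeholders for cutting and getting a haircut are only illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 3                    /* waiting chairs */

sem_t customers;               /* number of customers waiting   */
sem_t barber;                  /* barber ready to cut hair      */
pthread_mutex_t seats = PTHREAD_MUTEX_INITIALIZER;
int free_seats = N;

void *barber_thread(void *arg)
{
    for (;;) {
        sem_wait(&customers);          /* sleep until a customer arrives */
        pthread_mutex_lock(&seats);
        free_seats++;                  /* a waiting chair becomes free   */
        sem_post(&barber);             /* barber is ready                */
        pthread_mutex_unlock(&seats);
        printf("barber: cutting hair\n");
    }
    return NULL;
}

void *customer_thread(void *arg)
{
    pthread_mutex_lock(&seats);
    if (free_seats > 0) {
        free_seats--;                  /* take a seat        */
        sem_post(&customers);          /* notify the barber  */
        pthread_mutex_unlock(&seats);
        sem_wait(&barber);             /* wait for the barber */
        printf("customer %ld: getting haircut\n", (long)arg);
    } else {
        pthread_mutex_unlock(&seats);
        printf("customer %ld: leaves, no free chair\n", (long)arg);
    }
    return NULL;
}

int main(void)
{
    pthread_t b, c[5];
    sem_init(&customers, 0, 0);
    sem_init(&barber, 0, 0);
    pthread_create(&b, NULL, barber_thread, NULL);
    for (long i = 0; i < 5; i++)
        pthread_create(&c[i], NULL, customer_thread, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(c[i], NULL);
    return 0;   /* the barber thread loops forever; the process exits here */
}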
| Problem             | Goal                          | Issue                       | Tools Used           |
|---------------------|-------------------------------|-----------------------------|----------------------|
| Producer–Consumer   | Coordinate buffer access      | Overfill, underflow         | Semaphores           |
| Dining Philosophers | Resource sharing around table | Deadlock, starvation        | Semaphores, ordering |
| Readers–Writers     | Read/write database safely    | Starvation                  | Semaphores           |
| Sleeping Barber     | Manage waiting customers      | Starvation, synchronization | Semaphores           |