UNIT-III
Synchronization Tools: The Critical Section Problem, Peterson’s Solution, Mutex Locks, Semaphores, Monitors, Classic Problems of Synchronization. Deadlocks: System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, Recovery from Deadlock.
SYNCHRONIZATION TOOLS:
Process Synchronization is a mechanism in operating systems used to manage the execution of multiple processes that access shared resources. Its main purpose is to ensure data consistency, prevent race conditions, and avoid deadlocks in a multi-process environment.
On the basis of synchronization, processes are categorized as one of the following two types:
- Independent Process: The execution of one
process does not affect the execution of other processes.
- Cooperative Process: A process that can
affect or be affected by other processes executing in the system.
Process Synchronization is the coordination of multiple cooperating processes in a system to ensure controlled access to shared resources, thereby preventing race conditions and other synchronization problems.
Improper synchronization in an inter-process communication environment leads to the following problems:
- Inconsistency: When two or more
processes access shared data at the same time without proper
synchronization. This can lead to conflicting changes, where one process’s
update is overwritten by another, causing the data to become unreliable
and incorrect.
- Loss of Data: Loss of data occurs
when multiple processes try to write or modify the same shared resource
without coordination. If one process overwrites the data before another
process finishes, important information can be lost, leading to incomplete
or corrupted data.
- Deadlock: Lack of proper synchronization can lead to deadlock, in which two or more processes get stuck, each waiting for the other to release a resource. Because none of the processes can continue, the system becomes unresponsive and none of them can complete their tasks.
Role of Synchronization in IPC
- Preventing Race Conditions: Ensures
processes don’t access shared data at the same time, avoiding inconsistent
results.
- Mutual Exclusion: Allows only one
process in the critical section at a time.
- Process Coordination: Lets processes
wait for specific conditions (e.g., producer-consumer).
- Deadlock Prevention: Avoids circular
waits and indefinite blocking by using proper resource handling.
- Safe Communication: Ensures
data/messages between processes are sent, received and processed in order.
- Fairness: Prevents starvation by
giving all processes fair access to resources.
Types of Process Synchronization
The two primary types of process synchronization in an operating system are:
- Competitive: Two or more processes are said to be in competitive synchronization if and only if they compete for access to a shared resource.
Lack of proper synchronization among competing processes may lead to either inconsistency or data loss.
- Cooperative: Two or more processes are said to be in cooperative synchronization if and only if they affect each other, i.e., the execution of one process affects the other.
Lack of proper synchronization among cooperating processes may lead to deadlock.
A critical section is a part of a program where shared
resources (like memory, files, or variables) are accessed by multiple processes
or threads. To avoid problems such as race conditions and data inconsistency,
only one process/thread should execute the critical section at a time using
synchronization techniques. This ensures that operations on shared resources
are performed safely and predictably.
Structure of a Critical Section
1. Entry Section
- The
process requests permission to enter the critical section.
- Synchronization
tools (e.g., mutex, semaphore) are used to control access.
2. Critical Section: The actual code where shared resources are accessed or
modified.
3. Exit
Section: The process releases the lock or semaphore, allowing other
processes to enter the critical section.
4. Remainder Section: The rest of the
program that does not involve shared resource access.
Critical Section Problem
Shared Resources and Race Conditions
- Shared
resources include memory, global variables, files, and databases.
- A race
condition occurs when two or more processes attempt to update
shared data at the same time, leading to unexpected results. Example: Two
bank transactions modifying the same account balance simultaneously without
synchronization may lead to incorrect final balance.
A naive attempt can be visualized using the pseudo-code below. Each process busy-waits on a shared flag in its entry section, claims the flag, and clears it on exit. This simple scheme does not actually guarantee mutual exclusion: both processes may see the flag as clear at the same time and enter together, which is exactly the race condition a correct solution must prevent.
do {
    while (flag);   // entry section: busy-wait until flag is clear
    flag = 1;       // claim the critical section
    // critical section
    flag = 0;       // exit section: release the flag
    // remainder section
} while (true);
Requirements of a Solution
A good critical section solution must ensure:
- Correctness - Shared data should remain consistent.
- Efficiency - Should minimize waiting and maximize CPU
utilization.
- Fairness - No process should be unfairly delayed or
starved.
Requirements of Critical Section Solutions
1. Mutual Exclusion
- At
most one process can be inside the critical section at a time.
- Prevents
conflicts by ensuring no two processes update the shared resource
simultaneously.
2. Progress
- If
no process is in the critical section, and some processes want to enter,
the choice of who enters next should not be postponed indefinitely.
- Ensures
that the system continues to make progress rather than getting stuck.
3. Bounded Waiting
- There
must be a limit on how long a process waits before it gets a chance to
enter the critical section.
- Prevents starvation,
where one process is repeatedly bypassed while others get to execute.
Example Use Case: Older
operating systems or embedded systems where simplicity and reliability outweigh
responsiveness.
Solution to the Critical Section Problem:
A simple solution to the critical section problem can be sketched as shown below:
acquireLock();
Process Critical Section
releaseLock();
A thread must acquire a lock prior to executing a
critical section. The lock can be acquired by only one thread. There are
various ways to implement locks in the above pseudo-code.
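One possible way, sketched below under the assumption of a C11 compiler with <stdatomic.h>, is a simple spinlock: acquireLock() repeatedly performs an atomic test-and-set until it succeeds, and releaseLock() clears the flag. The function names simply mirror the pseudo-code above; real systems usually prefer blocking locks such as the mutexes described later.
/* Minimal spinlock sketch (assumes C11 atomics). */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void acquireLock(void)
{
    /* Spin until the previous value of the flag was clear,
       i.e., no other thread is inside its critical section. */
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy-wait */
}

void releaseLock(void)
{
    atomic_flag_clear(&lock);   /* leave the critical section */
}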
Examples of critical sections in real-world applications
Banking System (ATM or Online Banking)
- Critical
Section: Updating an
account balance during a deposit or withdrawal.
- Issue
if not handled: Two
simultaneous withdrawals could result in an incorrect final balance due to
race conditions.
Ticket Booking System (Airlines, Movies, Trains)
- Critical
Section: Reserving
the last available seat.
- Issue
if not handled: Two users
may be shown the same available seat and both may book it, leading to
overbooking.
Print Spooler in a Networked Printer
- Critical
Section: Sending
print jobs to the printer queue.
- Issue
if not handled: Print jobs
may get mixed up or skipped if multiple users send jobs simultaneously.
File Editing in Shared Documents (e.g., Google Docs,
MS Word with shared access)
- Critical
Section: Saving or
writing to the shared document.
- Issue
if not handled: Simultaneous
edits could lead to conflicting versions or data loss.
PETERSON’S SOLUTION
Peterson’s Algorithm is a classic software-based
solution for the critical section problem in operating systems. It ensures
mutual exclusion between two processes, meaning only one process can access a
shared resource at a time, thus preventing race conditions.
The algorithm uses two shared variables:
- flag[i]: shows
whether process i wants to enter the critical section.
- turn: indicates whose turn
it is to enter if both processes want to access the critical section at
the same time.
The Algorithm
For process Pi:
do {
flag[i] = true; // Pi wants to enter
turn = j; // Give turn to Pj
while (flag[j] && turn == j); // Wait if Pj also wants to enter
// Critical Section
flag[i] = false; // Pi leaves critical section
// Remainder Section
} while (true);
For process Pj:
do {
flag[j] = true;
turn = i;
while (flag[i] && turn == i);
// Critical Section
flag[j] = false;
// Remainder Section
} while (true);
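The pseudo-code above can be turned into a small runnable sketch. The C program below is illustrative (the enter/leave/worker names and the shared counter are ours, not part of the original algorithm); it uses C11 sequentially consistent atomics because, on modern hardware with compiler reordering and weak memory models, plain shared variables are not enough for Peterson’s algorithm to behave correctly.
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];          /* flag[i]: Pi wants to enter */
static atomic_int  turn;             /* whose turn it is */
static long counter = 0;             /* the shared resource */

static void enter(int i)
{
    int j = 1 - i;
    atomic_store(&flag[i], true);    /* Pi wants to enter */
    atomic_store(&turn, j);          /* give the turn to Pj */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                            /* wait while Pj wants in and it is Pj's turn */
}

static void leave(int i)
{
    atomic_store(&flag[i], false);   /* Pi leaves the critical section */
}

static void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        enter(i);
        counter++;                   /* critical section */
        leave(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}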
Step-by-Step Explanation
- Intent to Enter: A process sets its flag to true when it wants to enter the
critical section.
- Turn Assignment: It sets the turn variable to the other process, giving the
other process the chance to enter first if it also wants to.
- Waiting Condition: A process waits if the other process also wants to enter and
it is the other’s turn.
- Critical Section: Once the condition is false, the process enters the critical
section safely.
- Exit: On leaving, the
process resets its flag to false, allowing the other process to proceed.
Peterson’s algorithm provides these guarantees:
- Mutual Exclusion: Only one process enters the critical section at a time.
- Progress: A process will eventually enter the critical section if no other process is inside.
- Bounded Waiting: No process waits indefinitely.
Example Use Cases
Peterson’s Algorithm can be used (theoretically)
in:
- Accessing a shared printer: Peterson's solution ensures that only one process can access the
printer at a time when two processes are trying to print documents.
- Reading and writing to a shared file: It can be used when two processes need to read from and write to
the same file, preventing concurrent access issues.
- Competing for a shared resource: When two processes are competing for a limited resource, such as
a network connection or critical hardware, Peterson’s solution ensures
mutual exclusion to avoid conflicts.
MUTEX LOCKS
A Mutex (short
for Mutual Exclusion) lock is a fundamental synchronization primitive
used in operating systems to manage access to shared resources by multiple
threads or processes. It ensures that only one thread can access a specific
"critical section" of code at a time, preventing race conditions and
data corruption.
Key Characteristics
- Ownership: A mutex has a strict concept of ownership. Only the thread that successfully "locks" the mutex is permitted to "unlock" it.
- Binary State: It operates like a Boolean flag with two states: locked (unavailable) and unlocked (available).
- Blocking: If a thread attempts to acquire a mutex that is already held by another thread, it is typically put to sleep (blocked) by the OS until the lock becomes available.
How Mutex Locks Work
The typical lifecycle of a mutex involves four main steps:
- Initialization: The mutex is
created and set to an initial "unlocked" state.
- Acquisition
(Lock): Before
entering a critical section, a thread calls a lock function (e.g., pthread_mutex_lock).
If available, the thread gains ownership; if not, it waits.
- Critical Section: The owning
thread performs its operations on the shared resource exclusively.
- Release (Unlock): Once
finished, the thread calls an unlock function (e.g., pthread_mutex_unlock),
which may wake up one of the waiting threads to take over.
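A minimal sketch of this lifecycle using POSIX threads is given below; the deposit() function and the shared balance are illustrative assumptions, while pthread_mutex_lock and pthread_mutex_unlock are the real Pthreads calls mentioned above.
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;  /* 1. initialization */
static int shared_balance = 0;                          /* shared resource */

static void *deposit(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);      /* 2. acquisition: blocks if already held */
        shared_balance++;            /* 3. critical section */
        pthread_mutex_unlock(&m);    /* 4. release: may wake a waiting thread */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %d (expected 200000)\n", shared_balance);
    return 0;
}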
Types of Mutexes
Different scenarios require specific mutex behaviors:
- Recursive Mutex: Allows the
same thread to acquire the same lock multiple times without deadlocking
itself.
- Error-Checking
Mutex: Returns
an error if a thread tries to lock a mutex it already owns or unlock one
it doesn't own.
- Timed Mutex: Allows a
thread to try acquiring a lock for a specific duration before giving up.
- Priority
Inheritance Mutex: Temporarily raises a low-priority thread's priority if it
holds a lock needed by a high-priority thread to prevent "priority
inversion".
Mutex vs. Semaphore
While both are synchronization tools, they serve different primary purposes:
- Mechanism: Mutex is
a locking mechanism; Semaphore is a signaling mechanism.
- Ownership: Mutexes have
owners; Semaphores do not (any thread can signal a semaphore).
- Capacity: A mutex
allows only one thread at a time; a counting semaphore
can allow multiple threads (up to a defined limit).
Common Issues
- Deadlock: Occurs when
two threads are each waiting for a lock held by the other, causing both to
be blocked indefinitely.
- Starvation: Happens when
a thread is perpetually denied access to a lock because other threads keep
jumping ahead or holding it.
- Priority
Inversion: A high-priority thread is forced to wait for a low-priority
thread that holds a required lock.
SEMAPHORES
A semaphore is a
synchronization primitive used to manage concurrent access to shared resources
by multiple processes. It is essentially a protected integer variable that acts
as a signal to coordinate process execution and prevent race conditions.
Core Operations
Semaphores are accessed only through two standard
atomic operations, originally proposed by Edsger Dijkstra:
- Wait (P Operation): Also known as down(). If the semaphore value is greater than zero, this operation decrements it and the process continues; if the value is zero, the process is blocked until the resource becomes available.
- Signal (V Operation): Also known as up(). This operation increments the
semaphore value. If there are processes blocked and waiting on that
semaphore, one of them is unblocked.
Types of Semaphores
- Binary Semaphores: These can only take the values 0 or 1.
They are primarily used to provide mutual exclusion (mutex), ensuring only
one process can enter a critical section at a time.
- Counting Semaphores: These can take any non-negative integer value. They are used
to manage access to a resource pool with multiple identical instances
(e.g., a set of available printers). The initial value usually represents
the total number of available resources.
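A small sketch of the printer-pool idea above, using POSIX semaphores (sem_t), is shown below; the pool size, thread count, and function names are illustrative assumptions. A counting semaphore initialized to 3 lets at most three of the five jobs "print" at the same time.
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t printers;               /* counting semaphore: available printers */

static void *job(void *arg)
{
    long id = (long)arg;
    sem_wait(&printers);             /* P / wait: take a printer or block */
    printf("job %ld is printing\n", id);
    sleep(1);                        /* use the resource */
    printf("job %ld is done\n", id);
    sem_post(&printers);             /* V / signal: return the printer */
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    sem_init(&printers, 0, 3);       /* initial value = number of printers */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, job, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&printers);
    return 0;
}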
Key Benefits
- Mutual Exclusion: Prevents multiple processes from accessing
the same critical section simultaneously.
- Resource Management: Efficiently tracks the number of
available units for shared resources.
- Process Synchronization: Coordinates the order of execution
between dependent processes to ensure data consistency.
- No Busy Waiting: Modern implementations (using block and
wakeup) avoid wasting CPU cycles by putting waiting processes into a sleep
state.
MONITORS
A monitor is one method for achieving process synchronization in an operating system. It is provided as a programming-language construct, and the language implementation is what enforces mutual exclusion between the different activities that use it.
What is a Monitor in OS?
Monitors are a programming-language component that aids in regulating access to shared data. A monitor is a package that bundles shared data structures, the operations on them, and the synchronization between concurrent procedure calls; for this reason a monitor is also known as a synchronization tool. Java, C#, Visual Basic, Ada, and Concurrent Euclid are among the languages that support monitors. Processes operating outside the monitor cannot access the monitor's internal variables, but they can call the monitor's procedures.
For example, synchronization methods like
the wait() and notify() constructs are available in the
Java programming language.
Syntax of Monitor in OS
Monitor in OS has a simple syntax similar to how
we define a class, it is as follows:
Monitor monitorName {
    variables_declaration;
    condition_variables;
    procedure p1 { ... };
    procedure p2 { ... };
    ...
    procedure pn { ... };
    {
        initializing_code;
    }
}
Monitor in an operating system is simply a class containing
variable_declarations, condition_variables, various procedures (functions), and
an initializing_code block
that is used for process synchronization.
Characteristics of Monitors in OS
A monitor in OS has the following characteristics:
- Only one process can be active inside the monitor at a time.
- Monitors in an operating system are defined as a group of procedures and fields combined into a special kind of package in the OS.
- A program running outside the monitor cannot access the monitor's internal variables; however, it can call the monitor's procedures.
- Monitors were created to make synchronization problems less complicated.
- Monitors provide a high level of synchronization between processes.
Components of Monitor in an Operating System
The monitor is made up of four primary parts:
- Initialization: The code for initialization is included in the package, and
we just need it once when creating the monitors.
- Private Data: It is a feature of the monitor in an operating system to make
the data private. It holds all of the monitor's secret data, which
includes private functions that may only be utilized within the monitor.
As a result, private fields and functions are not visible outside of the
monitor.
- Monitor Procedure: Procedures or functions that can be invoked from outside of
the monitor are known as monitor procedures.
- Monitor Entry Queue: Another important component of the monitor is the monitor entry queue. It holds all of the threads (processes) that are waiting to enter the monitor, i.e., to execute one of its procedures.
Condition Variables
There are two sorts of operations we can perform on the monitor's
condition variables:
- Wait
- Signal
Suppose a condition variable y is declared in the monitor:
y.wait(): The activity/process that
applies the wait operation on a condition variable will be suspended, and the
suspended process is located in the condition variable's block queue.
y.signal(): If an activity/process
applies the signal action on the condition variable, then one of the blocked
activity/processes in the monitor is given a chance to execute.
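C has no built-in monitor construct, but the same effect is commonly emulated by pairing a mutex (for the one-process-at-a-time rule) with a POSIX condition variable, whose pthread_cond_wait() and pthread_cond_signal() play the roles of y.wait() and y.signal(). The bounded counter below is a hedged, illustrative sketch; the type and function names are ours, not a standard API.
#include <pthread.h>

#define MAX 10

typedef struct {
    pthread_mutex_t lock;     /* enforces mutual exclusion inside the "monitor" */
    pthread_cond_t  not_max;  /* condition variable: "counter is below MAX" */
    int value;
} counter_monitor;

void monitor_init(counter_monitor *m)          /* initialization code */
{
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->not_max, NULL);
    m->value = 0;
}

void monitor_increment(counter_monitor *m)     /* monitor procedure */
{
    pthread_mutex_lock(&m->lock);              /* enter the monitor */
    while (m->value == MAX)
        pthread_cond_wait(&m->not_max, &m->lock);   /* like y.wait() */
    m->value++;
    pthread_mutex_unlock(&m->lock);            /* leave the monitor */
}

void monitor_decrement(counter_monitor *m)     /* monitor procedure */
{
    pthread_mutex_lock(&m->lock);
    m->value--;                                /* underflow not checked, for brevity */
    pthread_cond_signal(&m->not_max);          /* like y.signal() */
    pthread_mutex_unlock(&m->lock);
}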
CLASSIC PROBLEMS OF SYNCHRONIZATION.
Classic synchronization problems are standard
problems used to:
✔ understand
process synchronization
✔ test synchronization tools like
semaphores, mutex, monitors
✔ avoid race conditions,
deadlocks, starvation
The four
major classic problems are:
- Producer–Consumer Problem
- Dining Philosophers Problem
- Readers–Writers Problem
- Sleeping Barber Problem
1. PRODUCER–CONSUMER PROBLEM
(Also called Bounded-Buffer Problem)
✔ Concept
- Producer creates items and places them in a buffer.
- Consumer removes items from the buffer.
- If the buffer is full,
producer must wait.
- If the buffer is empty,
consumer must wait.
✔ Shared Resources
- Finite-size buffer (N items) → Critical Section
- Must avoid overwriting or consuming empty buffer
✔ Semaphores Used
| Semaphore | Purpose |
|---|---|
| mutex (binary) | To enter critical section |
| empty (counting) | Number of empty slots |
| full (counting) | Number of filled slots |
⭐ Solution Using Semaphores
Producer
do {
produce_item();
wait(empty);
wait(mutex);
insert_item(); // critical section
signal(mutex);
signal(full);
} while(true);
Consumer
do {
wait(full);
wait(mutex);
remove_item(); // critical section
signal(mutex);
signal(empty);
consume_item();
} while(true);
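A runnable sketch of the same scheme using POSIX semaphores and a pthread mutex is shown below; the buffer size, item count, and helper names (empty_slots, full_slots, and so on) are illustrative assumptions rather than part of the classic pseudo-code.
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N     5        /* buffer size */
#define ITEMS 20       /* items to transfer */

static int buffer[N];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots;                    /* counting semaphores */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 1; item <= ITEMS; item++) {
        sem_wait(&empty_slots);            /* wait(empty)   */
        pthread_mutex_lock(&mutex);        /* wait(mutex)   */
        buffer[in] = item;                 /* insert_item() */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);      /* signal(mutex) */
        sem_post(&full_slots);             /* signal(full)  */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);             /* wait(full)    */
        pthread_mutex_lock(&mutex);        /* wait(mutex)   */
        int item = buffer[out];            /* remove_item() */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);      /* signal(mutex) */
        sem_post(&empty_slots);            /* signal(empty) */
        printf("consumed %d\n", item);     /* consume_item()*/
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);          /* N empty slots initially */
    sem_init(&full_slots, 0, 0);           /* no filled slots yet */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}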
Issues Avoided
- Deadlock
- Race condition
- Overwriting
- Consuming from empty buffer
2. DINING PHILOSOPHERS PROBLEM
✔ Concept
Five philosophers sit at a round table:
- Think
- Eat
- To eat → need two chopsticks (left + right)
Chopsticks are shared resources.
✔ Problem
If all philosophers pick up their left chopstick
at the same time →
deadlock!
No philosopher can pick the right chopstick → all wait forever.
✔ Goal
Avoid:
- Deadlock
- Starvation
- Livelock
✔ Solutions
1. Semaphore-Based Solution
Each chopstick is a semaphore (value = 1).
But careful ordering is needed.
2. Allow Only 4 Philosophers at a Time
wait(room); // room = 4
wait(chopstick[i]);
wait(chopstick[(i+1)%5]);
eat();
signal(chopstick[i]);
signal(chopstick[(i+1)%5]);
signal(room);
3. Odd philosopher picks left first, even
picks right first
Breaks circular wait → no deadlock.
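Below is a small runnable sketch of solution 2 above (at most four philosophers compete for chopsticks at once), assuming POSIX threads and semaphores; the round count, sleep calls, and names are illustrative.
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define PHILOSOPHERS 5

static sem_t chopstick[PHILOSOPHERS];     /* binary semaphores, one per chopstick */
static sem_t room;                        /* counting semaphore, initial value 4  */

static void *philosopher(void *arg)
{
    long i = (long)arg;
    for (int round = 0; round < 3; round++) {
        /* think(); */
        sem_wait(&room);                              /* wait(room)             */
        sem_wait(&chopstick[i]);                      /* wait(chopstick[i])     */
        sem_wait(&chopstick[(i + 1) % PHILOSOPHERS]); /* wait(chopstick[i+1])   */
        printf("philosopher %ld is eating\n", i);     /* eat()                  */
        sleep(1);
        sem_post(&chopstick[i]);                      /* signal(chopstick[i])   */
        sem_post(&chopstick[(i + 1) % PHILOSOPHERS]); /* signal(chopstick[i+1]) */
        sem_post(&room);                              /* signal(room)           */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[PHILOSOPHERS];
    sem_init(&room, 0, PHILOSOPHERS - 1);             /* room = 4 */
    for (int i = 0; i < PHILOSOPHERS; i++)
        sem_init(&chopstick[i], 0, 1);                /* each chopstick = 1 */
    for (long i = 0; i < PHILOSOPHERS; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < PHILOSOPHERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}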