Let's first talk about Process Synchronization, and what it actually means. A cooperating process is a process that can affect or be affected by another process in the system. Cooperating processes may share a logical address space (both code and data), or share only data, for example through files. However, concurrent access to shared data can give rise to data inconsistency. This problem is solved through various mechanisms that ensure the orderly execution of cooperating processes.
Here is an example of a problem caused by improper process synchronization. Suppose that we have two processes that share a variable "fish": Process A increments it and Process B decrements it.
Process A:
fish++;
Process B:
fish--;
The operation "fish++" is typically implemented at the machine level as a sequence of instructions like this:
register1 = fish;
register1 = register1 + 1;
fish = register1;
And the operation "fish--" can be implemented like this:
register2 = fish;
register2 = register2 - 1;
fish = register2;
Now, if the two processes are allowed to execute fish++ and fish-- concurrently, the instructions may interleave like this:
register1 = fish;
register2 = fish;
register1 = register1 + 1;
register2 = register2 - 1;
fish = register2;
fish = register1;
Suppose that "fish" has a value of 3 at first. The correct result of one increment and one decrement is 3, but with this interleaving the final value depends on which store executes last: "fish" ends up as 2 if Process A's store happens first (Process B's store overwrites it), or 4 if Process B's store happens first (as in the interleaving shown above).
This problem is known as the Race Condition. In other words, a Race Condition is a situation where the final value of shared data depends on the order in which the processes accessing it execute - in particular, on which one finishes last.
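To make the race concrete, here is a minimal runnable sketch (my own, not from the post or the seminar slides) that plays the two processes with POSIX threads; the thread function names and the iteration count are arbitrary choices. Compiled with gcc -pthread, it will usually print something other than the correct final value of 0:

#include <pthread.h>
#include <stdio.h>

int fish = 0;                            //shared data, no protection at all

void *incrementer(void *arg)             //stands in for Process A
{
    for (int i = 0; i < 1000000; i++)
        fish++;                          //load, add 1, store - not atomic
    return NULL;
}

void *decrementer(void *arg)             //stands in for Process B
{
    for (int i = 0; i < 1000000; i++)
        fish--;                          //load, subtract 1, store - not atomic
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, incrementer, NULL);
    pthread_create(&b, NULL, decrementer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("fish = %d\n", fish);         //should be 0, but usually is not
    return 0;
}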
To prevent the Race Condition, we need to make sure that only one process can modify "fish" at a time. In other words, we have to make sure that such operations are performed atomically - an atomic operation is an operation that completes in its entirety without interruption (taken from Seminar slide 5-8).
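One practical way to get that atomicity - just a sketch, assuming a C11 compiler that provides <stdatomic.h> - is to declare the shared variable as atomic, so each increment or decrement becomes a single indivisible read-modify-write. This is the previous sketch with the shared variable and its updates switched to atomics; it now always prints 0:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int fish = 0;                     //shared data, now accessed atomically

void *incrementer(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        atomic_fetch_add(&fish, 1);      //fish++ as one indivisible operation
    return NULL;
}

void *decrementer(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        atomic_fetch_sub(&fish, 1);      //fish-- as one indivisible operation
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, incrementer, NULL);
    pthread_create(&b, NULL, decrementer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("fish = %d\n", atomic_load(&fish));   //always 0
    return 0;
}

Atomic instructions like this only cover a single simple operation; the rest of this post is about protecting longer stretches of code, which is what the Critical Section mechanism below is for.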
To solve the Race Condition problem, we would need to employ Process Synchronization and Coordination algorithms. In other words - The concurrent processes need to be Synchronized (taken from Seminar slide 5-12).
The first step to resolve the Race Condition is to implement a Critical Section in each process. The Critical Section is the section of a process's code that modifies shared data (such as changing common variables, updating a table, writing a file, etc.). The general rule for the Critical Section is that when one process is in its Critical Section, no other process may be executing in its own Critical Section.
Thus, the most important rule of the Critical-Section system is that execution of the processes' Critical Sections must be Mutually Exclusive.
Before a process can enter its Critical Section, it must request permission to do so. This request is implemented in the process's Entry Section. When the process leaves its Critical Section, it goes through the Exit Section. Finally, it continues into the Remainder Section, which contains the rest of the code that does not touch shared data.
From the textbook, each program looks like this:
do
{
    sectionEntry();
    sectionCritical();
    sectionExit();
    sectionRemainder();
} while (1);
In total, a critical-section system must satisfy the following three requirements:
1) Mutual Exclusion: If a process is executing in its critical section, then no other processes can be executing in their critical sections.
In other words, if a process is writing shared data, then no other process may access that data until it is done.
2) Progress: If no process is executing in its critical section, and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in the decision on which of those processes would enter its critical section next. This selection cannot be postponed indefinitely.
In other words, a process that is ready should be allowed to proceed into its Critical Section when no other process is inside one, and that decision cannot be put off forever.
3) Bounded Waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
In other words, a process cannot be permanently waiting for entry into its critical section.
We now look at how these three requirements are (or are not) met by examining algorithms designed for two processes.
Assume there are 2 processes, P0 and P1. In the first algorithm, a shared "turn" variable is used by both processes to check whether each is allowed to enter its critical section. In this case, the code looks like this:
P0:
do
{
    //Entry: Stall while it is not its turn
    while (turn == 1);
    sectionCritical();
    //Exit: Change turn to the other side
    turn = 1;
    sectionRemainder();
} while (1);
P1:
do
{
    //Entry: Stall while it is not its turn
    while (turn == 0);
    sectionCritical();
    //Exit: Change turn to the other side
    turn = 0;
    sectionRemainder();
} while (1);
As long as it is not a process's turn, that process will not enter its critical section, so Mutual Exclusion holds. However, there is a problem: if turn == 0 and P1 wants to enter its Critical Section, it must wait until P0 has gone through its own Critical Section and handed the turn over, even if P0 currently has no interest in entering at all. The two processes MUST execute their Critical Sections in strict alternation, so Progress is not met here.
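To see the Progress violation concretely, here is one possible schedule (my own illustration, using the same sectionRemainder() placeholder as above):

turn = 0;                //P1's Exit Section: P1 leaves its Critical Section and hands the turn to P0
sectionRemainder();      //P0 is busy in a long Remainder Section and does not want to enter at all
while (turn == 0);       //P1's Entry Section: P1 is ready again, but spins even though nobody is in a Critical Section

P1 can only continue after P0 eventually passes through its own Critical Section and executes turn = 1, so the two processes are locked into strict alternation.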
The problem with Algorithm 1 is that it does not record whether the other process actually wants to enter its Critical Section. So even if one process is not ready, the other has no way of knowing and must still wait for its turn. The solution is to keep a flag that stores each process's readiness. With a shared "boolean flag[2]" (both entries initially false), we have Algorithm 2:
P0:
do
{
    //Entry: Signal that the process is ready for Critical Section
    flag[0] = true;
    //Entry: Stall while another process is ready for Critical Section
    while (flag[1]);
    sectionCritical();
    //Exit: Signal that it is done
    flag[0] = false;
    sectionRemainder();
} while (1);
P1:
do
{
    //Entry: Signal that the process is ready for Critical Section
    flag[1] = true;
    //Entry: Stall while another process is ready for Critical Section
    while (flag[0]);
    sectionCritical();
    //Exit: Signal that it is done
    flag[1] = false;
    sectionRemainder();
} while (1);
In this case, even though a process now knows it can proceed when the other side is not ready, there is an interleaving in which both processes set their flags before either tests the other's (flag[0] == flag[1] == true), and both stall indefinitely. Mutual Exclusion is satisfied, but Progress is not, because of this stuck condition.
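The stuck condition is reached by an interleaving like this (my own illustration, in the same style as the fish example above):

flag[0] = true;          //P0 signals that it is ready
flag[1] = true;          //P1 signals that it is ready, before P0 tests flag[1]
while (flag[1]);         //P0 spins: flag[1] is true
while (flag[0]);         //P1 spins: flag[0] is true

Both processes now wait for each other forever; neither enters, even though nothing is inside a Critical Section.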
The third algorithm uses a combination of the first and second, with the "turn" variable being used differently. It looks like this:
P0:
do
{
    //Entry: Signal that it is ready
    flag[0] = true;
    //Entry: Give way - set turn to the other process (if both race here, the last writer waits)
    turn = 1;
    //Entry: Stall while the other side is ready AND it is the other side's turn
    while (flag[1] && turn == 1);
    sectionCritical();
    //Exit: Signal that it is done
    flag[0] = false;
    sectionRemainder();
} while (1);
P1:
do
{
    //Entry: Signal that it is ready
    flag[1] = true;
    //Entry: Give way - set turn to the other process (if both race here, the last writer waits)
    turn = 0;
    //Entry: Stall while the other side is ready AND it is the other side's turn
    while (flag[0] && turn == 0);
    sectionCritical();
    //Exit: Signal that it is done
    flag[1] = false;
    sectionRemainder();
} while (1);
In this case, even if both processes set their flags to true, the turn variable can only hold either 0 or 1. Each process sets turn to the value that would make itself wait, so if both race to set it, the last writer waits and whoever wrote first goes first. If only P0 is ready to enter, then flag[1] == false, so the while condition is false and P0 enters its critical section immediately. When both are ready, turn is either 0 or 1, so only one of the two wait conditions can be false at a time - at most one process can be inside its critical section, which gives Mutual Exclusion.
P0 will enter the critical section whenever P1 is not ready, which satisfies Progress.
Also, if both processes are ready, P0 will not indefinitely starve P1: P1 will get in after at most one critical-section execution by P0, which satisfies Bounded Waiting.
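Algorithm 3 is commonly known as Peterson's algorithm. For completeness, here is a runnable sketch of it as a small test harness (my own, not from the textbook or slides): the parameterized sectionEntry/sectionExit functions, the iteration count, and the shared counter are arbitrary choices. One caveat: the textbook argument assumes that reads and writes to flag and turn happen in program order, which plain int variables do not guarantee on modern compilers and CPUs, so the sketch uses C11 atomics (sequentially consistent by default) to keep that assumption valid.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool flag[2];                     //flag[i] == true: Pi is ready for its critical section
atomic_int turn;                         //whose turn it is to wait
int shared_counter = 0;                  //data protected by the critical section

void sectionEntry(int self)
{
    int other = 1 - self;
    atomic_store(&flag[self], true);     //signal that this process is ready
    atomic_store(&turn, other);          //give way - if both race here, the last writer waits
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                                //stall while the other side is ready and has priority
}

void sectionExit(int self)
{
    atomic_store(&flag[self], false);    //signal that this process is done
}

void *worker(void *arg)
{
    int self = *(int *)arg;
    for (int i = 0; i < 100000; i++)
    {
        sectionEntry(self);
        shared_counter++;                //critical section: modify the shared data
        sectionExit(self);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared_counter = %d (expected 200000)\n", shared_counter);
    return 0;
}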