
Wednesday, February 23, 2011

CAOS 08

Single-Processor Scheduling

Background

Before we begin, let's discuss why we need Process Scheduling at all. We learned in lecture that a single-processor system can only execute a single process at any given time. Once a process has finished a certain task, it typically has to wait for some I/O before it can carry out further tasks. In a non-Multiprogrammed system, the CPU is idle during this wait and precious clock cycles are wasted.

A Multiprogrammed Operating System keeps the CPU optimally utilized through Process Scheduling, which allows multiple processes to be queued in memory. The basic idea is that once a process has to wait for I/O, the CPU switches to another process in memory. This is repeated for all processes. This way, multiple tasks can be completed quickly with minimal CPU idling.

A process alternates between two states: CPU Burst and I/O Burst. When a process is executing instructions, it is said to be in a CPU Burst (also known as an Execution Cycle). When the process needs to wait for I/O, it is in an I/O Burst. An I/O-bound program such as a software calculator has many short CPU Bursts and long I/O Bursts. A CPU-bound program such as a video converter has longer CPU Bursts and shorter I/O Bursts. This categorization is important when selecting process Scheduling Algorithms.

Processes line up in a queue known as the Ready Queue to await execution. When the CPU becomes idle, the next process to be executed is determined by the CPU Scheduler (also known as the Short-Term Scheduler). The Dispatcher is the module that stops one process and starts a new one. In other words, it gives control of the CPU to the process selected by the CPU Scheduler. The time it takes to switch processes is known as the Dispatch Latency.

Scheduler Types

Schedulers can be categorized into two broad types: Cooperative and Preemptive. Cooperative Schedulers cooperate with the processes: they never take the initiative to pause a running process in favor of another task. Instead, they wait for the process to either invoke an I/O request or switch to a wait state. Preemption, by contrast, means temporarily disrupting a task in favor of one with higher priority.

In a Preemptive Scheduler, the CPU responds to interrupts such as when a calculation is done, when a user clicks on a button or when a disk write is completed. Operating systems starting from Windows 95 and Mac OS X all make use of Preemptive Scheduling. Preemptive Scheduling, however, requires multiple mechanisms to ensure integrity of shared data and is thus harder to implement.

Scheduling Criteria

Scheduling Algorithms make use of Criteria to select the next process. Understanding these criteria allows us to pick an algorithm that fits the application; a short worked example follows the list. The criteria used include:

-CPU utilization - How busy the CPU is kept. Higher is better.

-Throughput - Processes completed per time unit. Higher is better.

-Turnaround time - The time it takes to execute a process. Lower is better.

-Waiting time - The time a process spends waiting in the ready queue. Lower is better.

-Response time - The time between the request and the first response (need not be the complete response). Lower is better.
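
To make these concrete, here is a small worked example (the numbers are my own illustration, not from lecture). Suppose a process arrives in the ready queue at time 0, first gets the CPU at time 4, needs 6 time units of CPU in total, and completes at time 12. Its Response Time is 4 (arrival to first response), its Turnaround Time is 12 (arrival to completion), and its Waiting Time is 12 - 6 = 6 (turnaround minus actual CPU time).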

Scheduling Algorithms (Single Processor)

First-Come, First-Served Scheduling

Now we're ready to talk about the different Scheduling Algorithms. The first is First-Come-First-Served (FCFS), also known as First-In-First-Out or, in networking terms, the Best Effort algorithm. The first process requesting the CPU is the first process served.

The FCFS algorithm is the simplest of all. However, larger processes may end up hogging the CPU for extended periods of time, resulting in the "Convoy Effect", where many small processes have to wait for a single big process to complete.
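
A tiny simulation makes the Convoy Effect easy to see. The sketch below is my own illustration in Python (the burst lengths are the classic textbook values, not from this post): it serves bursts strictly in arrival order and reports how long each process waited.

def fcfs(bursts):
    # bursts: CPU burst lengths in arrival order.
    # Returns the waiting time of each process.
    waiting, clock = [], 0
    for burst in bursts:
        waiting.append(clock)   # time spent waiting before this process starts
        clock += burst          # the CPU runs the process to completion
    return waiting

print(fcfs([24, 3, 3]))   # [0, 24, 27] -> average wait 17
print(fcfs([3, 3, 24]))   # [0, 3, 6]   -> average wait 3

Same three jobs, but putting the big one first makes the average wait almost six times worse.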

Shortest-Job-First Scheduling

The next scheduling algorithm we're going to talk about is Shortest-Job-First or, in networking terms, the WFQ (Weighted Fair Queuing) algorithm. As its name implies, the CPU first evaluates the jobs, then schedules them based on the length of the next CPU burst.

A more appropriate name for this is actually "Shortest-Next-CPU-Burst-First Scheduling", since processes are evaluated based on the next CPU burst. The system predicts the length of the next CPU burst using the "Exponential Average" formula: predicted(n+1) = alpha * actual(n) + (1 - alpha) * predicted(n), where alpha (between 0 and 1) controls how much weight recent history gets. The Shortest-Job-First Algorithm can be Preemptive or non-Preemptive. The Preemptive version stops the running job whenever a new job arrives with a shorter remaining time, and is called Shortest-Remaining-Time-First Scheduling.
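
Here is a minimal sketch of that prediction in Python (my own illustration; the burst sequence, the initial guess of 10, and alpha = 0.5 are common textbook choices, not from this post):

def predict_next(tau, t, alpha=0.5):
    # tau: previous prediction, t: measured length of the last CPU burst
    return alpha * t + (1 - alpha) * tau

tau = 10.0   # initial guess for the very first burst
for burst in [6, 4, 6, 4, 13, 13, 13]:
    print(f"predicted {tau:5.2f}, actual {burst}")
    tau = predict_next(tau, burst)

The scheduler then simply picks the ready process with the smallest predicted value.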

Priority Scheduling

Priority Scheduling associates each process with a Priority. The highest priority processes are executed first; those with equal Priority are executed on an FCFS basis.

A major problem in Priority Scheduling is Indefinite Blocking, or Starvation. A blocked process is one that is ready to run but unable to, because higher priority processes keep executing. A steady stream of high-priority processes can hog the CPU indefinitely, leaving low-priority processes unable to run until the system is less loaded. The solution to this problem is Aging, which gradually increases the Priority of processes that have been waiting in the queue.
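
A minimal sketch of selection with Aging (my own illustration; the scale where lower numbers mean higher priority, and the aging step of 1, are assumptions):

def pick_next(ready, age_step=1):
    # Age every waiting process so none starves indefinitely.
    for p in ready:
        p["priority"] = max(0, p["priority"] - age_step)
    # Highest priority first; arrival order (FCFS) breaks ties.
    return min(ready, key=lambda p: (p["priority"], p["arrival"]))

ready = [
    {"name": "A", "priority": 3, "arrival": 0},
    {"name": "B", "priority": 1, "arrival": 1},
    {"name": "C", "priority": 1, "arrival": 2},
]
print(pick_next(ready)["name"])   # "B": ties with C, but B arrived first

Each time the scheduler runs, every waiting process creeps up in priority, so even a low-priority process is eventually picked.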

Round-Robin Scheduling

The Round-Robin Scheduling Algorithm is similar to the Time-Division Multiplexing used in telecommunication services. Whereas TDM allocates "Time Slices", Round-Robin Scheduling allocates CPU time in units of a "Time Quantum": a fixed length of time the CPU is allowed to spend on a process before it has to switch.

Round-Robin Scheduling is typically designed for Time-Sharing systems. It is similar to FCFS, but allows Preemption between processes after each Time Quantum. The ready queue becomes a circular queue that the CPU goes round and round until all processes are completed. With a small enough Time Quantum, this approximates Processor Sharing: the illusion that n processes are running concurrently, each at 1/n of the actual processor speed.
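
A minimal Round-Robin sketch (my own illustration; the burst lengths and the quantum of 4 are assumptions): each process runs for at most one Time Quantum per turn, then rejoins the back of the circular queue.

from collections import deque

def round_robin(bursts, quantum):
    # bursts: {name: remaining CPU time}. Prints the execution order.
    ready = deque(bursts.items())
    clock = 0
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)
        clock += ran
        print(f"t={clock:3}: ran {name} for {ran}")
        if remaining > ran:
            ready.append((name, remaining - ran))  # back of the queue

round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)

Notice how P2 and P3 finish early instead of waiting behind P1, unlike in FCFS.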

Multilevel Queue Scheduling

Finally, we have Multilevel Queue Scheduling. If you are familiar with OSI Layer 2 and 3 queues, you could also call this CBWFQ (Class-Based Weighted-Fair-Queuing). In a Multilevel Queue Scheduling Algorithm, the ready queue is split into multiple queues, and each process is permanently assigned to one queue. A different scheduling algorithm can be employed for each queue. The queues themselves can be served either on a Highest-Priority-Queue-First basis or on a Round-Robin basis where higher priority queues get more Time Quantum than lower priority queues. A common textbook example of the Highest-Priority-Queue-First method uses five queues.

In that example, the queues are, from highest to lowest priority: System Processes, Interactive Processes, Interactive Editing Processes, Batch Processes and Student Processes. Real systems aren't necessarily classified this way; broadly, the queues separate Foreground (Interactive) processes from Background (Batch) processes, with Foreground processes getting the most CPU cycles. In the Highest-Priority-Queue-First method, the lower queues will never run as long as the upper queues have waiting processes, which may lead to Starvation. The other approach is to use Time-Slicing among the queues, as mentioned before. A sketch of Highest-Priority-Queue-First selection follows.
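
This sketch is my own illustration (the queue names follow the example above; the data structures are assumptions):

QUEUES = ["system", "interactive", "interactive editing", "batch", "student"]

def pick_from_queues(ready):
    # ready: {queue_name: list of waiting processes, in FCFS order}
    for q in QUEUES:                 # scan from highest to lowest priority
        if ready[q]:
            return ready[q].pop(0)   # lower queues starve while upper ones are busy
    return None                      # nothing to run: the CPU idles

ready = {q: [] for q in QUEUES}
ready["batch"] = ["compile job"]
ready["interactive"] = ["shell"]
print(pick_from_queues(ready))   # "shell" runs before "compile job"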

Another type of queue, the Multilevel Feedback-Queue, is similar to the Multilevel Queue but allows processes to move between queues according to the characteristics of their CPU bursts. Processes with long CPU bursts are moved to the lower priority queues while processes with short bursts are moved up. This eventually floats Interactive processes to the higher priority queues, as in the sketch below.
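
A minimal sketch of that movement (my own illustration; the three-level layout and the promotion/demotion rule are assumptions): a process that exhausts its quantum is demoted, while one that yields for I/O before the quantum ends is promoted.

def requeue(process, used_full_quantum, lowest_level=2):
    if used_full_quantum:
        process["level"] = min(lowest_level, process["level"] + 1)  # CPU-bound: demote
    else:
        process["level"] = max(0, process["level"] - 1)             # I/O-bound: promote
    return process

p = {"name": "converter", "level": 0}
requeue(p, used_full_quantum=True)
print(p["level"])   # 1: long CPU bursts push it toward the lower queues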

Multiple-Processor Scheduling

Background

Multiple-processor systems allow Load Sharing of tasks between processors. However, with added benefits comes added complexity. Our syllabus focuses on Multiple-Processor Scheduling across Homogeneous processors. Homogeneity allows engineers to assume that the processors can all do the same type of tasks, which helps reduce complexity especially when it comes to Load Balancing.

There are two main approaches to Multiprocessor Scheduling. The first is Asymmetric Multiprocessing (AMP). In Asymmetric Multiprocessing, a single processor acts as the Master Server while the others run only User Code. The Master Server handles all scheduling decisions, I/O processing and other system activities. Since only the Master Server accesses the system data structures, the mechanism for maintaining data sharing integrity is simplified.

The other method is Symmetric Multiprocessing (SMP). Each processor acts as its own Master Server, and the processors can either share a common ready queue or each maintain their own. Each processor then picks processes from the queue for execution. However, this requires a more complex data sharing integrity mechanism, as there is an increased chance of data corruption due to simultaneous access of the data structures. The processors can also be Load Balanced using Push Migration or Pull Migration. In Push Migration, the load on the processors is checked periodically, and if there is an imbalance, processes are pushed from the more utilized processor to the less utilized one. In Pull Migration, an idle processor pulls a task from another busy processor's waiting queue. Push and Pull Migration can be implemented together in the same Operating System. A sketch of Push Migration follows.
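
This sketch is my own illustration (queue-length as the load metric and the imbalance threshold of 2 are assumptions): a balancer periodically moves work from the busiest processor's queue to the least busy one.

def push_migrate(run_queues, imbalance=2):
    # run_queues: {cpu_id: list of runnable tasks}
    busiest = max(run_queues, key=lambda c: len(run_queues[c]))
    idlest = min(run_queues, key=lambda c: len(run_queues[c]))
    while len(run_queues[busiest]) - len(run_queues[idlest]) >= imbalance:
        run_queues[idlest].append(run_queues[busiest].pop())

queues = {0: ["a", "b", "c", "d"], 1: []}
push_migrate(queues)
print(queues)   # {0: ['a', 'b'], 1: ['d', 'c']}: load is now balanced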

Processor Affinity

If a process is migrated to another processor, the cached data on the first processor must be invalidated and the cache on the second processor must be repopulated. As this is inefficient, Symmetric Multiprocessing systems typically try not to migrate processes between processors. Instead, each process can be given a Processor Affinity that keeps it running on a particular processor. If the system will only try to keep a process on the same processor, that is Soft Affinity. If the process must stay on that processor, it is bound by Hard Affinity.
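
On Linux, Hard Affinity can be set from user space. A minimal example using Python's os module (os.sched_setaffinity is a real Linux-only call; pinning to CPU 0 is just an illustration):

import os

print(os.sched_getaffinity(0))   # CPUs the current process may run on
os.sched_setaffinity(0, {0})     # pin the current process to CPU 0 only
print(os.sched_getaffinity(0))   # now {0}: a Hard Affinity binding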

Symmetric Multithreading

Hyperthreading Technology on Intel processors is a form of Symmetric Multithreading. Symmetric Multithreading provides multiple logical processors on the same physical processor. Each logical processor has its own Architecture State, which includes general-purpose and machine-state registers, so the Operating System perceives the system as a Multiprocessor system, and each logical processor handles its own interrupts. However, the logical processors share the same physical resources, such as caches and buses.

Summary

Multiprogrammed Operating Systems use Process Scheduling to ensure maximum CPU usage.

A process toggles between CPU Bursts and I/O Bursts.

In a single processor system, only one process can be in CPU Burst at one time.

A CPU Scheduler retrieves processes from a Ready Queue and a Dispatcher switches the tasks. The time it takes to switch is called the Dispatch Latency.

Cooperative Schedulers wait for a program to make an I/O request or enter a Wait State, while a Preemptive Scheduler can temporarily stop the execution of a program in favor of another.

Scheduling algorithms make use of these criteria: CPU Utilization, Throughput, Turnaround Time, Waiting Time, Response Time.

FCFS Scheduling runs jobs in the order they are received.

SJF Scheduling runs the smallest job first. It can be preemptive, in which case it is called Shortest-Remaining-Time-First Scheduling.

Priority Scheduling runs the highest priority jobs first, then FCFS for those that are tied.

Round-Robin Scheduling quickly switches between all processes, each running for a fixed period known as the Time Quantum.

Multilevel Queue Scheduling allows multiple queues of different priorities, and each queue can run a different Scheduling Algorithm.

There are two ways to describe a process: Foreground (Interactive) or Background (Batch). Processes are permanently assigned to a queue and the queues can be run in a Priority or a Round-Robin fashion.

Multilevel Feedback-Queue Scheduling allows processes to move between queues, where the most CPU-hungry processes sink to lower priority queues.

Asymmetric Multiprocessing has one processor assigned as the Master Server while the others handle User Code.

In Symmetric Multiprocessing, all processors are Master Servers in their own right, and can share a Ready Queue or have one each.

Push Migration shifts jobs to another processor to make the utilization balanced.

Pull Migration pulls jobs from another processor's waiting queue.

Processor Affinity can be set on a process to keep it from migrating.

Soft Affinity is not a guaranteed Affinity, while Hard Affinity ensures that a process stays on a processor.

Symmetric Multithreading allows multiple Logical Processors to run on a single Physical Processor. Each has its own registers and Architecture State and handles its own interrupts, but they share caches and buses.
