CPU Scheduling Algorithms


First In First Out (FIFO)

This is a non-preemptive scheduling algorithm. FIFO assigns priority to processes in the order in which they request the processor: the process that requests the CPU first is allocated the CPU first. When a process arrives, add its PCB to the tail of the ready queue. When the running process terminates, dequeue the process (PCB) at the head of the ready queue and run it.
Consider the example with CPU burst times P1 = 24, P2 = 3, P3 = 3.

Gantt chart for FCFS: P1 from 0 to 24, P2 from 24 to 27, P3 from 27 to 30.

Turnaround time for P1 = 24
Turnaround time for P2 = 24 + 3 = 27
Turnaround time for P3 = 24 + 3 + 3 = 30

Average turnaround time = (24*3 + 3*2 + 3*1) / 3 = 27, since each job's burst time counts once for itself and once for every job that runs after it.

In general, if the burst times in order of service are a, b, ..., the average turnaround time is (n*a + (n-1)*b + ...) / n.

If we want to minimize this, a should be the smallest, followed by b and
so on.
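
A minimal sketch of this calculation, assuming all three processes arrive at time 0 as in the example:

    # FCFS turnaround times, assuming every process arrives at time 0.
    def fcfs_turnaround(bursts):
        finish, turnarounds = 0, []
        for burst in bursts:
            finish += burst              # each job runs to completion, no preemption
            turnarounds.append(finish)
        return turnarounds

    bursts = [24, 3, 3]                  # P1, P2, P3 from the example
    t = fcfs_turnaround(bursts)
    print(t)                             # [24, 27, 30]
    print(sum(t) / len(t))               # 27.0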

Comments: While the FIFO algorithm is easy to implement, it ignores the requested service time and all other criteria that may influence performance with respect to turnaround or waiting time.

Problem: One process can monopolize the CPU

Solution: Limit the amount of time a process can run without a context switch. This time is called a time slice.

Round Robin

Round robin calls for distributing the processing time equitably among all processes requesting the processor: run a process for one time slice, then move it to the back of the queue. Each process gets an equal share of the CPU. Most systems use some variant of this.

Choosing Time Slice

What happens if the time slice isn't chosen carefully?

  • For example, consider two processes, one doing 1 ms of computation followed by 10 ms of I/O, the other doing all computation. Suppose we use a 20 ms time slice and round-robin scheduling: the I/O-bound process runs at 11/21 of its standalone speed, and the I/O device is utilized only 10/21 of the time (worked out in the sketch after this list).

  • Suppose we use a 1 ms time slice: then the compute-bound process gets interrupted 9 times unnecessarily before the I/O-bound process is runnable.
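
A back-of-the-envelope check of the 20 ms case (an idealized steady-state calculation under the assumptions above, not a simulator):

    # The CPU-bound process always uses its full 20 ms slice; the I/O-bound
    # process then runs for 1 ms and starts 10 ms of I/O, which overlaps the
    # other process's next slice. One cycle is therefore 20 + 1 = 21 ms.
    quantum, cpu_burst, io_time = 20, 1, 10

    cycle = quantum + cpu_burst                          # 21 ms per I/O iteration
    io_process_speed = (cpu_burst + io_time) / cycle     # 11/21 of standalone speed
    device_utilization = io_time / cycle                 # device busy 10/21 of the time

    print(io_process_speed, device_utilization)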

Problem: Round robin assumes that all processes are equally important; each receives an equal portion of the CPU. This sometimes produces bad results. Consider three processes that start at the same time and each requires three time slices to finish. Using FIFO, how long does it take the average job to complete (what is the average response time)? How about using round robin?

* Using FIFO: process A finishes after 3 slices, B after 6, and C after 9. The average is (3+6+9)/3 = 6 slices.

* Using round robin: process A finishes after 7 slices, B after 8, and C after 9, so the average is (7+8+9)/3 = 8 slices.

Round robin is fair, but uniformly inefficient.
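
A minimal round-robin sketch (quantum of one time slice, then back to the tail of the queue); the three 3-slice processes A, B, and C are the ones from the example above:

    from collections import deque

    def round_robin(slices_needed):
        """Return completion time (in slices) per process, quantum = 1 slice."""
        queue = deque(slices_needed.items())
        finished, clock = {}, 0
        while queue:
            name, remaining = queue.popleft()
            clock += 1                                  # run one time slice
            if remaining - 1 == 0:
                finished[name] = clock
            else:
                queue.append((name, remaining - 1))     # back of the queue
        return finished

    print(round_robin({"A": 3, "B": 3, "C": 3}))        # {'A': 7, 'B': 8, 'C': 9}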

Solution: Introduce priority based scheduling.

Priority Based Scheduling

Run the highest-priority processes first, and use round robin among processes of equal priority. Re-insert a process in the run queue behind all processes of greater or equal priority.

  • Allows CPU to be given preferentially to important processes.
  • Scheduler adjusts dispatcher priorities to achieve the desired overall priorities for the processes, e.g. one process gets 90% of the CPU.

Comments: In priority scheduling, processes are allocated to the CPU on the basis of an externally assigned priority. The key to the performance of priority scheduling is in choosing priorities for the processes.
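
One way to sketch this, assuming larger numbers mean higher priority (the class name and structure are illustrative, not any particular system's implementation):

    from collections import deque

    class PriorityScheduler:
        """Highest priority runs first; round robin within a priority level."""
        def __init__(self):
            self.queues = {}                       # priority -> deque of PCBs

        def enqueue(self, pcb, priority):
            # Appending to the tail of its own level places the process behind
            # everything of greater or equal priority, as described above.
            self.queues.setdefault(priority, deque()).append(pcb)

        def pick_next(self):
            if not self.queues:
                return None
            top = max(self.queues)                 # highest non-empty priority level
            pcb = self.queues[top].popleft()
            if not self.queues[top]:
                del self.queues[top]
            return top, pcb

    s = PriorityScheduler()
    s.enqueue("editor", priority=5)
    s.enqueue("backup", priority=1)
    s.enqueue("shell",  priority=5)
    print(s.pick_next())                           # (5, 'editor'); 'shell' runs next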

Problem: Priority scheduling may cause low-priority processes to starve

Solution (aging): This starvation can be compensated for if the priorities are internally computed. Suppose one parameter in the priority assignment function is the amount of time the process has been waiting. The longer a process waits, the higher its priority becomes. This strategy tends to eliminate the starvation problem.
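
A minimal sketch of such an internally computed priority; the aging weight is an arbitrary illustrative constant:

    AGING_WEIGHT = 0.1   # illustrative: how quickly waiting raises priority

    def effective_priority(base_priority, waiting_time):
        """The longer a process has waited, the higher its effective priority."""
        return base_priority + AGING_WEIGHT * waiting_time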

Shortest Job First

Maintain the ready queue in order of increasing job lengths. When a job comes in, insert it in the ready queue based on its length. When the current process is done, pick the one at the head of the queue and run it.

This is provably optimal in terms of average turnaround/response time.
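
A minimal sketch of the ready queue as a min-heap keyed on (estimated) job length, using the burst times from the earlier example:

    import heapq

    ready = []                                    # min-heap of (length, name)

    def job_arrives(name, length):
        heapq.heappush(ready, (length, name))     # insert by job length

    def pick_next():
        """Called when the current process finishes: shortest job first."""
        return heapq.heappop(ready)[1] if ready else None

    for name, length in [("P1", 24), ("P2", 3), ("P3", 3)]:
        job_arrives(name, length)
    print(pick_next(), pick_next(), pick_next())  # P2 P3 P1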

But, how do we find the length of a job?

Make an estimate based on the past behavior.

Say the estimated time (burst) for a process is E0, and suppose the actual time is measured to be T0.

Update the estimate by taking a weighted sum of these two,
i.e. E1 = aT0 + (1-a)E0

In general, E(n+1) = aTn + (1-a)En (exponential average)

If a = 0, recent history gets no weight;
if a = 1, past history gets no weight.

Typically a = 1/2.

E(n+1) = aTn + (1-a)aT(n-1) + ... + (1-a)^j aT(n-j) + ... + (1-a)^(n+1) E0

Older information carries less weight.
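
A minimal sketch of this estimator; the initial guess and the measured bursts are made-up illustrative values:

    def update_estimate(prev_estimate, measured_burst, a=0.5):
        """Exponential average: E(n+1) = a*Tn + (1-a)*En."""
        return a * measured_burst + (1 - a) * prev_estimate

    estimate = 10.0                       # E0: initial guess for the burst
    for burst in [6, 4, 6, 4]:            # measured CPU bursts T0, T1, ...
        estimate = update_estimate(estimate, burst)
    print(estimate)                       # 5.0 -- older bursts carry less weight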

Comments: SJF is proven optimal only when all jobs are available simultaneously.

Problem: SJF minimizes the average wait time because it services small processes before it services large ones. While it minimizes average wait time, it may penalize processes with high service time requests. If the ready list is saturated, then processes with large service times tend to be left in the ready list while small processes receive service. In the extreme case, where the system has little idle time, processes with large service times will never be served. This total starvation of large processes may be a serious liability of the algorithm.

Solution: Multi-Level Feedback Queues

Multi-Level Feedback Queue

Several queues arranged in some priority order.

Each queue could have a different scheduling discipline/ time quantum.

Generally, higher-priority queues get smaller time quanta.

Defined by:

  • # of queues
  • scheduling algo for each queue
  • when to upgrade a priority
  • when to demote

Attacks both efficiency and response time problems.

  • Give a newly runnable process a high priority and a very short time slice. If the process uses up the time slice without blocking, decrease its priority by 1 and double its next time slice (see the sketch after this list).
  • Often implemented by having a separate queue for each priority.
  • How are priorities raised? By 1 if it doesn't use its time slice? What happens to a process that does a lot of computation when it starts, then waits for user input? Need to boost its priority a lot, quickly.
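
A hedged sketch of the demotion rule described above; the number of levels, the base quantum, and the simple "raise by 1" boost are illustrative assumptions:

    from collections import deque

    NUM_LEVELS = 3
    BASE_QUANTUM_MS = 10                      # illustrative quantum for the top level

    # One queue per priority level; level 0 is the highest priority.
    queues = [deque() for _ in range(NUM_LEVELS)]

    def admit(pcb):
        """A newly runnable process starts at high priority with a short slice."""
        queues[0].append((pcb, BASE_QUANTUM_MS))

    def on_quantum_expired(pcb, level, quantum):
        """Used the whole slice without blocking: demote and double the slice."""
        queues[min(level + 1, NUM_LEVELS - 1)].append((pcb, quantum * 2))

    def on_blocked(pcb, level, quantum):
        """Blocked before the slice ran out: raise priority by 1 (one simple
        policy; the notes ask whether a bigger, quicker boost is needed)."""
        queues[max(level - 1, 0)].append((pcb, BASE_QUANTUM_MS))

    def pick_next():
        """Dispatch from the highest-priority non-empty queue."""
        for level, q in enumerate(queues):
            if q:
                pcb, quantum = q.popleft()
                return pcb, level, quantum
        return None

    admit("P1")
    print(pick_next())                        # ('P1', 0, 10)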
