Graph Explanation

|


graph 1
  • Graph 1 shows no deadlock because the graph contains no cycle.
  • R1 has one instance, R2 has two instances, R3 has one instance, and R4 has four instances.
  • P1 is requesting an instance in R1.
  • P2 is holding an instance from R1.
  • P3 is holding an instance from R3.

graph 2

  • Graph 2 shows a deadlock because there is a cycle and P3 is requesting an instance.
  • P1 is requesting an instance in R1.
  • P2 is holding an instance from R1.
  • P3 is holding an instance from R3.
  • P3 is requesting an instance in R2.

graph 3

  • Graph 3 shows that there is no deadlock even though it has a cycle.
  • P1 is requesting an instance in R1.
  • P2 is holding an instance from R1.
  • P3 is holding an instance from R1.
  • P3 is requesting an instance in R2.
  • P4 is holding an instance from R2.
  • P1 is holding an instance in R2.


graph 4

  • Graph 4 shows that:
  • P1 is holding an instance from R1.
  • P2 is requesting an instance in R1.
  • P1 and P2 may each access or request an instance in R2.

graph 5

  • Graph 5 shows that:
  • P1 is holding an instance from R1.
  • P2 is requesting an instance in R1.
  • P1 may access or request an instance in R2.
  • P2 is holding an instance from R2.

Resource Allocation Graph

|

-A set of vertices V and a set of edges E.
  • V is partitioned into two types:
  1. P = {P1, P2,...Pn}, the set consisting of all the processes in the system.
  2. R = {R1, R2,...Rm}, the set consisting of all the resource types in the system.
  • request edge - directed edge Pi -> Rj
  • assignment edge - directed edge Rj -> Pi

Q: How would you know if there's a deadlock in the resource allocation graph?

A: Look for a cycle. If the graph contains no cycle, there is no deadlock. If the graph contains a cycle and there is only one instance per resource type, there is a deadlock. If the graph contains a cycle but there are several instances per resource type, a deadlock is possible but not certain.
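The cycle test described above can be sketched as a depth-first search over the graph's directed edges. This is a minimal illustration, not a production detector; the process/resource names and edges are hypothetical, loosely modeled on graph 2.

```python
from collections import defaultdict

def has_cycle(edges):
    """Detect a cycle in a directed graph given as (src, dst) pairs.

    In a resource-allocation graph the nodes are processes and resources:
    a request edge points P -> R, an assignment edge points R -> P.
    """
    graph = defaultdict(list)
    nodes = set()
    for src, dst in edges:
        graph[src].append(dst)
        nodes.update((src, dst))

    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {n: WHITE for n in nodes}

    def dfs(n):
        color[n] = GRAY
        for m in graph[n]:
            if color[m] == GRAY:           # back edge => cycle
                return True
            if color[m] == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(dfs(n) for n in nodes if color[n] == WHITE)

# Hypothetical single-instance graph with a cycle (deadlock):
# P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
deadlocked = has_cycle([("P1", "R1"), ("R1", "P2"), ("P2", "R3"),
                        ("R3", "P3"), ("P3", "R2"), ("R2", "P1")])
```

With one instance per resource type, `has_cycle` returning True means deadlock; with multiple instances it only means deadlock is possible.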

Deadlock Characterization

|

Deadlock can arise if four conditions hold simultaneously.

  • Mutual exclusion: only one process at a time can use a (non-sharable) resource.
  • Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
  • No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
  • Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Methods for Handling Deadlocks

|

  • Deadlock Prevention.
    =>Disallow one of the four necessary conditions for deadlock.
  • Deadlock Avoidance.
    =>Do not grant a resource request if this allocation has the potential to lead to a deadlock.
  • Deadlock Detection.
    =>Always grant resource request when possible. Periodically check for deadlocks. If a deadlock exists, recover from it.
  • Ignore the problem...
    =>Makes sense if the likelihood is very low.

Deadlock Recovery

|

  • Process Termination
    =>Abort all deadlocked processes:
    1.)Fast
    2.)A lot of process work is lost.
    =>Abort one deadlocked process at a time and check for deadlocks again:
    1.)More work to resolve a deadlock.
    2.)Better in terms of process work.
    3.)What is a good order to abort processes?
  • Resource Preemption
    =>What is a good way to select a victim?
    =>How can we roll back and then recover from preemption?
    =>How can we protect against starvation?

What can the system do if it discovers a deadlock?

Deadlock Prevention

|

Eliminate one of the four conditions:

  • Mutual Exclusion:
    =>Well, if we need it then we need it.
  • Hold and Wait:
    =>Require a process to request and be allocated all its resources before it begins execution, or allow process to request resources only when the process has none.
    =>May lead to low resource utilization.
    =>Starvation is a problem - a process may be held for a long time waiting for all its required resources.
    =>If it needs additional resources, it releases all of the currently held resources and then requests all of those it needs (the application needs to be aware of all of the required resources).

Deadlock Detection

|

  • The system may enter a deadlock state.
  • The system needs:
    1.)An algorithm that periodically determines whether a deadlock has occurred in the system.
    2.)A procedure to recover from a deadlock.
  • Two algorithms:
    1.)One instance per resource type.

    2.)Multiple instances per resource type.
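The second case above (multiple instances per resource type) is usually handled with a matrix-based detection algorithm: repeatedly find a process whose outstanding requests can be satisfied from the currently free units, assume it finishes and returns its allocation, and repeat; any process that can never be satisfied is deadlocked. A sketch under that scheme, with made-up matrices for illustration:

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection with multiple instances per resource type.

    available:  free units of each resource type
    allocation: allocation[i][j] = units of Rj currently held by Pi
    request:    request[i][j]    = units of Rj that Pi is waiting for
    Returns the indices of deadlocked processes (empty list if none).
    """
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]

    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Pretend Pi runs to completion and releases its resources.
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True
    return [i for i in range(n) if not finish[i]]

# Hypothetical 5-process, 3-resource example:
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
req   = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
no_deadlock = detect_deadlock([0, 0, 0], alloc, req)   # []
```

If P2's request changes to [0, 0, 1], no sequence can finish P1–P4 and the function reports them all as deadlocked.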

Real-Time Scheduling

|

  • Real-time computing is an important emerging discipline in CS and CE.
  • Control of lab experiments, robotics, process control, telecommunication etc.
  • It is a type of computing where correctness of the computation depends not only on the logical results but also on the time at which the results are produced.
  • Hard real-time systems: must meet deadlines. Ex: a space shuttle rendezvous with a space station.
  • Soft real-time system: Deadlines are there but not mandatory. Results are discarded if the deadline is not met.
  • Hard real-time systems – required to complete a critical task within a guaranteed amount of time
  • Soft real-time computing – requires that critical processes receive priority over less fortunate ones

Multiprocessor Scheduling

|

In computer science, multiprocessor scheduling is an NP-complete optimization problem. The problem statement is: "Given a set J of jobs where job ji has length li and a number of processors m, what is the minimum possible time required to schedule all jobs in J on m processors such that none overlap?" The applications of this problem are numerous, but are, as suggested by the name of the problem, most strongly associated with the scheduling of computational tasks in a multiprocessor environment.
  • CPU scheduling more complex when multiple CPUs are available
  • Homogeneous processors within a multiprocessor
  • Load sharing
Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing
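Although the exact problem above is NP-complete, a simple greedy heuristic (list scheduling: give each job to the least-loaded processor) is a classic approximation for it. This is a sketch of that heuristic, not an exact solver; the function name and job lengths are made up for illustration.

```python
import heapq

def greedy_makespan(lengths, m):
    """Greedy list scheduling: assign each job to the currently
    least-loaded of m processors, and return the resulting makespan
    (the finish time of the busiest processor).  This is only an
    approximation of the NP-complete optimum."""
    loads = [0.0] * m              # current total load per processor
    heapq.heapify(loads)
    for l in lengths:
        least = heapq.heappop(loads)   # least-loaded processor
        heapq.heappush(loads, least + l)
    return max(loads)

# Four jobs on two processors: {2, 3} and {2, 2} -> makespan 5
span = greedy_makespan([2, 2, 2, 3], 2)
```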

Thread Scheduling

|

A thread of a parent process forks a child process. The child process inherits the scheduling policy and priority of the parent, and it is these inherited values that determine how the child thread is scheduled.
The following figure illustrates the flow of creation.

  • Each thread in a process is independently scheduled.
  • Each thread contains its own scheduling policy and priority
  • Thread scheduling policies and priorities may be assigned before a thread is created (in the threads attributes object) or set dynamically while a thread is running.
  • Each thread may be bound directly to a CPU.
  • Each thread may be suspended (and later resumed) by any thread within the process

The following scheduling attributes may be set in the threads attribute object. The newly created thread will contain these scheduling attributes:

contentionscope
PTHREAD_SCOPE_SYSTEM specifies a bound (1 x 1, kernel-space) thread. When a bound thread is created, both a user thread and a kernel-scheduled entity are created.

PTHREAD_SCOPE_PROCESS will specify an unbound (M x N, combination user- and kernel-space) thread. (Note, HP-UX release 10.30 does not support unbound threads.)

inheritsched
PTHREAD_INHERIT_SCHED specifies that the created thread will inherit its scheduling values from the creating thread, instead of from the threads attribute object.

PTHREAD_EXPLICIT_SCHED specifies that the created thread will get its scheduling values from the threads attribute object.

schedpolicy
The scheduling policy of the newly created thread

schedparam

The scheduling parameter (priority) of the newly created thread.

TimeLine
A process and its threads change with the passage of time. A thread's priority is adjusted at four key times, as shown in the next figure and described in the table that follows.

CPU Scheduling Algorithms

|

First In First Out (FIFO)

This is a non-preemptive scheduling algorithm. The FIFO strategy assigns priority to processes in the order in which they request the processor: the process that requests the CPU first is allocated the CPU first. When a process comes in, add its PCB to the tail of the ready queue. When the running process terminates, dequeue the process (PCB) at the head of the ready queue and run it.
Consider the example with P1=24, P2=3, P3=3

Gantt chart for FCFS: 0 - 24 P1, 24 - 27 P2, 27 - 30 P3

Turnaround time for P1 = 24
Turnaround time for P2 = 24 + 3 = 27
Turnaround time for P3 = 24 + 3 + 3 = 30

Average turnaround time = (24*3 + 3*2 + 3*1) / 3 = 27 (the first burst counts toward every job's turnaround time, the second toward all but the first, and so on).

In general we have (n*a + (n-1)*b + ....) / n

If we want to minimize this, a should be the smallest, followed by b and
so on.

Comments: While the FIFO algorithm is easy to implement, it ignores the service time request and all other criteria that may influence the performance with respect to turnaround or waiting time.
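The FCFS arithmetic above can be checked with a few lines of code. A minimal sketch, assuming all jobs arrive at time 0 (the function name is mine, not from the notes):

```python
def fcfs_turnaround(bursts):
    """Average turnaround time under FCFS when all jobs arrive at t=0.

    Each job's turnaround time is simply its finish time, since the
    jobs run back-to-back in arrival order.
    """
    finish_times, t = [], 0
    for burst in bursts:
        t += burst              # job runs to completion, no preemption
        finish_times.append(t)
    return sum(finish_times) / len(finish_times)

# The example from above: P1=24, P2=3, P3=3 gives an average of 27.
# Running the shortest jobs first instead gives (3 + 6 + 30) / 3 = 13,
# illustrating why the smallest burst should go first.
```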

Problem: One Process can monopolize CPU

Solution: Limit the amount of time a process can run without a context switch. This time is called a time slice.

Round Robin

Round Robin calls for the distribution of the processing time equitably among all processes requesting the processor. Run a process for one time slice, then move it to the back of the queue. Each process gets an equal share of the CPU. Most systems use some variant of this.

Choosing Time Slice

What happens if the time slice isn't chosen carefully?

  • For example, consider two processes, one doing 1 ms computation followed by 10 ms I/O, the other doing all computation. Suppose we use 20 ms time slice and round-robin scheduling: I/O process runs at 11/21 speed, I/O devices are only utilized 10/21 of time.

  • Suppose we use 1 ms time slice: then compute-bound process gets interrupted 9 times unnecessarily before I/O-bound process is runnable

Problem: Round robin assumes that all processes are equally important; each receives an equal portion of the CPU. This sometimes produces bad results. Consider three processes that start at the same time and each requires three time slices to finish. Using FIFO how long does it take the average job to complete (what is the average response time)? How about using round robin?

* Using FIFO: process A finishes after 3 slices, B after 6, and C after 9. The average is (3+6+9)/3 = 6 slices.

* Using round robin: process A finishes after 7 slices, B after 8, and C after 9, so the average is (7+8+9)/3 = 8 slices.
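The round-robin completion times above can be reproduced by simulating the ready queue. A small sketch, assuming all jobs start at time 0 and a one-slice quantum (names are mine):

```python
from collections import deque

def rr_completion(bursts, quantum=1):
    """Completion times (in slices) of jobs that all start at t=0,
    scheduled round-robin with the given time quantum."""
    queue = deque(range(len(bursts)))   # ready queue of job indices
    remaining = list(bursts)
    finish = [0] * len(bursts)
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = t               # job done
        else:
            queue.append(i)             # back of the queue
    return finish

# Three jobs of three slices each: completion at slices 7, 8, and 9,
# matching the figures quoted above.
times = rr_completion([3, 3, 3])
```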

Round Robin is fair, but uniformly inefficient.

Solution: Introduce priority based scheduling.

Priority Based Scheduling

Run highest-priority processes first, use round-robin among processes of equal priority. Re-insert process in run queue behind all processes of greater or equal priority.

  • Allows CPU to be given preferentially to important processes.
  • Scheduler adjusts dispatcher priorities to achieve the desired overall priorities for the processes, e.g. one process gets 90% of the CPU.

Comments: In priority scheduling, processes are allocated to the CPU on the basis of an externally assigned priority. The key to the performance of priority scheduling is in choosing priorities for the processes.

Problem: Priority scheduling may cause low-priority processes to starve

Solution: (AGING) This starvation can be compensated for if the priorities are internally computed. Suppose one parameter in the priority assignment function is the amount of time the process has been waiting. The longer a process waits, the higher its priority becomes. This strategy tends to eliminate the starvation problem.
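The aging idea can be sketched as an effective-priority computation: the scheduler picks the ready process whose base priority plus accumulated waiting credit is highest. Everything here (the function, the tuple layout, the aging rate) is a hypothetical illustration of the scheme, not a real scheduler API.

```python
def pick_next(ready, now, aging_rate=1):
    """Pick the ready process with the highest effective priority.

    Effective priority = base priority + aging_rate * time waited,
    so a low-priority process that has waited long enough eventually
    outranks a freshly arrived high-priority one (no starvation).

    ready: list of (name, base_priority, time_became_ready) tuples.
    """
    return max(ready, key=lambda p: p[1] + aging_rate * (now - p[2]))

# A low-priority process that has waited since t=0 beats a
# high-priority process that arrived at t=95, when asked at t=100:
ready = [("low", 1, 0), ("high", 10, 95)]
winner = pick_next(ready, now=100)
```

With comparable waiting times the base priorities dominate again, which is exactly the intended behavior of aging.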

Shortest Job First

Maintain the Ready queue in order of increasing job lengths. When a job comes in, insert it in the ready queue based on its length. When current process is done, pick the one at the head of the queue and run it.

This is provably optimal in terms of average turnaround/response time.

But, how do we find the length of a job?

Make an estimate based on the past behavior.

Say the estimated time (burst) for a process is E0, and suppose the actual time is measured to be T0.

Update the estimate by taking a weighted sum of these two,
i.e. E1 = aT0 + (1-a)E0

in general, E(n+1) = aTn + (1-a)En (Exponential average)

if a=0, the recent measurement carries no weight
if a=1, past history carries no weight

typically a=1/2.

E(n+1) = aTn + (1-a)aT(n-1) + ... + (1-a)^j aT(n-j) + ... + (1-a)^(n+1) E0

Older information carries less weight.
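The exponential average above is a one-line update per measured burst. A minimal sketch (function name and sample bursts are mine):

```python
def exp_average(bursts, e0=10.0, a=0.5):
    """Exponentially averaged burst estimates: E(n+1) = a*Tn + (1-a)*En.

    Returns the sequence of estimates, starting from the initial
    guess e0 and folding in each measured burst Tn in turn.
    """
    e = e0
    estimates = [e]
    for t in bursts:
        e = a * t + (1 - a) * e     # weighted sum of measurement and history
        estimates.append(e)
    return estimates

# With e0=10 and a=1/2, measured bursts 6 then 4 give estimates
# 10.0 -> 8.0 -> 6.0.
history = exp_average([6, 4], e0=10.0, a=0.5)
```

Note how a=0 would freeze the estimate at e0 forever, while a=1 would just echo the last measured burst.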

Comments: SJF is proven optimal only when all jobs are available simultaneously.

Problem: SJF minimizes the average wait time because it services small processes before it services large ones. While it minimizes average wait time, it may penalize processes with high service-time requests. If the ready list is saturated, then processes with large service times tend to be left in the ready list while small processes receive service. In the extreme case, where the system has little idle time, processes with large service times will never be served. This total starvation of large processes may be a serious liability of this algorithm.

Solution: Multi-Level Feedback Queues

Multi-Level Feedback Queue

Several queues arranged in some priority order.

Each queue could have a different scheduling discipline/ time quantum.

Generally, higher-priority queues get lower (shorter) time quanta.

Defined by:

  • # of queues
  • scheduling algo for each queue
  • when to upgrade a priority
  • when to demote
Attacks both efficiency and response time problems.
  • Give newly runnable process a high priority and a very short time slice. If process uses up the time slice without blocking then decrease priority by 1 and double its next time slice.
  • Often implemented by having a separate queue for each priority.
  • How are priorities raised? By 1 if it doesn't use time slice? What happens to a process that does a lot of computation when it starts, then waits for user input? Need to boost priority a lot, quickly.
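The demote-on-full-slice and boost-on-I/O rules above can be sketched as a toy queue structure. This is an illustrative skeleton only; the class name, method names, and the three-level/doubling-quantum setup are assumptions drawn from the bullets, not a real scheduler.

```python
class MLFQ:
    """Toy multi-level feedback queue.

    New processes enter the top (highest-priority) queue with the
    shortest quantum; using a full quantum demotes a process one
    level, where the quantum is doubled; a process that blocks for
    I/O re-enters the top queue (priority boost).
    """

    def __init__(self, levels=3, base_quantum=1):
        self.queues = [[] for _ in range(levels)]
        self.quantum = [base_quantum * (2 ** i) for i in range(levels)]

    def admit(self, proc):
        self.queues[0].append(proc)          # new process: top priority

    def next(self):
        """Return (level, process) for the next process to run."""
        for level, q in enumerate(self.queues):
            if q:
                return level, q.pop(0)
        return None                           # nothing runnable

    def used_full_slice(self, level, proc):
        """Process burned its whole quantum: demote one level."""
        new_level = min(level + 1, len(self.queues) - 1)
        self.queues[new_level].append(proc)

    def blocked_then_ready(self, proc):
        """Process did I/O: boost it back to the top queue."""
        self.queues[0].append(proc)
```

The `blocked_then_ready` boost is one (crude) answer to the question above about the process that computes heavily at first and then waits for user input.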