Thursday, August 27, 2009

Resource-Allocation Graph

A set of vertices V and a set of edges E.

  • V is partitioned into two types:
    => P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.

    => R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
  • request edge – directed edge Pi → Rj
  • assignment edge – directed edge Rj → Pi


Resource-Allocation Graph Algorithm

  • Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line.
  • Claim edge converts to request edge when a process requests a resource.
  • When a resource is released by a process, assignment edge reconverts to a claim edge.
  • Resources must be claimed a priori in the system.

Resource-Allocation Graph and Wait-for Graph

(Figure: a resource-allocation graph and its corresponding wait-for graph.)

How do you know if there is a deadlock based on a resource-allocation graph?

  • In a multiprogramming system, processes request resources. If a resource is already held by another process, the requesting process enters a waiting state. If the processes holding those resources are themselves waiting for resources held by other waiting processes, we have deadlock.



Example of a Resource Allocation Graph


Resource-allocation graphs are drawn in order to see the allocation of resources to processes easily. In these graphs, processes are represented by circles and resources are represented by boxes. Resource boxes carry some number of dots showing the available number of that resource, that is, the number of instances.

  • If the resource-allocation graph contains no cycles, then there is no deadlock in the system at that instant.
  • If the resource-allocation graph contains a cycle, then a deadlock may exist.
  • If there is a cycle, and the cycle involves only resources which have a single instance, then a deadlock has occurred (see the sketch below).
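
The single-instance case can be checked mechanically by searching the graph for a cycle with a depth-first search. Below is a minimal Java sketch, not tied to any particular textbook implementation; the vertex labels and the sample graph are made up for illustration.

import java.util.*;

// Minimal sketch of cycle detection in a resource-allocation graph.
// Vertices are plain strings ("P1", "R1", ...); the edge map covers both
// request edges (P -> R) and assignment edges (R -> P).
public class RagCycle {
    static boolean hasCycle(Map<String, List<String>> g) {
        Set<String> done = new HashSet<>();    // fully explored vertices
        Set<String> onPath = new HashSet<>();  // vertices on the current DFS path
        for (String v : g.keySet())
            if (dfs(v, g, done, onPath)) return true;
        return false;
    }

    static boolean dfs(String v, Map<String, List<String>> g,
                       Set<String> done, Set<String> onPath) {
        if (onPath.contains(v)) return true;   // back edge: cycle found
        if (!done.add(v)) return false;        // already fully explored
        onPath.add(v);
        for (String w : g.getOrDefault(v, List.of()))
            if (dfs(w, g, done, onPath)) return true;
        onPath.remove(v);
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = new HashMap<>();
        g.put("P1", List.of("R1"));  // request edge
        g.put("R1", List.of("P2"));  // assignment edge
        g.put("P2", List.of("R2"));
        g.put("R2", List.of("P1"));  // closes the cycle
        System.out.println("cycle? " + hasCycle(g));  // prints: cycle? true
    }
}

With single-instance resources a reported cycle is a deadlock; with multiple instances it only flags a possible deadlock, as the bullets above note.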






Resource Allocation Graph With A Deadlock


*In addition to the request and assignment edges, a claim edge is also introduced.
*Claim edge Pi → Rj indicates that process Pi may request resource Rj in the future; represented by a dashed line.
*Claim edge converts to a request edge when the process actually requests the resource.
*When a resource is released by a process, the assignment edge reconverts to a claim edge.
*Resources must be claimed a priori in the system. That is, before a process starts executing, all of its claim edges must already appear in the resource-allocation graph.
*Suppose that process Pi requests resource Rj. The request can be granted only if converting the request edge Pi → Rj to an assignment edge does not result in a cycle in the resource-allocation graph. That is, a cycle-detection algorithm is used. If a cycle exists, process Pi will have to wait (a sketch of this check follows).
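
A sketch of the grant test just described: tentatively convert the request edge into an assignment edge, run cycle detection over the graph (claim edges included), and roll the change back if a cycle appears. All names here (safeToGrant, the string vertex labels) are illustrative, not from any library.

import java.util.*;

// Sketch: grant a request only if turning the request edge Pi -> Rj
// into an assignment edge Rj -> Pi leaves the graph acyclic.
// Assumes g.get(p) currently contains the request edge to r.
public class AvoidanceCheck {
    static boolean safeToGrant(Map<String, Set<String>> g, String p, String r) {
        g.get(p).remove(r);                                 // drop request edge Pi -> Rj
        g.computeIfAbsent(r, k -> new HashSet<>()).add(p);  // tentative assignment Rj -> Pi
        boolean cycle = hasCycle(g);
        g.get(r).remove(p);                                 // roll back the tentative edge
        g.get(p).add(r);
        return !cycle;
    }

    static boolean hasCycle(Map<String, Set<String>> g) {
        Set<String> done = new HashSet<>(), path = new HashSet<>();
        for (String v : new ArrayList<>(g.keySet()))
            if (dfs(v, g, done, path)) return true;
        return false;
    }

    static boolean dfs(String v, Map<String, Set<String>> g,
                       Set<String> done, Set<String> path) {
        if (path.contains(v)) return true;  // back edge: cycle
        if (!done.add(v)) return false;
        path.add(v);
        for (String w : g.getOrDefault(v, Set.of()))
            if (dfs(w, g, done, path)) return true;
        path.remove(v);
        return false;
    }
}

A per-request call would look like safeToGrant(graph, "P2", "R2"): if it returns false, P2 must wait even though R2 is free.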

Resource Allocation Graph With A Cycle But No Deadlock

*2 resources: R1, R2
*4 processes: P1, P2, P3, P4
*P1 and P3 are requesting an instance of R1 and R2, respectively.
*P2 and P3 hold an instance of R1.
*P1 and P4 hold an instance of R2.
*There is no deadlock, because P2 and P4 have no outstanding requests; when either releases its resource, the cycle is broken.













Resource-Allocation Graph For Deadlock Avoidance


The Resource Allocation Graph (RAG) is composed of 2 processes and 2 resources.
· P1 holds an instance of R1
· P2 requests an instance of R1
· P1 and P2 may request an instance of R2














Unsafe State In Resource-Allocation Graph


*The graph shows that P1 holds an instance of R1 and is requesting an instance of R2, while P2 holds an instance of R2 and is requesting an instance of R1.







Thursday, August 20, 2009

Methods for Handling Deadlock

  • Prevent – write code such that the necessary conditions can never hold simultaneously.
  • Avoid – dynamically ensure that a deadlock state is not reachable.
  • Detect and recover – allow deadlock but provide mechanisms to detect it and recover (e.g., kill a process).
  • Ignore (ostrich approach) – pretend deadlocks cannot occur and reboot if they do.
Deadlock Prevention
  • Disallow one of the four necessary conditions for deadlock
  • Preventing deadlocks by constraining how requests for resources can be made in the system and how they are handled (system design).
  • The goal is to ensure that at least one of the necessary conditions for deadlock can never hold; one common approach, resource ordering, is sketched below.
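
One classic way to disallow the circular-wait condition is to impose a total order on resources and make every process acquire them in ascending order only. A minimal Java sketch of the idea, with hypothetical lock numbers:

import java.util.concurrent.locks.ReentrantLock;

// Sketch: break circular wait by numbering locks and always acquiring
// them in ascending order. If every thread obeys the order, a cycle
// of waiting threads cannot form.
public class OrderedLocking {
    static final ReentrantLock[] locks = { new ReentrantLock(),   // lock 0
                                           new ReentrantLock() }; // lock 1

    // acquire two locks in their global order, regardless of argument order
    static void lockBoth(int a, int b) {
        int lo = Math.min(a, b), hi = Math.max(a, b);
        locks[lo].lock();
        locks[hi].lock();
    }

    static void unlockBoth(int a, int b) {
        int lo = Math.min(a, b), hi = Math.max(a, b);
        locks[hi].unlock();  // release in reverse order
        locks[lo].unlock();
    }

    public static void main(String[] args) {
        Runnable task = () -> { lockBoth(1, 0); unlockBoth(1, 0); };
        new Thread(task).start();
        new Thread(task).start();
    }
}

Because every thread takes lock 0 before lock 1, a cycle of waiters cannot form no matter how the arguments are ordered at the call site.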
Deadlock Detection

  • Always grant resource requests when possible. Periodically check for deadlocks. If a deadlock exists, recover from it. A sketch of the standard periodic check follows.
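
With multiple instances per resource type, the periodic check is usually the standard detection algorithm over an Allocation matrix and an outstanding Request matrix. A sketch with made-up numbers (here the two processes block each other, so both are reported):

// Sketch of the classic deadlock-detection algorithm: repeatedly find a
// process whose outstanding request can be met by the currently available
// resources, pretend it finishes (reclaiming its allocation), and mark it.
// Any process never marked is deadlocked. The numbers below are made up.
public class Detect {
    public static void main(String[] args) {
        int[] avail   = {0, 0};                // free instances of R1, R2
        int[][] alloc = {{1, 0}, {0, 1}};      // what P1, P2 hold
        int[][] req   = {{0, 1}, {1, 0}};      // what P1, P2 still want
        boolean[] done = new boolean[alloc.length];

        boolean progress = true;
        while (progress) {
            progress = false;
            for (int p = 0; p < alloc.length; p++) {
                if (done[p] || !leq(req[p], avail)) continue;
                for (int r = 0; r < avail.length; r++) avail[r] += alloc[p][r];
                done[p] = true;
                progress = true;               // pretend p ran to completion
            }
        }
        for (int p = 0; p < done.length; p++)
            if (!done[p]) System.out.println("P" + (p + 1) + " is deadlocked");
    }

    static boolean leq(int[] need, int[] avail) {
        for (int i = 0; i < need.length; i++)
            if (need[i] > avail[i]) return false;
        return true;
    }
}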
Deadlock Recovery
  • Abort all deadlocked processes and release their resources - too drastic - will lead to loss of work.
  • Abort one process at a time, releasing resources until no deadlock remains. How do we determine which process to abort first? Priority ordering, or the process which has done the least work.
  • Selectively restart processes from a previous checkpoint, i.e. before a process claimed any resources - difficult to achieve, sometimes impossible.
  • Successively withdraw resources from a process and give them to another process until the deadlock is broken. How to choose which processes and which resources?
  1. Complex decisions due to the large number of processes present within a system.
  2. Difficult to automate.
  3. Use an operator to resolve conflicts - BUT this requires the operator to have skill and understanding of what the processes are actually doing.
Deadlock Characterization

Necessary conditions:

1) Mutual exclusion:
• at least one resource must be held in a non-sharable mode
2) Hold and wait:
• a process must be holding at least one resource and waiting for another
3) No preemption:
• a resource cannot be forcibly taken away from the process holding it
4) Circular wait:
• e.g., A is waiting for B, who is waiting for C, who is waiting for A (see the demo below)
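
All four conditions appear together in the textbook two-thread example below: each monitor is exclusive (mutual exclusion), each thread holds one monitor while waiting for the other (hold and wait), Java monitors cannot be revoked (no preemption), and the waits form a cycle (circular wait). This minimal demo really does hang, which is the point:

// Minimal demonstration of circular wait: each thread holds one monitor
// and then blocks waiting for the other, so neither can proceed.
public class TwoLockDeadlock {
    static final Object a = new Object(), b = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (a) {
                sleep(100);          // give the other thread time to grab b
                synchronized (b) { } // waits for b while holding a
            }
        }).start();
        new Thread(() -> {
            synchronized (b) {
                sleep(100);
                synchronized (a) { } // waits for a while holding b -> cycle
            }
        }).start();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { }
    }
}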

Thursday, August 13, 2009

Real Time Scheduling

Correctness of the system may depend not only on the logical result of the computation but also on the time when these results are produced, e.g.

–> Tasks attempt to control events or to react to events that take place in the outside world
–> These external events occur in real time and processing must be able to keep up
–> Processing must happen in a timely fashion, neither too late nor too early



Thread Scheduling

->Distinction between user-level and kernel-level threads
OS only schedules kernel-level threads. User-level threads are scheduled through a direct or indirect (LWP) mapping
->Many-to-one and many-to-many models, thread library schedules user-level threads to run on LWP
->Known as process-contention scope (PCS) since scheduling competition is within the process
->Kernel thread scheduled onto available CPU is system-contention scope (SCS) – competition among all threads in system
->Typically – PCS is priority based. Programmer can set user-level thread priorities (see the Java example below)
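
In Java, for instance, PCS-style priorities are set from user code with Thread.setPriority; how strongly the underlying scheduler honors them is platform-dependent:

// Sketch: setting per-thread priorities from user code. Whether and how
// the underlying scheduler honors these hints varies by platform.
public class PriorityDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> System.out.println("background work"));
        worker.setPriority(Thread.MIN_PRIORITY);  // hint: schedule me last
        Thread urgent = new Thread(() -> System.out.println("urgent work"));
        urgent.setPriority(Thread.MAX_PRIORITY);  // hint: schedule me first
        worker.start();
        urgent.start();
    }
}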


Thread scheduling criteria:

-> a priority, or in fact usually multiple "priority" settings;
-> a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority;
-> a state, notably "runnable" vs. "waiting";
-> metrics about the behaviour of threads, such as recent CPU usage or the time since it last ran (i.e., had a share of CPU), or the fact that it has "just received an event it was waiting for".


Multiprocessor Scheduling


-> is an NP-complete optimization problem
-> Given a set of runnable threads and a set of CPUs, assign threads to CPUs
-> Same considerations as uniprocessor scheduling
(fairness, efficiency, throughput, response time, …)
-> But also new considerations:
* Load balancing
* Processor affinity
-> We will consider only shared-memory multiprocessors:
* Central queue – the single queue can become a bottleneck
* Distributed queues – require load balancing between queues (a sketch of this approach follows)
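
A sketch of the distributed-queue idea: one deque per CPU, each worker popping locally and stealing from the far end of a neighbor's deque when idle (this is roughly the strategy java.util.concurrent.ForkJoinPool uses). The class and method names are made up for illustration:

import java.util.concurrent.ConcurrentLinkedDeque;

// Sketch: per-CPU run queues with work stealing. Each worker pops from
// its own deque; when that is empty, it steals from the tail of another
// worker's deque to balance load.
public class StealingQueues {
    final ConcurrentLinkedDeque<Runnable>[] queues;

    @SuppressWarnings("unchecked")
    StealingQueues(int nCpus) {
        queues = new ConcurrentLinkedDeque[nCpus];
        for (int i = 0; i < nCpus; i++) queues[i] = new ConcurrentLinkedDeque<>();
    }

    Runnable next(int cpu) {
        Runnable task = queues[cpu].pollFirst();  // local queue first
        if (task != null) return task;
        for (int i = 0; i < queues.length; i++) { // otherwise steal
            if (i == cpu) continue;
            task = queues[i].pollLast();          // steal from the far end
            if (task != null) return task;
        }
        return null;                              // nothing runnable anywhere
    }
}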



Monday, August 10, 2009

CPU Scheduling Algorithms

Scheduling Algorithms
1. First-come, first-served (FCFS) scheduling
2. Shortest-job-first (SJF) scheduling
3. Priority scheduling
4. Round-robin scheduling
5. Multilevel queue scheduling
6. Multilevel feedback queue scheduling

-> First-come, First-served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes.
-> Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general priority-scheduling algorithm, which simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation. Aging is a technique to prevent starvation.
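
A minimal sketch of non-preemptive SJF, assuming the burst lengths are known up front (in reality the next burst has to be predicted, e.g. by exponential averaging of previous bursts); the burst values are made up:

import java.util.*;

// Sketch: non-preemptive SJF with burst lengths known in advance.
// Jobs are run in order of increasing burst length.
public class Sjf {
    public static void main(String[] args) {
        int[] bursts = {6, 8, 7, 3};             // hypothetical CPU bursts
        Integer[] order = {0, 1, 2, 3};
        Arrays.sort(order, Comparator.comparingInt(i -> bursts[i]));

        int clock = 0, totalWait = 0;
        for (int i : order) {
            totalWait += clock;                  // this job waited 'clock' units
            clock += bursts[i];
            System.out.println("P" + (i + 1) + " finishes at " + clock);
        }
        System.out.println("average wait = " + (double) totalWait / bursts.length);
    }
}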
-> Priority-based scheduling runs the highest-priority processes first and uses round-robin among processes of equal priority; a process is re-inserted in the run queue behind all processes of greater or equal priority. This allows the CPU to be given preferentially to important processes. The scheduler adjusts dispatcher priorities to achieve the desired overall priorities for the processes, e.g. one process gets 90% of the CPU. Comments: in priority scheduling, processes are allocated to the CPU on the basis of an externally assigned priority. The key to the performance of priority scheduling is in choosing priorities for the processes.
-> Round-robin (RR) scheduling is more appropriate for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted and the process is put at the tail of the ready queue. The major problem is the selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive. The FCFS algorithm is nonpreemptive, the RR algorithm is preemptive. The SJF and priority algorithms may be either preemptive or nonpreemptive.
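
A minimal sketch of RR over remaining burst times, with a made-up quantum of 4 time units:

import java.util.*;

// Sketch: round-robin over remaining burst times with quantum q.
// A process that doesn't finish within q goes to the tail of the queue.
public class RoundRobin {
    public static void main(String[] args) {
        int q = 4;                                  // time quantum (made up)
        int[] remaining = {10, 5, 8};               // hypothetical bursts
        Deque<Integer> ready = new ArrayDeque<>(List.of(0, 1, 2));

        int clock = 0;
        while (!ready.isEmpty()) {
            int p = ready.pollFirst();
            int run = Math.min(q, remaining[p]);
            clock += run;
            remaining[p] -= run;
            if (remaining[p] > 0) ready.addLast(p); // preempted, requeue
            else System.out.println("P" + (p + 1) + " done at t=" + clock);
        }
    }
}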

-> Multilevel queue algorithms allow different algorithms to be used for various classes of processes. The most common is a foreground interactive queue which uses RR scheduling, and a background batch queue, which uses FCFS scheduling. Multilevel feedback queues allow processes to move from one queue to another.

Because such a wide variety of scheduling algorithms are available, we need methods to select among them. Analytic methods use mathematical analysis to determine the performance of an algorithm. Simulation methods determine performance by imitating the scheduling algorithm on a “representative” sample of processes, and computing the resulting performance.

Operating systems supporting threads at the kernel level must schedule threads - not processes - for execution. This is the case with Solaris 2 and Windows 2000, where both systems schedule threads using a preemptive priority-based scheduling algorithm, including support for real-time threads. The Linux process scheduler also uses a priority-based algorithm with real-time support as well. The scheduling algorithms for these three operating systems typically favor interactive over batch and CPU-bound processes.
Substantial Information about Threads of at least three OSes
WINDOWS XP THREAD

Implements the one-to-one mapping
Each thread contains
-> A thread id
-> Register set
-> Separate user and kernel stacks
-> Private data storage area
The register set, stacks, and private storage area are known as the context of the threads
The primary data structures of a thread include:
-> ETHREAD (executive thread block)
-> KTHREAD (kernel thread block)
-> TEB (thread environment block)
JAVA THREADS

-> Java threads are managed by the JVM
-> Java threads may be created in either of two ways, sketched below:

Extending Thread class
Implementing the Runnable interface
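
Both creation styles side by side, as a minimal sketch:

// The two creation styles mentioned above.
public class TwoWays {
    // 1. Extending the Thread class
    static class Worker extends Thread {
        @Override public void run() { System.out.println("extended Thread"); }
    }

    // 2. Implementing the Runnable interface
    static class Job implements Runnable {
        @Override public void run() { System.out.println("implemented Runnable"); }
    }

    public static void main(String[] args) {
        new Worker().start();
        new Thread(new Job()).start();
    }
}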

Windows 2000

Implements the one-to-one mapping

Each thread contains

->A thread ID

->Register set

->Separate user and kernel stacks for user and kernel modes

->Private data storage area used by various run-time libraries and dynamic link libraries (DLLs)

->The latter three are known as the context of the thread and are specific to the hardware architecture.





Wednesday, August 5, 2009

Procedure-Consumer Example

PROCEDURE

A Procedure encapsulates a task composed of Steps (and possibly, SubSteps). Procedures are usually performed sequentially, unless individual Steps direct the reader explicitly. Often it is important to assure that certain conditions exist before a procedure is performed, and that the outcome of the procedure matches the expected results. DocBook does not provide explicit semantic markup for these pre- and post-conditions. Instead, they must be described as steps (check the pre-conditions in the first step and the results in the last step), or described outside the body of the procedure.

CONSUMER

Consumer is a broad label that refers to any individuals or households that use goods and services generated within the economy. The concept of a consumer is used in different contexts, so that the usage and significance of the term may vary.
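
In operating-systems texts this heading ordinarily denotes the producer-consumer (bounded-buffer) problem: one thread produces items that another consumes through a fixed-capacity buffer, with each side blocking when the buffer is full or empty. A minimal Java sketch using a blocking queue:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the bounded-buffer producer-consumer problem. put() blocks
// when the buffer is full and take() blocks when it is empty, which is
// exactly the synchronization the problem calls for.
public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(5); // capacity 5

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) buffer.put(i);          // blocks if full
            } catch (InterruptedException ignored) { }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++)
                    System.out.println("consumed " + buffer.take()); // blocks if empty
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        consumer.start();
    }
}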
Thread Library

The threads library allows concurrent programming in Objective Caml. It provides multiple threads of control (also called lightweight processes) that execute concurrently in the same memory space. Threads communicate by in-place modification of shared data structures, or by sending and receiving data on communication channels. The threads library is implemented by time-sharing on a single processor. It will not take advantage of multi-processor machines. Using this library will therefore never make programs run faster. However, many programs are easier to write when structured as several communicating processes.
Kernel threads

=> Supported by the Kernel
=> Kernel can run them on multiple processors in parallel
User thread

=> Thread management done by user-level threads library
=> includes a set of registers and a stack, and shares the entire address space with the other threads in the enclosing process
=> is handled entirely in user code, usually by a special library that provides at least start, swap and suspend calls
Thread





In computer science, a thread of execution results from a fork of a computer program into two or more concurrently running tasks. The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.

Multithreading models

•There are three dominant models for thread libraries, each with its own trade-offs
–many threads on one LWP (many-to-one)
–one thread per LWP (one-to-one)
–many threads on many LWPs (many-to-many)
•Similar models can apply to scheduling kernel threads to real CPUs
Many-to-one
=> Many user-level threads mapped to single kernel thread

•In this model, the library maps all threads to a single lightweight process
•Advantages:

–totally portable
–easy to do with few systems dependencies

•Disadvantages:

–cannot take advantage of parallelism

–may have to block for synchronous I/O (though there is a clever technique for avoiding this)

•Mainly used in language systems, portable libraries
One-to-one

=>Each user-level thread is mapped to a kernel thread

•In this model, the library maps each thread to a different lightweight process
•Advantages:
–can exploit parallelism, blocking system calls
•Disadvantages:
–thread creation involves LWP creation
–each thread takes up kernel resources
–limiting the number of total threads
•Used in LinuxThreads and other systems where LWP creation is not too expensive
Many-to-many

=> Allows many user-level threads to be mapped to many kernel threads
=> Allows the operating system to create a sufficient number of kernel threads

•In this model, the library has two kinds of threads: bound and unbound
–bound threads are each mapped to a single lightweight process
–unbound threads may be mapped to the same LWP
•Probably the best of both worlds
•Used in the Solaris implementation of Pthreads (and several other Unix implementations)
Benefits of Multi-Threaded Programming

Multi-threading as a widespread programming and execution model allows multiple threads to exist within the context of a single process. These threads share the process' resources but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. However, perhaps the most interesting application of the technology is when it is applied to a single process to enable parallel execution on a multiprocessor system.

This advantage of a multithreaded program allows it to operate faster on computer systems that have multiple CPUs, CPUs with multiple cores, or across a cluster of machines — because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions, and other non-intuitive behaviors. In order for data to be correctly manipulated, threads will often need to rendezvous in time in order to process the data in the correct order. Threads may also require atomic operations (often implemented using semaphores) in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.

Another advantage of multithreading, even for single-CPU systems, is the ability for an application to remain responsive to input. In a single threaded program, if the main execution thread blocks on a long running task, the entire application can appear to freeze. By moving such long running tasks to a worker thread that runs concurrently with the main execution thread, it is possible for the application to remain responsive to user input while executing tasks in the background.


Operating systems schedule threads in one of two ways:

1. Preemptive multithreading is generally considered the superior approach, as it allows the operating system to determine when a context switch should occur. The disadvantage to preemptive multithreading is that the system may make a context switch at an inappropriate time, causing priority inversion or other negative effects which may be avoided by cooperative multithreading.

2. Cooperative multithreading, on the other hand, relies on the threads themselves to relinquish control once they are at a stopping point. This can create problems if a thread is waiting for a resource to become available.

Traditional mainstream computing hardware did not have much support for multithreading, as switching between threads was generally already quicker than full process context switches. Processors in embedded systems, which have higher requirements for real-time behavior, might support multithreading by decreasing the thread-switch time, perhaps by allocating a dedicated register file for each thread instead of saving/restoring a common register file. In the late 1990s, the idea of executing instructions from multiple threads simultaneously became known as simultaneous multithreading. This feature was introduced in Intel's Pentium 4 processor under the name Hyper-Threading.