• Correctness of the system may depend not only on the logical result of the computation but also on the time at which those results are produced, e.g.:
–> Tasks attempt to control events or to react to events that take place in the outside world
–> These external events occur in real time and processing must be able to keep up
–> Processing must happen in a timely fashion: neither too late nor too early
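To make "neither too late nor too early" concrete, here is a minimal sketch of a periodic task that sleeps until an absolute release time each cycle, assuming Linux/POSIX clock_nanosleep(); the 10 ms period and do_control_step() are made up for illustration.

#include <time.h>

#define PERIOD_NS 10000000L              /* 10 ms period (assumed for illustration) */

static void do_control_step(void)
{
    /* hypothetical application work done once per period */
}

void periodic_task(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        do_control_step();               /* must finish before the next release */

        /* Advance the release time by one period and sleep until that
         * absolute time, so the next step starts neither late nor early. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}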
Thread Scheduling
-> Distinction between user-level and kernel-level threads
The OS schedules only kernel-level threads; user-level threads are scheduled by mapping them, directly or indirectly (via LWPs), onto kernel threads
-> In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP
-> This is known as process-contention scope (PCS), since the scheduling competition is among threads within the same process
-> Scheduling a kernel thread onto an available CPU is system-contention scope (SCS): competition among all threads in the system
-> Typically PCS is priority based; the programmer can set user-level thread priorities (see the contention-scope sketch below)
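As a hedged illustration of PCS vs. SCS with the POSIX thread API: pthread_attr_setscope() requests PTHREAD_SCOPE_PROCESS (process contention scope) or PTHREAD_SCOPE_SYSTEM (system contention scope). Note that some systems, Linux for example, support only PTHREAD_SCOPE_SYSTEM.

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) { return arg; }

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);

    /* Request system contention scope: the kernel schedules this thread
     * against every other thread in the system (SCS). */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "PTHREAD_SCOPE_SYSTEM not supported here\n");

    pthread_attr_getscope(&attr, &scope);
    printf("contention scope: %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "SCS" : "PCS");

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}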
Thread scheduling criteria:
-> a priority, or in fact usually multiple "priority" settings that we'll discuss below;
-> a quantum, i.e. the number of CPU timeslices allocated to it, which determines how much CPU time the thread gets before it is forced to yield the CPU to another thread of the same or lower priority;
-> a state, notably "runnable" vs "waiting";
-> metrics about the thread's behaviour, such as its recent CPU usage, the time since it last ran (i.e. last had a share of the CPU), or the fact that it has just received an event it was waiting for.
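The criteria above can be pictured as fields of a per-thread record. The struct below is purely illustrative; the names and types do not come from any particular kernel.

enum thread_state { RUNNABLE, WAITING, RUNNING };

struct thread_info {
    int   base_priority;      /* static priority set by the programmer    */
    int   dynamic_priority;   /* adjusted by the scheduler over time      */
    int   quantum_ticks;      /* timeslices left before a forced yield    */
    enum thread_state state;  /* runnable vs waiting                      */
    long  recent_cpu;         /* recent CPU usage (decays over time)      */
    long  last_ran;           /* tick count when it last held the CPU     */
    int   woke_on_event;      /* set when an awaited event just arrived   */
};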
Multiprocessor Scheduling
-> Given a set of runnable threads and a set of CPUs, assign threads to CPUs
-> Finding an optimal assignment is an NP-complete optimization problem
-> Same considerations as uniprocessor scheduling
(Fairness, efficiency, throughput, response time…)
-> But also new considerations:
* Load balancing
* Processor affinity (see the sketch after this list)
-> We will consider only shared-memory multiprocessors
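For processor affinity, one concrete (Linux-specific, non-portable) mechanism is pthread_setaffinity_np(); a minimal sketch, assuming _GNU_SOURCE and a CPU number chosen by the caller:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

int pin_current_thread(int cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);        /* allow this thread to run only on 'cpu' */

    /* Keeping a thread on one CPU preserves its cache contents (affinity),
     * at the cost of less flexibility for load balancing. */
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}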
Central queue – a single shared run queue is simple, but the queue can become a bottleneck
Distributed queue – per-CPU run queues avoid that bottleneck, but require load balancing between the queues (see the sketch below)
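A toy sketch of the distributed-queue idea, with one run queue per CPU and a simple steal-from-a-neighbour step for load balancing; all names here are illustrative, not taken from any real scheduler.

#include <pthread.h>

#define NCPU 4
#define QLEN 128

struct runqueue {
    pthread_mutex_t lock;      /* one lock per queue, not one global lock */
    int tids[QLEN];            /* ids of runnable threads */
    int count;
};

static struct runqueue rq[NCPU];

void rq_init(void)
{
    for (int i = 0; i < NCPU; i++)
        pthread_mutex_init(&rq[i].lock, NULL);
}

/* Pick the next thread for 'cpu'; if its own queue is empty, steal from
 * a neighbouring queue so load is balanced between the queues. */
int pick_next(int cpu)
{
    for (int i = 0; i < NCPU; i++) {
        struct runqueue *q = &rq[(cpu + i) % NCPU];
        pthread_mutex_lock(&q->lock);
        if (q->count > 0) {
            int tid = q->tids[--q->count];
            pthread_mutex_unlock(&q->lock);
            return tid;
        }
        pthread_mutex_unlock(&q->lock);
    }
    return -1;                 /* nothing runnable on any CPU */
}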