Operating Systems: Internals and Design Principles, 6/E William Stallings - Chapter 10: Multiprocessor and Real-Time Scheduling

Chapter 10: Multiprocessor and Real-Time Scheduling
Operating Systems: Internals and Design Principles, 6/E, William Stallings
Slides by Dave Bremer, Otago Polytechnic, N.Z. ©2008, Prentice Hall

Roadmap
- Multiprocessor Scheduling
- Real-Time Scheduling
- Linux Scheduling
- Unix SVR4 Scheduling
- Windows Scheduling

Classifications of Multiprocessor Systems
- Loosely coupled processors: each processor has its own memory and I/O channels
- Functionally specialized processors (such as an I/O processor): controlled by a master processor
- Tightly coupled multiprocessing: processors share main memory and are controlled by the operating system

Granularity
- Granularity is the frequency of synchronization between processes in a system
- Five categories, differing in granularity:
  - Independent parallelism
  - Coarse parallelism
  - Very coarse-grained parallelism
  - Medium-grained parallelism
  - Fine-grained parallelism

Independent Parallelism
- No explicit synchronization among processes
- Each process is a separate application or job
- Example: a time-sharing system

Coarse and Very Coarse-Grained Parallelism
- Synchronization among processes at a very gross level
- Good for concurrent processes running on a multiprogrammed uniprocessor
- Can be supported on a multiprocessor with little change

Medium-Grained Parallelism
- A single application is a collection of threads
- Threads usually interact frequently, affecting the performance of the entire application

Fine-Grained Parallelism
- Highly parallel applications
- A specialized and fragmented area

Synchronization Granularity and Processes
(table)

Valve Example
- Valve (Half-Life 2, etc.) found that a hybrid approach works best for their games
- Some subsystems work best assigned to a single processor, e.g. sound mixing
- Others can be threaded so that they work on a single processor but gain greatly in performance when spread over multiple processors, e.g. scene rendering

Thread Structure for Rendering Module
(figure)

Scheduling Design Issues
- Scheduling on a multiprocessor involves three interrelated issues:
  - Assignment of processes to processors
  - Use of multiprogramming on individual processors
  - Actual dispatching of a process
- The approach taken depends on the degree of granularity of the applications and on the number of processors available

Assignment of Processes to Processors
- Assuming all processors are equal, it is simplest to treat them as a pooled resource and assign processes to processors on demand
- Should the assignment be static or dynamic?
- Dynamic assignment: threads are moved from the queue for one processor to the queue for another processor
- Static assignment:
  - Permanently assign a process to a processor
  - Dedicate a short-term queue to each processor
  - Less overhead
  - Allows the use of 'group' or 'gang' scheduling (see later)
  - But may leave one processor idle while others have a backlog
  - Solution: use a common queue

Assignment of Processes to Processors (cont.)
- Both dynamic and static methods require some way of assigning a process to a processor
- Two methods: master/slave and peer
- There is, of course, a spectrum of approaches between these two extremes

Master/Slave Architecture
- Key kernel functions always run on a particular processor
- The master is responsible for scheduling; a slave sends service requests to the master
- Disadvantages:
  - Failure of the master brings down the whole system
  - The master can become a performance bottleneck

Peer Architecture
- The kernel can execute on any processor
- Each processor does self-scheduling
- Complicates the operating system, which must make sure that two processors do not choose the same process

Process Scheduling
- Usually processes are not dedicated to processors
- A single queue is used for all processes, or multiple queues are used for priorities
- All queues feed a common pool of processors
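The single shared ready queue described above amounts to a list of ready tasks protected by a lock, from which any idle processor takes its next work item. A minimal C sketch of the idea (the structure and names are illustrative, not part of the original slides):

```c
#include <pthread.h>
#include <stddef.h>

/* One node per ready process or thread. */
struct task {
    int          id;
    struct task *next;
};

/* Global ready queue shared by all processors (FIFO order). */
static struct task     *head, *tail;
static pthread_mutex_t  qlock = PTHREAD_MUTEX_INITIALIZER;

/* Called when a task becomes ready. */
void enqueue(struct task *t)
{
    pthread_mutex_lock(&qlock);
    t->next = NULL;
    if (tail) tail->next = t; else head = t;
    tail = t;
    pthread_mutex_unlock(&qlock);
}

/* Called by any idle processor to pick its next task;
 * returns NULL if nothing is ready. */
struct task *dispatch(void)
{
    pthread_mutex_lock(&qlock);
    struct task *t = head;
    if (t) {
        head = t->next;
        if (!head) tail = NULL;
    }
    pthread_mutex_unlock(&qlock);
    return t;
}
```

The single lock is also the weakness noted later under load sharing: every dispatch on every processor contends for qlock, which can become a bottleneck as the number of processors grows.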
Thread Scheduling
- Threads execute separately from the rest of the process
- An application can be a set of threads that cooperate and execute concurrently in the same address space
- Dramatic gains in performance are possible on multiprocessor systems compared with running on uniprocessor systems

Approaches to Thread Scheduling
- Many proposals exist, but four general approaches stand out:
  - Load sharing
  - Gang scheduling
  - Dedicated processor assignment
  - Dynamic scheduling

Comparison One and Two Processors
(Figure 10.2)

Load Sharing
- Processes are not assigned to a particular processor
- Load is distributed evenly across the processors
- No centralized scheduler is required
- The global queue can be organized and accessed using any of the schemes discussed in Chapter 9

Disadvantages of Load Sharing
- The central queue needs mutual exclusion, which can lead to a bottleneck
- Preempted threads are unlikely to resume execution on the same processor
- If all threads are in the global queue, all threads of a program will not gain access to the processors at the same time

Gang Scheduling
- A set of related threads is scheduled to run on a set of processors at the same time
- Parallel execution of closely related processes may reduce overhead such as process switching and synchronization blocking

Example Scheduling Groups
(figure)

Dedicated Processor Assignment
- When an application is scheduled, each of its threads is assigned to a processor
- Some processors may be idle, and there is no multiprogramming of processors
- But in highly parallel systems processor utilization is less important than effectiveness, and avoiding process switching speeds up programs

Application Speedup
(figure)

Dynamic Scheduling
- The number of threads in a process is altered dynamically by the application
- This allows the OS to adjust the load to improve utilization

Roadmap
- Multiprocessor Scheduling
- Real-Time Scheduling
- Linux Scheduling
- Unix SVR4 Scheduling
- Windows Scheduling

Real-Time Scheduling
- The correctness of the system depends not only on the logical result of the computation but also on the time at which the results are produced
- Tasks or processes attempt to control or react to events that take place in the outside world
- These events occur in "real time" and tasks must be able to keep up with them

Hard vs Soft
- A "hard" real-time task is one that must meet its deadline
- A "soft" real-time task has a deadline that is desirable but not mandatory

Periodic vs Aperiodic
- Periodic tasks are completed regularly, once per period T or T time units apart
- Aperiodic tasks have time constraints on either their deadlines or their start times
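A periodic task with period T is commonly realized on a POSIX system by sleeping until an absolute release time and then advancing that time by T, so that timing error does not accumulate. A sketch assuming Linux/POSIX clock_nanosleep with CLOCK_MONOTONIC; the 10 ms period and do_work() are hypothetical placeholders:

```c
#define _GNU_SOURCE
#include <time.h>

#define PERIOD_NS 10000000L   /* hypothetical period T = 10 ms */

extern void do_work(void);    /* body of the periodic task (placeholder) */

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

void periodic_task(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);   /* first release time */

    for (;;) {
        do_work();                           /* one instance of the task */
        timespec_add_ns(&next, PERIOD_NS);   /* release time of the next instance */
        /* Sleep until the absolute release time; absolute sleeps avoid drift. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```

If do_work() ever runs longer than the period, an instance misses its release time, which is exactly the deadline concern that hard real-time scheduling must rule out.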
Real-Time Systems
- Control of laboratory experiments
- Process control in industrial plants
- Robotics
- Air traffic control
- Telecommunications
- Military command and control systems

Characteristics of Real-Time Systems
- Real-time systems have requirements in five general areas:
  - Determinism
  - Responsiveness
  - User control
  - Reliability
  - Fail-soft operation

Determinism
- Operations are performed at fixed, predetermined times or within predetermined time intervals
- Concerned with how long the operating system delays before acknowledging an interrupt, and with whether there is sufficient capacity to handle all requests within the required time

Responsiveness
- How long, after acknowledgment, it takes the operating system to service the interrupt
- Responsiveness includes:
  - The amount of time to begin execution of the interrupt service routine
  - The amount of time to perform the interrupt service routine
  - The effect of interrupt nesting

User Control
- It is essential to allow the user fine-grained control over task priority
- The user may also be allowed to specify such things as paging or process swapping, which disk transfer algorithms to use, and the rights of processes

Characteristics (cont.)
- Reliability: degradation of performance may have catastrophic consequences
- Fail-soft operation: the ability of a system to fail in such a way as to preserve as much capability and data as possible
- Stability is important: even if meeting all deadlines is impossible, critical deadlines are still met

Features of a Real-Time OS
- Fast process or thread switch
- Small size
- Ability to respond to external interrupts quickly
- Multitasking with interprocess communication tools such as semaphores, signals, and events

Features of a Real-Time OS (cont.)
- Use of special sequential files that can accumulate data at a fast rate
- Preemptive scheduling based on priority
- Minimization of intervals during which interrupts are disabled
- Ability to delay tasks for a fixed amount of time
- Special alarms and timeouts
- Round-robin scheduling alone is unacceptable, and priority-driven scheduling alone is unacceptable: combine priorities with clock-based interrupts and immediate preemption

Classes of Real-Time Scheduling Algorithms
- Static table-driven: a static analysis produces a schedule that determines, at run time, when each task must begin execution
- Static priority-driven preemptive: a traditional priority-driven scheduler is used
- Dynamic planning-based: feasibility is determined at run time
- Dynamic best effort: no feasibility analysis is performed

Deadline Scheduling
- Real-time applications are not concerned with sheer speed but with completing tasks by their deadlines
- "Priorities" are a crude tool and may not capture the time-critical element of the tasks

Deadline Scheduling (cont.)
- Information used:
  - Ready time
  - Starting deadline
  - Completion deadline
  - Processing time
  - Resource requirements
  - Priority
  - Subtask scheduler

Preemption
- When only starting deadlines are specified, a nonpreemptive scheduler makes sense
- When completion deadlines are used, preemption may be needed; e.g. if task X is running and task Y is ready, there may be circumstances in which the only way to allow both X and Y to meet their completion deadlines is to preempt X, execute Y to completion, and then resume X to completion

Two Tasks / Periodic Scheduling / Execution Profile / Aperiodic Scheduling
(figures)

Rate Monotonic Scheduling
- Assigns priorities to tasks on the basis of their periods
- The highest-priority task is the one with the shortest period

Task Set / Periodic Task Timing Diagram
(figures)
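Rate monotonic scheduling comes with a well-known sufficient schedulability test: n periodic tasks with execution times Ci and periods Ti can all meet their deadlines if C1/T1 + ... + Cn/Tn <= n(2^(1/n) - 1), a bound that approaches ln 2 ≈ 0.693 as n grows. A small C check of that condition; the three-task set below is illustrative and is not the book's Task Set figure:

```c
#include <math.h>
#include <stdio.h>

/* Sufficient RMS test: sum(Ci/Ti) <= n * (2^(1/n) - 1). */
static int rms_schedulable(const double c[], const double t[], int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += c[i] / t[i];                         /* total utilization */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0); /* RMS utilization bound */
    printf("utilization = %.3f, RMS bound = %.3f\n", u, bound);
    return u <= bound;
}

int main(void)
{
    /* Illustrative task set: (execution time, period) in the same time unit. */
    double c[] = { 20.0, 40.0, 100.0 };
    double t[] = { 100.0, 150.0, 350.0 };

    if (rms_schedulable(c, t, 3))
        printf("schedulable under rate monotonic priorities\n");
    else
        printf("sufficient test fails (may still be schedulable)\n");
    return 0;
}
```

The test is sufficient but not necessary: a task set that fails it may still meet every deadline, which an exact response-time analysis would reveal.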
Priority Inversion
- Can occur in any priority-based preemptive scheduling scheme
- Occurs when circumstances within the system force a higher-priority task to wait for a lower-priority task

Unbounded Priority Inversion
- The duration of a priority inversion depends on the unpredictable actions of other, unrelated tasks

Priority Inheritance
- A lower-priority task inherits the priority of any higher-priority task pending on a resource they share

Linux Scheduling
- Scheduling classes:
  - SCHED_FIFO: first-in-first-out real-time threads
  - SCHED_RR: round-robin real-time threads
  - SCHED_OTHER: other, non-real-time threads
- Within each class multiple priorities may be used

Roadmap
- Multiprocessor Scheduling
- Real-Time Scheduling
- Linux Scheduling
- Unix SVR4 Scheduling
- Windows Scheduling

Linux Real-Time Scheduling Classes
- SCHED_FIFO: first-in-first-out real-time threads
- SCHED_RR: round-robin real-time threads
- SCHED_OTHER: other, non-real-time threads

Linux Real-Time Scheduling
(figure)

Non-Real-Time Scheduling
- Linux 2.6 uses a new scheduler, the O(1) scheduler
- The time to select the appropriate process and assign it to a processor is constant, regardless of the load on the system or the number of processors

Linux Scheduling Data Structures
(figure)

Roadmap
- Multiprocessor Scheduling
- Real-Time Scheduling
- Linux Scheduling
- Unix SVR4 Scheduling
- Windows Scheduling

SVR4 Scheduling
- A complete overhaul of the scheduling algorithm used in earlier UNIX systems
- The new algorithm is designed to give:
  - Highest preference to real-time processes
  - Next-highest preference to kernel-mode processes
  - Lowest preference to other user-mode processes, referred to as time-shared processes

UNIX SVR4 Scheduling
- New features include:
  - A preemptable static-priority scheduler
  - A set of 160 priority levels divided into three priority classes
  - Insertion of preemption points

SVR4 Priority Classes
- Real time (159-100): guaranteed to be selected to run before any kernel or time-sharing process; can preempt kernel and user processes
- Kernel (99-60): guaranteed to be selected to run before any time-sharing process
- Time-shared (59-0): lowest priority

SVR4 Dispatch Queues
(figure)

Roadmap
- Multiprocessor Scheduling
- Real-Time Scheduling
- Linux Scheduling
- Unix SVR4 Scheduling
- Windows Scheduling

Windows Scheduling
- Priorities are organized into two bands or classes: real time and variable
- Priority-driven preemptive scheduler

Windows Thread Dispatching Priorities / Windows Priority Relationship
(figures)

Multiprocessor Scheduling (Windows)
- With multiprocessors, multiple threads with the same highest priority share the processor in a round-robin fashion
- Lower-priority threads must wait until the other threads block or have their priority decay
- Lower-priority threads may also have their priority boosted briefly to 15 if they are being starved, to prevent priority inversion
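Both the Windows anti-starvation boost in the last bullet and the priority-inheritance protocol described earlier are ways of containing priority inversion. POSIX threads expose priority inheritance directly through the PTHREAD_PRIO_INHERIT mutex protocol; a minimal sketch, with error handling omitted and availability depending on the platform:

```c
#include <pthread.h>

static pthread_mutex_t shared_lock;

/* Initialize a mutex that applies the priority-inheritance protocol. */
int init_pi_mutex(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* While a thread holds shared_lock, it runs at the priority of the
     * highest-priority thread waiting for it, bounding priority inversion. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    int rc = pthread_mutex_init(&shared_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}

/* The critical section is then used exactly like any other mutex. */
void touch_shared_resource(void)
{
    pthread_mutex_lock(&shared_lock);
    /* ... access the resource shared by high- and low-priority tasks ... */
    pthread_mutex_unlock(&shared_lock);
}
```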
