Process Scheduling in Operating Systems
Introduction to Process Scheduling
In computing, scheduling refers to the method by which work specified by some means is assigned to resources that complete the work. The work may be processes, threads, or data flows, and the resources may be processors, network links, or expansion cards. The primary aim of scheduling is to keep the system efficient, balanced, and fair.
Goals of Process Scheduling
Process scheduling has several goals:
- Maximizing Throughput: This involves maximizing the total amount of work completed per time unit.
- Minimizing Wait Time: This is the time from when a task becomes ready until it begins execution.
- Minimizing Latency or Response Time: Latency (turnaround time) is the time from when a task becomes ready until it completes; response time is the time from when a task becomes ready until the system produces its first response or output for the user.
- Maximizing Fairness: Ensuring each process receives an equal share of CPU time, or a share appropriate to its priority and workload.
Types of Operating System Schedulers
Operating systems may feature up to three distinct scheduler types:
- Long-term Scheduler (Admission Scheduler): Decides which jobs or processes are to be admitted to the ready queue. It controls the degree of multiprogramming.
- Medium-term Scheduler: Manages the swapping of processes in and out of main memory to manage available resources.
- Short-term Scheduler (CPU Scheduler): Decides which ready, in-memory process to execute next.
Process Scheduler
The process scheduler is an integral part of the operating system, deciding which process runs at a certain point in time. It can be preemptive or cooperative, depending on whether it can forcibly remove a process from the CPU to allocate it to another process.
Long-term Scheduling
The long-term scheduler, or admission scheduler, decides which processes are to be admitted to the system. It ensures a good mix of I/O-bound and CPU-bound processes, which is crucial for system balance.
Example: Suppose a system is currently running 60% CPU-bound and 40% I/O-bound processes. There is no single numeric "ideal mix"; the long-term scheduler's job is to admit new processes so that neither resource sits idle. If it keeps admitting CPU-bound jobs, the I/O devices go underused and the ready queue grows; if it admits only I/O-bound jobs, the CPU idles while they wait on devices. Keeping the mix balanced keeps both the CPU and the I/O devices busy, which prevents bottlenecks and maximizes throughput.
Medium-term Scheduling
The medium-term scheduler temporarily removes processes from main memory and places them in secondary memory (such as a hard disk drive) or vice versa. This helps free up memory for active processes.
Short-term Scheduling
The short-term scheduler, or CPU scheduler, makes frequent decisions on which process to execute next. It can use preemptive or non-preemptive methods.
Dispatcher
The dispatcher is responsible for giving control of the CPU to the process selected by the short-term scheduler. It performs context switches, transitioning the CPU from one process to another.
Scheduling Algorithms
First Come, First Served (FCFS)
FCFS is the simplest scheduling algorithm, which queues processes in the order they arrive.
Advantages:
- Simple and easy to implement.
- Fair in the sense that jobs are executed in the order they arrive.
Disadvantages:
- Can lead to the convoy effect, where short processes wait for a long process to complete.
Example Numerical: Calculate the average waiting time for the following processes using FCFS.
| Process | Arrival Time | Burst Time |
|---------|--------------|------------|
| P1      | 0            | 24         |
| P2      | 1            | 3          |
| P3      | 2            | 3          |
Solution:
- P1: Waiting Time = 0
- P2: Waiting Time = 24 – 1 = 23
- P3: Waiting Time = 27 – 2 = 25
Average Waiting Time = (0 + 23 + 25) / 3 = 16
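To make the arithmetic easy to check, here is a minimal Python sketch of an FCFS waiting-time calculation; the function name and data layout are illustrative choices, not from any particular library, and it reproduces the example above.

```python
# A minimal FCFS waiting-time calculator (illustrative sketch only).
def fcfs_waiting_times(processes):
    """processes: list of (name, arrival, burst), already in arrival order."""
    time, waits = 0, {}
    for name, arrival, burst in processes:
        start = max(time, arrival)       # CPU may sit idle until the process arrives
        waits[name] = start - arrival    # waiting time = start time - arrival time
        time = start + burst             # this completion time is the next start point
    return waits

waits = fcfs_waiting_times([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)])
print(waits, sum(waits.values()) / len(waits))   # {'P1': 0, 'P2': 23, 'P3': 25} 16.0
```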
Shortest Job First (SJF)
SJF executes the process with the shortest burst time first.
Advantages:
- Minimizes average waiting time.
Disadvantages:
- Can lead to starvation if short processes keep arriving.
Example Numerical: Calculate the average waiting time for the following processes using SJF.
| Process | Arrival Time | Burst Time |
|---------|--------------|------------|
| P1      | 0            | 6          |
| P2      | 2            | 8          |
| P3      | 4            | 7          |
| P4      | 5            | 3          |
Solution:
- P1: Waiting Time = 0
- P4: Waiting Time = 6 – 5 = 1
- P3: Waiting Time = 9 – 4 = 5
- P2: Waiting Time = 16 – 2 = 14
Average Waiting Time = (0 + 1 + 5 + 14) / 4 = 5
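A small non-preemptive SJF simulation can be used to verify this result; the sketch below is illustrative only, with the function name and tuple layout chosen for this example.

```python
# Non-preemptive SJF sketch: whenever the CPU frees up, pick the shortest job
# among those that have already arrived.
def sjf_waiting_times(processes):
    """processes: list of (name, arrival, burst)."""
    remaining = list(processes)
    time, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                            # CPU idles until the next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])     # shortest burst time first
        name, arrival, burst = job
        waits[name] = time - arrival
        time += burst
        remaining.remove(job)
    return waits

print(sjf_waiting_times([("P1", 0, 6), ("P2", 2, 8), ("P3", 4, 7), ("P4", 5, 3)]))
# {'P1': 0, 'P4': 1, 'P3': 5, 'P2': 14} -> average 5.0
```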
Priority Scheduling
Processes are assigned priorities, and the highest priority process is executed first.
Advantages:
- Important processes are executed first.
Disadvantages:
- Can lead to starvation if low-priority processes never get CPU time.
Example Numerical: Calculate the average waiting time for the following processes using preemptive Priority Scheduling, where a lower priority number means a higher priority.
| Process | Arrival Time | Burst Time | Priority |
|---------|--------------|------------|----------|
| P1      | 0            | 10         | 3        |
| P2      | 2            | 1          | 1        |
| P3      | 3            | 2          | 4        |
| P4      | 4            | 1          | 5        |
| P5      | 5            | 5          | 2        |
Solution (preemptive, lower priority number = higher priority):
- P1 starts at time 0. At time 2, P2 (priority 1) preempts it and runs from 2 to 3.
- P1 resumes at time 3. At time 5, P5 (priority 2) preempts it and runs from 5 to 10.
- P1 resumes at time 10 and finishes at time 16; P3 then runs from 16 to 18 and P4 from 18 to 19.
- Waiting Times (completion time – arrival time – burst time):
- P1: 16 – 0 – 10 = 6
- P2: 3 – 2 – 1 = 0
- P3: 18 – 3 – 2 = 13
- P4: 19 – 4 – 1 = 14
- P5: 10 – 5 – 5 = 0
Average Waiting Time = (6 + 0 + 13 + 14 + 0) / 5 = 6.6
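Such traces are easy to check with a unit-by-unit simulation; the following Python sketch is purely illustrative (the function name and data layout are our own) and reproduces the 6.6 average.

```python
# Preemptive priority scheduling sketch (lower number = higher priority),
# simulated one time unit at a time for simplicity.
def preemptive_priority(processes):
    """processes: list of (name, arrival, burst, priority)."""
    remaining = {name: burst for name, _, burst, _ in processes}
    completion, time = {}, 0
    while len(completion) < len(processes):
        ready = [(prio, arrival, name) for name, arrival, burst, prio in processes
                 if arrival <= time and remaining[name] > 0]
        if not ready:
            time += 1                            # CPU idle for this tick
            continue
        _, _, name = min(ready)                  # smallest priority number wins
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:
            completion[name] = time
    return {name: completion[name] - arrival - burst
            for name, arrival, burst, _ in processes}

waits = preemptive_priority([("P1", 0, 10, 3), ("P2", 2, 1, 1), ("P3", 3, 2, 4),
                             ("P4", 4, 1, 5), ("P5", 5, 5, 2)])
print(waits, sum(waits.values()) / len(waits))   # averages to 6.6
```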
Round Robin (RR)
Each process is assigned a fixed time slice (quantum), and processes are cycled through.
Advantages:
- Fair, as each process gets an equal share of CPU time.
- Suitable for time-sharing systems.
Disadvantages:
- High overhead if the quantum is too small.
Example Numerical: Calculate the average waiting time for the following processes using Round Robin with a time quantum of 4.
| Process | Arrival Time | Burst Time |
|---------|--------------|------------|
| P1      | 0            | 10         |
| P2      | 1            | 4          |
| P3      | 2            | 6          |
| P4      | 3            | 8          |
Solution:
- Gantt Chart: P1 (4) -> P2 (4) -> P3 (4) -> P4 (4) -> P1 (4) -> P3 (2) -> P4 (4) -> P1 (2)
- Waiting Times (completion time – arrival time – burst time):
- P1: 28 – 0 – 10 = 18
- P2: 8 – 1 – 4 = 3
- P3: 22 – 2 – 6 = 14
- P4: 26 – 3 – 8 = 15
Average Waiting Time = (18 + 3 + 14 + 15) / 4 = 12.5
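The chart and averages above can be reproduced with a small Round Robin sketch; the queueing convention assumed here (new arrivals enter the ready queue before a preempted process is re-queued) is the same one used in the Gantt chart.

```python
from collections import deque

# Round Robin sketch with a fixed quantum (illustrative, not a real scheduler).
def rr_waiting_times(processes, quantum):
    arrivals = deque(sorted(processes, key=lambda p: p[1]))
    remaining = {name: burst for name, _, burst in processes}
    ready, completion, time = deque(), {}, 0

    def admit(now):
        while arrivals and arrivals[0][1] <= now:
            ready.append(arrivals.popleft()[0])

    admit(0)
    while ready or arrivals:
        if not ready:                            # idle until the next arrival
            time = arrivals[0][1]
            admit(time)
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        admit(time)                              # arrivals queue ahead of the preempted job
        if remaining[name] > 0:
            ready.append(name)                   # back of the queue for another turn
        else:
            completion[name] = time
    return {name: completion[name] - arrival - burst
            for name, arrival, burst in processes}

waits = rr_waiting_times([("P1", 0, 10), ("P2", 1, 4), ("P3", 2, 6), ("P4", 3, 8)], 4)
print(waits, sum(waits.values()) / len(waits))   # averages to 12.5
```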
Implementations on Various Operating Systems
Windows
Windows NT-based operating systems use a multilevel feedback queue with dynamic priority adjustment. Threads that interact with users or are I/O-bound get higher priority, ensuring responsiveness.
macOS
macOS also uses a multilevel feedback queue. It prioritizes interactive and system-critical processes to maintain performance and user experience.
Android
Android, based on the Linux kernel, employs the Completely Fair Scheduler (CFS). It ensures that each process gets a fair share of CPU time, optimized for mobile performance.
Linux
Linux uses the CFS, which implements fair queuing to allocate CPU time. The scheduler aims for equal distribution of CPU time among all processes.
UNIX
UNIX systems typically use a multilevel feedback queue, focusing on stability and performance, ensuring fair CPU time distribution.
iOS
iOS, like macOS, uses a multilevel feedback queue to manage process scheduling, optimizing for power efficiency and performance on mobile devices.
Multilevel Queue Scheduling
Multilevel queue scheduling is used for systems that can easily divide processes into different categories, each with its own scheduling needs. For example, foreground (interactive) processes might require a different scheduling approach than background (batch) processes.
Advantages:
- Provides a good balance between different types of processes.
- Can prioritize critical system processes over less critical user processes.
Disadvantages:
- Complex to implement and manage.
- Can lead to starvation if lower-priority queues are neglected.
Example Numerical: Calculate the average waiting time for the following processes using Multilevel Queue Scheduling with two queues: one for system processes (priority 0) and one for user processes (priority 1).
| Process | Arrival Time | Burst Time | Queue  |
|---------|--------------|------------|--------|
| P1      | 0            | 8          | System |
| P2      | 1            | 4          | User   |
| P3      | 2            | 9          | System |
| P4      | 3            | 5          | User   |
Solution:
- System Queue: P1, P3
- User Queue: P2, P4
Scheduling Order: P1 (8) -> P3 (9) -> P2 (4) -> P4 (5)
- Waiting Times (start time – arrival time):
- P1: 0 – 0 = 0
- P3: 8 – 2 = 6
- P2: 17 – 1 = 16
- P4: 21 – 3 = 18
Average Waiting Time = (0 + 6 + 16 + 18) / 4 = 10
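A simplified two-queue simulation verifies these numbers; it assumes, as in this example, that every process arrives before the first one completes, so strict System-over-User priority reduces to serving the System queue first and each queue FCFS. The function name is illustrative only.

```python
# Two-level multilevel queue sketch: System queue has strict priority over the
# User queue; each queue is served FCFS. Valid here because all arrivals occur
# before the first completion, so no cross-queue preemption decisions arise.
def multilevel_queue_waits(system_jobs, user_jobs):
    """Each argument: list of (name, arrival, burst) in arrival order."""
    time, waits = 0, {}
    for name, arrival, burst in system_jobs + user_jobs:
        start = max(time, arrival)
        waits[name] = start - arrival
        time = start + burst
    return waits

waits = multilevel_queue_waits([("P1", 0, 8), ("P3", 2, 9)], [("P2", 1, 4), ("P4", 3, 5)])
print(waits, sum(waits.values()) / len(waits))   # averages to 10.0
```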
Multilevel Feedback Queue Scheduling
Multilevel feedback queue scheduling allows processes to move between queues based on their behavior and execution history. This flexibility makes it suitable for systems with diverse process requirements.
Advantages:
- Adaptive and dynamic, responding to process behavior.
- Reduces starvation by allowing processes to move to higher-priority queues.
Disadvantages:
- Complex to implement.
- Requires careful tuning of parameters and policies.
Example Numerical: Simulate the execution of the following processes using Multilevel Feedback Queue Scheduling with three queues and time slices of 8, 16, and 32 units respectively.
| Process | Arrival Time | Burst Time |
|---------|--------------|------------|
| P1      | 0            | 20         |
| P2      | 2            | 36         |
| P3      | 4            | 12         |
| P4      | 6            | 18         |
Solution:
- Queue 1 (time slice = 8): P1 (8) -> P2 (8) -> P3 (8) -> P4 (8), finishing at time 32
- Queue 2 (time slice = 16): P1 (12) -> P2 (16) -> P3 (4) -> P4 (10), finishing at time 74
- Queue 3 (time slice = 32): P2 (12), finishing at time 86
Scheduling Order: P1 (8) -> P2 (8) -> P3 (8) -> P4 (8) -> P1 (12) -> P2 (16) -> P3 (4) -> P4 (10) -> P2 (12)
- Waiting Times (completion time – arrival time – burst time):
- P1: 44 – 0 – 20 = 24
- P2: 86 – 2 – 36 = 48
- P3: 64 – 4 – 12 = 48
- P4: 74 – 6 – 18 = 50
Average Waiting Time = (24 + 48 + 48 + 50) / 4 = 42.5
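The following Python sketch simulates this three-level feedback queue; the no-promotion policy and the function name are simplifying assumptions made for this example, not a description of any real kernel.

```python
from collections import deque

# MLFQ sketch: three FIFO levels with growing time slices; a process that uses
# up its slice is demoted one level, and higher levels always run first.
def mlfq_completion_times(processes, slices=(8, 16, 32)):
    queues = [deque() for _ in slices]
    arrivals = deque(sorted(processes, key=lambda p: p[1]))
    remaining = {name: burst for name, _, burst in processes}
    completion, time = {}, 0

    def admit(now):
        while arrivals and arrivals[0][1] <= now:
            queues[0].append(arrivals.popleft()[0])

    admit(0)
    while arrivals or any(queues):
        level = next((i for i, q in enumerate(queues) if q), None)
        if level is None:                        # idle until the next arrival
            time = arrivals[0][1]
            admit(time)
            continue
        name = queues[level].popleft()
        run = min(slices[level], remaining[name])
        time += run
        remaining[name] -= run
        admit(time)
        if remaining[name] == 0:
            completion[name] = time
        else:                                    # demote (the bottom queue re-queues itself)
            queues[min(level + 1, len(queues) - 1)].append(name)
    return completion

print(mlfq_completion_times([("P1", 0, 20), ("P2", 2, 36), ("P3", 4, 12), ("P4", 6, 18)]))
# {'P1': 44, 'P3': 64, 'P4': 74, 'P2': 86}
```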
Real-Time Scheduling Algorithms
Real-time systems require scheduling algorithms that can guarantee processes meet their deadlines. Common real-time scheduling algorithms include Rate-Monotonic Scheduling (RMS) and Earliest Deadline First (EDF).
Rate-Monotonic Scheduling (RMS)
RMS assigns priorities based on the periodicity of tasks. Tasks with shorter periods have higher priorities.
Example Numerical: Given three tasks with periods 3, 5, and 7 units and execution times 1, 2, and 2 units respectively, determine if the system is schedulable using RMS.
Solution:
- Task 1 (T1): Period = 3, Execution Time = 1
- Task 2 (T2): Period = 5, Execution Time = 2
- Task 3 (T3): Period = 7, Execution Time = 2
Utilization = (1/3) + (2/5) + (2/7) = 0.333 + 0.400 + 0.286 ≈ 1.019
Since the total utilization exceeds 1, the CPU is overloaded and the tasks are not schedulable under RMS (or any other algorithm). Even for feasible sets, RMS only guarantees schedulability when utilization stays within the Liu & Layland bound n(2^(1/n) − 1), which is about 0.780 for three tasks.
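A short utilization check makes the test explicit; the function below is an illustrative sketch of the Liu & Layland bound, which is a sufficient (not necessary) condition for RMS schedulability.

```python
# RMS schedulability via the Liu & Layland utilization bound:
# the task set is guaranteed schedulable if U <= n * (2^(1/n) - 1).
def rms_utilization_test(tasks):
    """tasks: list of (execution_time, period)."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)               # about 0.780 for n = 3
    return utilization, bound, utilization <= bound

u, bound, ok = rms_utilization_test([(1, 3), (2, 5), (2, 7)])
print(f"U = {u:.3f}, bound = {bound:.3f}, passes RMS bound test: {ok}")
# U = 1.019, bound = 0.780, passes RMS bound test: False
```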
Earliest Deadline First (EDF)
EDF assigns priorities based on deadlines. The task with the nearest deadline is given the highest priority.
Example Numerical: Given the same tasks as above, determine if the system is schedulable using EDF.
Solution:
- Task 1 (T1): Deadline = 3, Execution Time = 1
- Task 2 (T2): Deadline = 5, Execution Time = 2
- Task 3 (T3): Deadline = 7, Execution Time = 2
Initial Scheduling Order: T1 (deadline 3) -> T2 (deadline 5) -> T3 (deadline 7)
EDF is more flexible than RMS because it adjusts priorities dynamically and can schedule any periodic task set with total utilization up to 1. However, this set's utilization is about 1.019, which exceeds 1, so it is not schedulable even under EDF; some deadline will eventually be missed unless an execution time or period is reduced.
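For implicit-deadline periodic tasks (deadline equal to period), EDF feasibility reduces to a one-line utilization check, as sketched below; the second task set is a hypothetical variant added only to show a feasible case.

```python
# EDF feasibility for periodic tasks with deadlines equal to periods:
# total utilization U <= 1 is both necessary and sufficient.
def edf_feasible(tasks):
    """tasks: list of (execution_time, period)."""
    return sum(c / p for c, p in tasks) <= 1

print(edf_feasible([(1, 3), (2, 5), (2, 7)]))   # False: U is about 1.019
print(edf_feasible([(1, 3), (2, 5), (1, 7)]))   # True:  U is about 0.876
```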
I/O Scheduling
I/O scheduling determines the order in which I/O operations are processed. This is crucial for optimizing the performance of storage devices.
First-Come, First-Served (FCFS)
Processes I/O requests in the order they arrive.
Example Numerical: Given a sequence of disk requests: 98, 183, 37, 122, 14, 124, 65, 67. The initial head position is 53. Calculate the total head movement.
Solution:
- Sequence: 53 -> 98 -> 183 -> 37 -> 122 -> 14 -> 124 -> 65 -> 67
Total Head Movement = |53-98| + |98-183| + |183-37| + |37-122| + |122-14| + |14-124| + |124-65| + |65-67| = 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640
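The computation is just a running sum of seek distances, as in this small illustrative sketch:

```python
# Total head movement under FCFS disk scheduling: service requests in arrival
# order and sum the absolute seek distances.
def fcfs_head_movement(start, requests):
    movement, position = 0, start
    for track in requests:
        movement += abs(position - track)
        position = track
    return movement

print(fcfs_head_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))   # 640
```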
Shortest Seek Time First (SSTF)
Selects the request with the shortest seek time from the current head position.
Example Numerical: Given the same sequence of disk requests, calculate the total head movement using SSTF.
Solution:
- Initial head position = 53
- Sequence: 53 -> 65 -> 67 -> 37 -> 14 -> 98 -> 122 -> 124 -> 183
Total Head Movement = |53-65| + |65-67| + |67-37| + |37-14| + |14-98| + |98-122| + |122-124| + |124-183| = 12 + 2 + 30 + 23 + 84 + 24 + 2 + 59 = 236
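A greedy nearest-request loop reproduces the SSTF sequence and total; the sketch below is illustrative and also hints at SSTF's weakness, since distant requests can starve while nearby ones keep arriving.

```python
# SSTF sketch: repeatedly pick the pending request closest to the current head position.
def sstf_head_movement(start, requests):
    pending, position, movement = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda track: abs(position - track))
        movement += abs(position - nearest)
        position = nearest
        pending.remove(nearest)
    return movement

print(sstf_head_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))   # 236
```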
Complex Numerical Examples
Example 1: Preemptive Priority Scheduling
Consider the following set of processes with their arrival times, burst times, and priorities:
| Process | Arrival Time | Burst Time | Priority |
|---------|--------------|------------|----------|
| P1      | 0            | 10         | 3        |
| P2      | 1            | 1          | 1        |
| P3      | 2            | 2          | 4        |
| P4      | 3            | 1          | 5        |
| P5      | 4            | 5          | 2        |
Calculate the average waiting time using preemptive priority scheduling.
Solution:
- At time 0, P1 starts executing.
- At time 1, P2 arrives and preempts P1 since it has higher priority.
- P2 finishes at time 2, and P1 resumes.
- At time 2, P3 arrives but has lower priority than P1.
- At time 3, P4 arrives but has lower priority than P1.
- At time 4, P5 arrives and preempts P1 since it has higher priority.
- P5 finishes at time 9, and P1 resumes.
- P1 finishes at time 16, and P3 starts.
- P3 finishes at time 18, and P4 starts.
- P4 finishes at time 19.
Waiting Times (completion time – arrival time – burst time):
- P1: 16 – 0 – 10 = 6
- P2: 2 – 1 – 1 = 0
- P3: 18 – 2 – 2 = 14
- P4: 19 – 3 – 1 = 15
- P5: 9 – 4 – 5 = 0
Average Waiting Time = (6 + 0 + 14 + 15 + 0) / 5 = 7
Implementation on Various Operating Systems
Windows
Windows uses a priority-based preemptive scheduling algorithm with a multilevel feedback queue. This ensures that critical system tasks receive immediate attention while user applications are managed efficiently.
Example: Windows NT-based operating systems use a 32-level priority scheme (0–31) in which higher numbers mean higher priority: levels 16–31 are reserved for real-time threads, levels 1–15 are used for variable-priority threads, and level 0 is reserved for the system's zero-page thread. The scheduler dynamically boosts and decays priorities within the variable range based on a thread's behavior and resource usage.
macOS
macOS employs a multilevel feedback queue with dynamic priority adjustments. It prioritizes interactive tasks to maintain system responsiveness and user experience.
Example: macOS uses four priority bands for threads – normal, system high priority, kernel mode only, and real-time. Threads are scheduled preemptively, with real-time threads getting the highest priority.
Android
Android, built on the Linux kernel, uses the Completely Fair Scheduler (CFS) to manage process scheduling. CFS ensures that all processes receive a fair share of CPU time.
Example: Android’s scheduler dynamically adjusts process priorities based on the application’s interactivity and resource usage, ensuring a smooth user experience even with multiple applications running.
Linux
Linux’s CFS is designed to provide a balanced and fair scheduling approach. It uses a red-black tree data structure to manage processes, ensuring efficient CPU time distribution.
Example: The Linux kernel’s CFS assigns a virtual runtime to each process, which is used to determine the next process to run. Processes with lower virtual runtimes get higher priority, ensuring fair CPU distribution.
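The following toy Python class illustrates the vruntime idea only; it is not the kernel's code, and it substitutes a binary heap for the red-black tree purely for brevity.

```python
import heapq

# Toy illustration of the CFS idea: each task accumulates "virtual runtime",
# weighted by its priority, and the task with the smallest vruntime runs next.
class ToyCFS:
    def __init__(self):
        self.runqueue = []                       # entries: (vruntime, name, weight)

    def add(self, name, weight=1.0):
        heapq.heappush(self.runqueue, (0.0, name, weight))

    def run_next(self, timeslice):
        vruntime, name, weight = heapq.heappop(self.runqueue)
        vruntime += timeslice / weight           # higher weight => vruntime grows slower
        heapq.heappush(self.runqueue, (vruntime, name, weight))
        return name

sched = ToyCFS()
sched.add("editor", weight=2.0)                  # interactive, higher-priority task
sched.add("batch_job", weight=1.0)
print([sched.run_next(10) for _ in range(6)])    # the editor gets roughly twice the turns
```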
UNIX
UNIX systems typically use a multilevel feedback queue, focusing on stability and performance. This ensures that critical system processes receive the necessary CPU time.
Example: Traditional UNIX systems use a priority-based scheduling algorithm where processes can dynamically change their priority based on their behavior and resource usage.
iOS
iOS uses a similar multilevel feedback queue to macOS, optimizing for power efficiency and performance on mobile devices.
Example: iOS prioritizes foreground applications to ensure a responsive user experience while managing background tasks efficiently to conserve battery life.
Advanced Scheduling Concepts
Fair Share Scheduling
Fair share scheduling ensures that each user or group of users receives a fair share of system resources. This is particularly useful in multi-user systems and cloud environments.
Example: In a cloud computing environment, fair share scheduling can ensure that each tenant receives a fair share of CPU time, preventing any single tenant from monopolizing resources.
Gang Scheduling
Gang scheduling is used in parallel processing environments where related processes are scheduled to run simultaneously on different processors. This minimizes communication delays and improves performance.
Example: In a high-performance computing environment, gang scheduling can be used to ensure that all threads of a parallel application are scheduled to run at the same time, minimizing synchronization overhead.
Placement Questions and Answers
Q1: Explain the difference between preemptive and non-preemptive scheduling. A1: Preemptive scheduling allows the operating system to forcibly remove a running process from the CPU to allocate it to another process, whereas non-preemptive scheduling requires the running process to relinquish control voluntarily.
Q2: What is the convoy effect in process scheduling? A2: The convoy effect occurs when short processes are delayed by a long process, leading to inefficient CPU utilization and increased waiting times for the short processes.
Q3: Describe the purpose of the dispatcher in process scheduling. A3: The dispatcher is responsible for giving control of the CPU to the process selected by the short-term scheduler. It performs context switches, transitioning the CPU from one process to another.
Q4: How does the Completely Fair Scheduler (CFS) in Linux ensure fairness? A4: CFS uses a red-black tree data structure to manage processes and ensures that each process receives a fair share of CPU time. It calculates the fair share based on the time already used by each process.
MCQs
Q1: Which scheduling algorithm can lead to starvation?
- a) FCFS
- b) SJF
- c) Round Robin
- d) Multilevel Queue
Answer: b) SJF
Q2: In which scheduling algorithm is the time quantum used?
- a) FCFS
- b) Priority Scheduling
- c) Round Robin
- d) SJF
Answer: c) Round Robin
Q3: What is the main disadvantage of the FCFS scheduling algorithm?
- a) High scheduling overhead
- b) Convoy effect
- c) Starvation
- d) Complex to implement
Answer: b) Convoy effect
Q4: Which scheduling algorithm dynamically adjusts process priorities?
- a) FCFS
- b) Priority Scheduling
- c) Round Robin
- d) Multilevel Feedback Queue
Answer: d) Multilevel Feedback Queue
FAQs
Q1: What is the difference between long-term and short-term scheduling? A1: Long-term scheduling decides which jobs or processes are to be admitted to the system and controls the degree of multiprogramming. Short-term scheduling, on the other hand, decides which of the ready, in-memory processes is to be executed next.
Q2: How does the Linux kernel implement process scheduling? A2: The Linux kernel uses the Completely Fair Scheduler (CFS), which ensures that all processes receive a fair share of CPU time using a red-black tree data structure.
Q3: Why is I/O scheduling important in operating systems? A3: I/O scheduling determines the order in which I/O operations are processed. This is crucial for optimizing the performance of storage devices and ensuring that processes do not experience excessive wait times for I/O operations.
Q4: What is the role of the dispatcher in CPU scheduling? A4: The dispatcher is responsible for giving control of the CPU to the process selected by the short-term scheduler. It performs context switches, transitioning the CPU from one process to another.
Diagrams
- Gantt Chart for FCFS Scheduling:
- Create a Gantt chart illustrating the execution order of processes in the FCFS algorithm.
- Red-Black Tree for CFS:
- Illustrate the red-black tree data structure used by the Completely Fair Scheduler to manage processes.
- Multilevel Feedback Queue:
- Diagram showing the different priority queues and how processes move between them in a multilevel feedback queue scheduling algorithm.