scheduling multiple CPUs solved


Description


In multiple-processor scheduling, several CPUs are available to execute tasks in parallel, which enables load sharing and faster overall execution. When the processors are identical in functionality, memory, and speed, any available processor can run any process in the queue. When the processors are not identical, the scheduler must account for each processor's speed and available memory before assigning it a task from the queue.
Handling Multiple-Processor Scheduling
In one simple approach, a single processor, known as the master (or master server), handles all scheduling decisions and I/O processing, while the other processors execute only the user code assigned to them.
The second approach, which we use here, is symmetric multiprocessing (SMP), where each processor is self-scheduling. All processes are placed in a common ready queue, and each processor independently picks the next task to execute. A variant of this approach gives each processor its own private ready queue, which largely eliminates the need to share queue data between processors. Load balancing becomes necessary in that variant, because an idle processor pulls work only from its private queue rather than from a common run queue.
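As a rough illustration of the shared-ready-queue variant (a minimal Python sketch, not the code shipped with this solution), the snippet below uses worker threads as stand-in processors that each self-schedule by pulling the next process from one common queue:

import queue
import threading

def smp_demo():
    ready = queue.Queue()              # common ready queue shared by every processor
    for burst in [5, 3, 8, 2, 7, 4]:   # placeholder burst times standing in for processes
        ready.put(burst)

    def processor(cpu_id):
        # Each "processor" self-schedules: it pulls its own next process.
        while True:
            try:
                burst = ready.get_nowait()
            except queue.Empty:
                return                 # queue drained, this processor goes idle
            print(f"CPU {cpu_id} runs a process with burst {burst}")

    workers = [threading.Thread(target=processor, args=(i,)) for i in range(3)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

smp_demo()

The private-queue variant would instead give each thread its own queue plus a load balancer that migrates work toward idle threads.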
QA
1. Suppose that all 6 processors are identical (i.e., same speed and memory). Benchmark each of the following scheduling algorithms that we talked about in class, and compute the average wait time and average turnaround time for each:
a. FIFO
b. SJF
c. RR
Here, we create a CSV list of processors with the specified properties and pass it to the scheduling functions as a command-line parameter. The 250 processes are also generated and fed to each scheduler; a sketch of how the processor file might be parsed and benchmarked follows the list below.
PA,4,12
PB,4,12
PC,4,12
PD,4,12
PE,4,12
PF,4,12
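The purchased solution's loader is not reproduced here; the following minimal sketch, which assumes the CSV rows are name,speed(GHz),memory(GB) as above and that the 250 bursts are generated randomly, shows how the processor file and workload could feed a simple FIFO benchmark:

import csv
import random
from collections import namedtuple

Processor = namedtuple("Processor", "name speed_ghz mem_gb")
Process = namedtuple("Process", "pid arrival burst")

def load_processors(path):
    # Each row is assumed to be name,speed_ghz,mem_gb (e.g. "PA,4,12").
    with open(path) as f:
        return [Processor(n, float(s), int(m)) for n, s, m in csv.reader(f)]

def generate_processes(n=250, seed=0):
    # Synthetic workload: every process arrives at time 0 with a random burst.
    rng = random.Random(seed)
    return [Process(i, 0, rng.randint(1, 50)) for i in range(n)]

def fifo_average_times(processes, num_cpus):
    # FIFO on identical CPUs: the next queued process goes to the CPU that
    # frees up first, and runs to completion without preemption.
    free_at = [0] * num_cpus
    total_wait = total_turnaround = 0
    for p in sorted(processes, key=lambda p: p.arrival):
        cpu = min(range(num_cpus), key=lambda i: free_at[i])
        start = max(free_at[cpu], p.arrival)
        finish = start + p.burst
        free_at[cpu] = finish
        total_wait += start - p.arrival
        total_turnaround += finish - p.arrival
    n = len(processes)
    return total_wait / n, total_turnaround / n

if __name__ == "__main__":
    cpus = load_processors("processors.csv")
    procs = generate_processes()
    wait, tat = fifo_average_times(procs, len(cpus))
    print(f"FIFO: average wait {wait:.1f}, average turnaround {tat:.1f}")

On identical processors the speed and memory columns are effectively ignored; they only matter in the Part 2 and Part 3 sketches further down.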
2. Modern CPU design is moving towards heterogeneous computing architectures. Made famous by ARM's big.LITTLE design, newer CPUs pair a set of power-saving efficiency cores with high-performance cores. Assume that our system has been upgraded to a heterogeneous CPU with half of the cores being efficiency cores, specifically PA = PB = PC = 2 GHz and PD = PE = PF = 4 GHz. Develop an algorithm that minimizes the turnaround time of the set of processes.
Without modifying the functionality of the scheduling functions, we create a CSV with the processor specifications in the format:
PA,2,12
PB,2,12
PC,2,12
PD,4,12
PE,4,12
PF,4,12
Round robin seems to have the best performance in this case, with an average turnaround time of 334817280 and an average waiting time of 164840680.
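One way to make a dispatcher speed-aware, reusing the Processor and Process tuples from the earlier sketch and assuming bursts are measured in cycles so that execution time is burst / speed, is to place each process on whichever core would finish it earliest (a heuristic sketch, not the solution's code):

def heterogeneous_assign(processes, processors):
    # Greedy earliest-finish-time assignment: the execution time of a burst on
    # a core is assumed to be burst / speed_ghz (bursts measured in cycles).
    free_at = {cpu.name: 0.0 for cpu in processors}
    total_wait = total_turnaround = 0.0
    for p in sorted(processes, key=lambda p: p.arrival):
        def finish_time(cpu):
            start = max(free_at[cpu.name], p.arrival)
            return start + p.burst / cpu.speed_ghz
        best = min(processors, key=finish_time)
        start = max(free_at[best.name], p.arrival)
        finish = start + p.burst / best.speed_ghz
        free_at[best.name] = finish
        total_wait += start - p.arrival
        total_turnaround += finish - p.arrival
    n = len(processes)
    return total_wait / n, total_turnaround / n

Keeping this decision outside the scheduling functions mirrors the approach above of changing only the processor CSV rather than the schedulers themselves.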
3. In order to execute a process on a specific processor, sufficient memory has to be available. Assume that the processing nodes are identical in speed to Part 2 but have the following memory availability: PA = PB = PC = 8 GB, and PD = PE = PF = 16 GB. Modify your algorithm from Q2 to assign processes to the previously described processors. Show how well your algorithm minimizes the turnaround time of the set of 200 processes, and compare the results of your solution to the results from Part 2.
Modifying the processor CSV input to this specification gives:
PA,4,8
PB,4,8
PC,4,8
PD,4,16
PE,4,16
PF,4,16
With these processors, RR gives an average turnaround time of 426019120 and an average waiting time of 262831501.
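Extending that idea to memory constraints (again a sketch; the per-process memory requirement mem_req is an assumed input, since the process file format is not shown in this write-up), a core is a candidate only if it has enough memory, and the earliest-finish rule is applied to the eligible set:

def memory_aware_assign(processes, processors, mem_req):
    # mem_req maps a process id to its memory footprint in GB; this is an
    # assumed input, since the write-up does not show the process file format.
    free_at = {cpu.name: 0.0 for cpu in processors}
    total_wait = total_turnaround = 0.0
    for p in sorted(processes, key=lambda p: p.arrival):
        eligible = [cpu for cpu in processors if cpu.mem_gb >= mem_req[p.pid]]
        if not eligible:
            raise ValueError(f"no processor has enough memory for process {p.pid}")
        def finish_time(cpu):
            start = max(free_at[cpu.name], p.arrival)
            return start + p.burst / cpu.speed_ghz
        best = min(eligible, key=finish_time)
        start = max(free_at[best.name], p.arrival)
        finish = start + p.burst / best.speed_ghz
        free_at[best.name] = finish
        total_wait += start - p.arrival
        total_turnaround += finish - p.arrival
    n = len(processes)
    return total_wait / n, total_turnaround / n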
4. Finally, modify your scheduling algorithm so that it can deal with the sequential arrival of
the 250 processes. The scheduler cannot inspect the entire set of processes but must
schedule them one by one in the order that they arrive. What is the best turnaround time
you can achieve?
Using SJF, the average turnaround time for the sequentially arriving processes seems to be the best, at 326790000, with an average waiting time of 161465880.
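For the online case, one simple heuristic, sketched below and not necessarily the SJF variant the solution uses, is to hand each process to the core whose backlog clears first at the moment it arrives, tracked with a heap:

import heapq

def online_dispatch(process_stream, processors):
    # Processes arrive one at a time and must be placed immediately, without
    # looking ahead. Heuristic: send each arrival to the core whose backlog
    # clears first; a heap keyed by (time the core frees up, name) keeps the
    # lookup cheap.
    heap = [(0.0, cpu.name, cpu.speed_ghz) for cpu in processors]
    heapq.heapify(heap)
    total_wait = total_turnaround = 0.0
    count = 0
    for p in process_stream:          # any iterable, consumed one item at a time
        free_at, name, speed = heapq.heappop(heap)
        start = max(free_at, p.arrival)
        finish = start + p.burst / speed
        heapq.heappush(heap, (finish, name, speed))
        total_wait += start - p.arrival
        total_turnaround += finish - p.arrival
        count += 1
    return total_wait / count, total_turnaround / count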
Algorithms
1. FIFO
First in First out automatically executes queued requests and processes in order of their
arrival for each processor until the queue is empty. Since we are using a single queue
the processing loop ends immediately when the queue becomes empty. It is the easiest
and simplest CPU scheduling algorithm since processes which get to the CPU first get
the CPU allocation first without much added thought to it.
2. SJF
This algorithm associates with each process the length of its next CPU burst and it takes
the shortest next CPU burst. When a CPU is available for the use, this is assigned to the
process that has smallest next CPU burst and the SJF scheduling is especially
appropriate for batch jobs for which the run times it well known in anticipation, Where the
SJF scheduling algorithm allows the average minimum time for specific a set of
processes, it is probably optimal. This algorithm favors short processors at the expense
of longer ones. To achieve this, we use the sum of all the burst times when picking a job
from the ready queue.
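A minimal non-preemptive SJF pass over a shared ready queue, under the same identical-processor assumption as Part 1, could look like this (a sketch, not the shipped SJF function):

def sjf_average_times(processes, num_cpus):
    # Non-preemptive SJF on identical CPUs: whenever a CPU frees up, it takes
    # the queued process with the shortest burst among those already arrived.
    pending = sorted(processes, key=lambda p: p.arrival)
    free_at = [0] * num_cpus
    total_wait = total_turnaround = 0
    while pending:
        cpu = min(range(num_cpus), key=lambda i: free_at[i])
        now = free_at[cpu]
        arrived = [p for p in pending if p.arrival <= now] or [pending[0]]
        job = min(arrived, key=lambda p: p.burst)    # shortest next burst wins
        pending.remove(job)
        start = max(now, job.arrival)
        finish = start + job.burst
        free_at[cpu] = finish
        total_wait += start - job.arrival
        total_turnaround += finish - job.arrival
    n = len(processes)
    return total_wait / n, total_turnaround / n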
3. RR
The Round Robin algorithm is designed for time sharing systems. This algorithm is
similar to FIFO scheduling, but preemption is added to alternate between processes.
The queue is ready and allocates each process for a certain time interval until a quantum
slice time. To achieve this, we pass a quantum to the rr function to essentially override
the expected burst times.
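A sketch of that quantum-capped behaviour on a shared ready queue (an illustration rather than the actual rr function, and assuming, per the section below, that all processes are present before scheduling starts):

from collections import deque

def rr_average_times(processes, num_cpus, quantum):
    # Round robin on one shared ready queue: a CPU takes the process at the
    # head of the queue, runs it for at most one quantum, and requeues it if
    # burst time remains. available[pid] prevents a requeued process from
    # resuming before its previous slice (possibly on another CPU) has ended.
    remaining = {p.pid: p.burst for p in processes}
    available = {p.pid: p.arrival for p in processes}
    ready = deque(sorted(processes, key=lambda p: p.arrival))
    free_at = [0] * num_cpus
    finish = {}
    while ready:
        cpu = min(range(num_cpus), key=lambda i: free_at[i])
        p = ready.popleft()
        run = min(quantum, remaining[p.pid])
        start = max(free_at[cpu], available[p.pid])
        end = start + run
        free_at[cpu] = end
        available[p.pid] = end
        remaining[p.pid] -= run
        if remaining[p.pid] > 0:
            ready.append(p)              # preempted: back of the queue
        else:
            finish[p.pid] = end          # this slice completes the process
    n = len(processes)
    avg_turnaround = sum(finish[p.pid] - p.arrival for p in processes) / n
    avg_wait = avg_turnaround - sum(p.burst for p in processes) / n
    return avg_wait, avg_turnaround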
Limitations and Assumptions
We assume that there is a steady stream of processes arriving at all times, which means that no CPU is ever idle during the calculations. I/O and CPU operations are also assumed to incur no latency. All tasks are assumed to have equal importance.
Performance
Each measurement is repeated several times and the average of the runs is used.
1. Turnaround time
Using an internal clock counter:
Turnaround time = Exit time – Arrival time
2. Average waiting time
Waiting time = Turnaround time – Burst time
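For example, with made-up numbers, a process that arrives at time 0, has a total burst of 12, and exits at time 30 has a turnaround time of 30 – 0 = 30 and a waiting time of 30 – 12 = 18.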