Commit 5adbaae

mo khan <mo@mokhan.ca>
2021-03-15 01:39:04
work on assigment 2
1 parent 7a9ac36
Changed files (1)
doc/assignment2.md
@@ -5,14 +5,111 @@ This assignment should be submitted after you have completed Unit 2. It is worth
 Instructions: Please answer the following questions in complete sentences. Your answer for each question should be about 150 words. (100 marks total)
 
 1. Define short-term scheduler and long-term scheduler, and explain the main differences between them. (6 marks)
+
+long-term scheduler: selects processes from the job pool and loads them into memory for execution.
+short-term scheduler: selects from among the processes that are ready to execute and allocates the CPU to one of them.
+
+The primary distinction between these two schedulers lies in the frequency of execution.
+The short-term scheduler must select a new process for the CPU frequently.
+A process may execute for only a few milliseconds before waiting for an I/O request.
+Often, the short-term scheduler executes at least once every 100 milliseconds.
+Because of the short time between executions, the short-term scheduler must be fast.
+
+The long-term scheduler executes much less frequently; minutes may separate the creation
+of one new process and the next. The long-term scheduler controls the degree of multiprogramming
+(the number of processes in memory). If the degree of multiprogramming is stable, then the
+average rate of process creation must be equal to the average departure rate of processes
+leaving the system.
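The stability condition in the last sentence can be sketched with a toy simulation (hypothetical function and variable names, not part of the assignment):

```python
# Toy sketch: the degree of multiprogramming stays stable when the
# long-term scheduler's admission rate equals the departure rate.
def simulate(steps, admitted_per_step, finished_per_step, initial=10):
    """Track the number of processes in memory over time."""
    in_memory = initial
    history = []
    for _ in range(steps):
        in_memory += admitted_per_step   # long-term scheduler admits jobs
        in_memory -= finished_per_step   # processes terminate and leave
        history.append(in_memory)
    return history

# Equal rates: the degree of multiprogramming never drifts from 10.
assert simulate(100, 2, 2) == [10] * 100
```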
+
 1. Explain the concept of a context switch. (6 marks)
+
+When an interrupt occurs, the system needs to save the current context of the process
+running on the CPU so that it can restore that context when its processing is done,
+essentially suspending the process and then resuming it.
+
+Switching the CPU to another process requires performing a state save of the current process
+and a state restore of a different process. This task is known as a context switch.
+When a context switch occurs, the kernel saves the context of the old process in its PCB
+and loads the saved context of the new process scheduled to run.
+Context-switch time is pure overhead, because the system does no useful work while switching.
+Its speed varies from machine to machine, depending on the memory speed, the number of
+registers that must be copied, and the existence of special instructions (such as a single
+instruction to load or store all registers).
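The state-save/state-restore sequence can be illustrated with a toy model (the `PCB` class and register names here are illustrative, not from any real kernel):

```python
# Toy model of a context switch: the kernel saves the old process's
# register state into its PCB and loads the new process's saved state.
class PCB:
    """Minimal process control block holding saved register context."""
    def __init__(self, pid):
        self.pid = pid
        self.saved_registers = {}

cpu_registers = {"pc": 0, "sp": 0}  # the "hardware" register file

def context_switch(old, new):
    old.saved_registers = dict(cpu_registers)   # state save of old process
    cpu_registers.update(new.saved_registers)   # state restore of new one

p1, p2 = PCB(1), PCB(2)
p2.saved_registers = {"pc": 500, "sp": 900}
cpu_registers.update({"pc": 100, "sp": 200})    # p1 currently running
context_switch(p1, p2)                          # pure overhead: no useful work
```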
+
 1. Explain the terms at most once and exactly once, and indicate how these terms relate to remote procedure calls. (6 marks)
+
+The remote procedure call was designed as a way to abstract the procedure-call mechanism for use
+between systems with network connections, where the processes involved execute on separate systems.
+The semantics of RPCs allow a client to invoke a procedure on a remote host as it would invoke
+a procedure locally. Making a remote procedure call can be an expensive operation.
+RPCs can fail, or be duplicated and executed more than once, as a result of common network errors.
+
+* at most once: The client may send a message one or more times, and the server (for example, by
+  attaching timestamps to messages and keeping a history of those already processed) detects and
+  ignores duplicates, so the call is executed at most once.
+* exactly once: Builds on at-most-once semantics; the server acknowledges each call, and the client
+  resends the RPC periodically until it receives the ACK for that call, so the call is guaranteed
+  to be executed exactly once.
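A minimal sketch of both semantics (hypothetical message format; real RPC runtimes differ): the server ignores duplicate message IDs, giving at-most-once execution, and the client resends until it sees an ACK, which together yield exactly-once execution.

```python
# Sketch: at-most-once via duplicate detection on the server,
# exactly-once by having the client resend until an ACK arrives.
class Server:
    def __init__(self):
        self.seen = set()       # IDs of messages already executed
        self.executed = 0

    def handle(self, msg_id, deliver=True):
        """Returns an ACK (True) only if the message was delivered."""
        if not deliver:
            return False                    # message lost in the network
        if msg_id not in self.seen:         # at most once: skip duplicates
            self.seen.add(msg_id)
            self.executed += 1
        return True                         # ACK (also re-ACKs duplicates)

def call_exactly_once(server, msg_id, delivery_pattern):
    # Client resends the same msg_id until it receives an ACK.
    for delivered in delivery_pattern:
        if server.handle(msg_id, deliver=delivered):
            return True
    return False

s = Server()
call_exactly_once(s, msg_id=42, delivery_pattern=[False, True, True])
assert s.executed == 1   # lost messages and retries still execute the call once
```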
+
 1. Identify and briefly explain each of the four major categories of benefits of multithreaded programming. (6 marks)
+
+Benefits:
+
+* Responsiveness: Multithreading an interactive application may allow a program to continue running
+  even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
+* Resource sharing: Processes can share resources only through techniques such as shared memory or message passing.
+  Such techniques must be explicitly arranged by the programmer. Threads, however, share the memory and the resources
+  of the process to which they belong by default.
+* Economy: Allocating memory and resources for process creation is costly. Because threads share resources of the process
+  to which they belong, it is more economical to create and context-switch threads.
+  In general, it is more time-consuming to create and manage processes than threads.
+* Scalability: The benefits of multithreading can be greatly increased in a multiprocessor architecture,
+  where threads may be running in parallel on different processors.
+
 1. Briefly describe the benefits and challenges for multithreaded programming that are presented by multicore systems. (8 marks)
+
+Challenges:
+
+* Dividing activities: This involves examining applications to find areas that can be divided into separate,
+  concurrent tasks and thus can run in parallel on individual cores.
+* Balance: While identifying tasks that can run in parallel, programmers must also ensure that the tasks perform equal work of equal value.
+  In some instances, a certain task may not contribute as much value to the overall process as other tasks;
+  using a separate execution core to run that task may not be worth the cost.
+* Data splitting: Just as applications are divided into separate tasks,
+  the data accessed and manipulated by the tasks must be divided to run on separate cores.
+* Data dependency: The data accessed by the tasks must be examined for dependencies between two
+  or more tasks. In instances where one task depends on data from another, programmers must
+  ensure that the execution of the tasks is synchronized to accommodate the data dependency.
+* Testing and debugging: When a program is running in parallel on multiple cores, there are
+  many different execution paths. Testing and debugging such concurrent programs is inherently
+  more difficult than testing and debugging single-threaded applications.
+
 1. Define coarse-grained multithreading and fine-grained multithreading, and explain their differences. (6 marks)
+
+coarse-grained multithreading: a thread executes on a processor until a long-latency event, such as a memory stall, occurs.
+fine-grained multithreading: the processor switches between threads at a much finer level of granularity, typically at the boundary of an instruction cycle.
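The difference in switching granularity can be sketched with a toy trace (purely illustrative; `stall` stands for a long-latency event such as a memory stall):

```python
# Sketch: coarse-grained switches only on a long-latency event ("stall"),
# fine-grained switches at every instruction-cycle boundary.
def run(threads, fine_grained, cycles):
    """threads: list of instruction lists; returns a trace of thread ids."""
    trace, current = [], 0
    pcs = [0] * len(threads)                 # per-thread program counters
    for _ in range(cycles):
        instr = threads[current][pcs[current] % len(threads[current])]
        pcs[current] += 1
        trace.append(current)
        if fine_grained or instr == "stall":
            current = (current + 1) % len(threads)  # switch threads
    return trace

t = [["add", "stall", "add"], ["mul", "mul", "stall"]]
fine = run(t, fine_grained=True, cycles=6)    # alternates every cycle
coarse = run(t, fine_grained=False, cycles=6) # runs each thread until a stall
```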
+
 1. Explain process starvation and how aging can be used to prevent it. (6 marks)
+
+A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
+A process that is ready to run but waiting for the CPU can be considered blocked.
+A priority scheduling algorithm can leave some low-priority processes waiting indefinitely.
+In a heavily loaded computer system, a steady stream of higher-priority processes can
+prevent a low-priority process from ever getting the CPU. Generally one of two things will
+happen. Either the process will eventually be run or the computer system will eventually crash
+and lose all unfinished low-priority processes.
+
+A solution to the problem is aging. Aging is a technique of gradually increasing the priority
+of processes that wait in the system for a long time. For example, if priorities range
+from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1
+every 15 minutes. Eventually, even a process with an initial priority of 127 would have
+the highest priority in the system and would be executed.
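The aging rule from the example (priorities range from 127, low, to 0, high, and a waiting process is bumped by 1 every 15 minutes) can be sketched as:

```python
# Sketch of aging: priorities range from 127 (low) to 0 (high);
# every 15 minutes of waiting raises a process's priority by 1.
def age(priority, minutes_waited):
    bumps = minutes_waited // 15        # one bump per 15 minutes waited
    return max(0, priority - bumps)     # lower number = higher priority

# A process that starts at priority 127 reaches the highest priority (0)
# after 127 * 15 minutes of waiting, a bit under 32 hours.
assert age(127, 127 * 15) == 0
```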
+
 1. How does the dispatcher determine the order of thread execution in Windows? (6 marks)
+
+Windows schedules threads using a priority-based, preemptive scheduling algorithm. The portion
+of the kernel that handles scheduling is called the dispatcher. The dispatcher uses a 32-level
+priority scheme and traverses the set of ready queues from highest priority to lowest, running
+the first thread it finds that is eligible to execute. If no ready thread is found, the
+dispatcher executes a special thread known as the idle thread.
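The dispatcher's traversal of per-priority ready queues can be sketched as a toy model (32 levels; names like `idle-thread` are illustrative, not real Windows identifiers):

```python
# Sketch: a dispatcher with 32 priority levels traverses the ready queues
# from highest priority to lowest and runs the first thread it finds.
from collections import deque

NUM_LEVELS = 32
ready = [deque() for _ in range(NUM_LEVELS)]  # index 31 = highest priority

def dispatch():
    for level in range(NUM_LEVELS - 1, -1, -1):   # highest priority first
        if ready[level]:
            return ready[level].popleft()
    return "idle-thread"          # nothing ready: run the idle thread

ready[8].append("worker")
ready[24].append("ui-thread")
assert dispatch() == "ui-thread"  # higher-priority thread wins
assert dispatch() == "worker"
assert dispatch() == "idle-thread"
```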
+
 1. Define critical section, and explain two general approaches for handling critical sections in operating systems. (8 marks)
+
 1. Describe the dining-philosophers problem, and explain how it relates to operating systems. (6 marks)
 1. Define the two-phase locking protocol. (6 marks)
 1. Describe how an adaptive mutex functions. (6 marks)