1.1 Explain how semaphores work
- The semaphore was proposed by Dijkstra in 1965 as a technique for managing concurrent processes with a simple integer value, known as a semaphore. A semaphore is a non-negative variable shared between threads. It is used to solve the critical-section problem and to achieve process synchronization in a multiprocessing environment.
- How it works:
- The P operation is also called the wait, sleep, or down operation, and the V operation is also called the signal, wake-up, or up operation.
- Both operations are atomic. A binary semaphore is initialized to 1, while a counting semaphore is initialized to the number of available resource instances. Atomic here means that the read, modify, and update of the variable happen as one indivisible step with no preemption, i.e. no other operation that could change the variable is performed in between.
- A critical section is surrounded by the two operations to implement process synchronization: the critical section of a process lies between its P and V operations.
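The P/V pattern above can be sketched with Python's `threading.Semaphore`, where `acquire` plays the role of P and `release` the role of V (the counter and thread count are illustrative):

```python
import threading

# Binary semaphore (initialized to 1) guarding a critical section.
sem = threading.Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(100_000):
        sem.acquire()   # P (wait/down): decrement, block if already 0
        counter += 1    # critical section
        sem.release()   # V (signal/up): increment, wake a waiting thread

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 — the semaphore prevents lost updates
```

Without the semaphore, concurrent increments could interleave and lose updates; with it, the final count is deterministic.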
- Semaphores are of two types:
- Binary Semaphore –
This is also known as a mutex lock. It can take only two values, 0 and 1, and its value is initialized to 1. It is used to implement solutions to critical-section problems with multiple processes.
- Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
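A counting semaphore can be sketched the same way; here a semaphore initialized to 3 models a resource with three instances, and the peak concurrency is tracked to show that the limit holds (the counts and sleep time are illustrative):

```python
import threading
import time

# Counting semaphore: at most 3 threads may hold the resource at once.
pool = threading.Semaphore(3)
lock = threading.Lock()   # protects the bookkeeping counters below
active = 0
peak = 0

def use_resource():
    global active, peak
    with pool:            # P: take one of the 3 instances (blocks if none free)
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)  # hold the resource briefly
        with lock:
            active -= 1
        # V happens automatically when the with-block exits

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3
```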
1.2 Write simple code using semaphores
- This week we used semaphores to solve the classic producer-consumer problem.
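The standard bounded-buffer solution can be sketched with three semaphores: a binary semaphore (mutex) guarding the buffer, and two counting semaphores tracking empty and full slots (the buffer capacity and item count here are illustrative):

```python
import threading
from collections import deque

CAPACITY = 5
buffer = deque()
mutex = threading.Semaphore(1)         # guards access to the buffer
empty = threading.Semaphore(CAPACITY)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
consumed = []

def producer(n):
    for i in range(n):
        empty.acquire()    # P on empty: wait for a free slot
        mutex.acquire()
        buffer.append(i)   # critical section
        mutex.release()
        full.release()     # V on full: one more item available

def consumer(n):
    for _ in range(n):
        full.acquire()     # P on full: wait for an item
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()    # V on empty: one more free slot

p = threading.Thread(target=producer, args=(20,))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True: one producer, one consumer, FIFO order
```

Note the ordering: the producer acquires `empty` before `mutex` (and the consumer `full` before `mutex`); reversing it can deadlock, which previews the pitfalls discussed in 1.3.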
1.3 List the pros and cons of semaphores versus other synchronization variables
Advantages
- In semaphores there is no spinning, so no resources are wasted on busy waiting. Threads intending to access the critical section are queued, and enter the critical section when they are de-queued, which is done by the semaphore implementation itself. Hence no unnecessary CPU time is spent repeatedly checking whether a condition is satisfied before a thread may enter the critical section.
- Semaphores can permit more than one thread to access the critical section, in contrast to alternative synchronization constructs such as monitors, which strictly follow the mutual-exclusion principle. Semaphores therefore allow more flexible resource management.
- Finally, semaphores are machine independent, as they are implemented in the machine-independent code of the microkernel services.

Disadvantages
Problem 1: Programming with semaphores makes life harder, as utmost care must be taken to ensure that P and V operations are paired correctly and in the correct order, so that mutual exclusion holds and deadlocks are prevented. In addition, it is difficult to produce a structured layout for a program, as the Ps and Vs end up scattered all over the place, so modularity is lost. Semaphores are quite impractical for large-scale use.
Problem 2: Semaphore implementations involve a queue. With a FIFO queue there is a real possibility of priority inversion, where a high-priority process that arrives a bit later must wait while a low-priority one occupies the critical section. For example, consider the smokers problem when a new smoker joins who is desperate to smoke: if the agent distributing the ingredients follows FIFO order (so the desperate smoker is served last), the agent may choose ingredients suited to another smoker who would happily wait longer for the next puff.


