
Principles of deadlock

Introduction:
Deadlock Definition:
A set of processes is deadlocked when every process in the set is waiting for an event that can only be generated by another process in the set.
Example: A computer system with two processes
  • A process that is printing a large PostScript job is waiting for more memory.
  • A visualization process with lots of memory is waiting to use the printer.

System Model, Deadlock Characterization:
System Model:
  • A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in the set.
  • Example of a deadlock problem: the system has 2 tape drives, and processes P1 and P2 each hold one tape drive and each needs the other one.
  • A system consists of a finite number of resources to be distributed among a number of competing processes.
  • Resource types R1, R2, ..., Rm (for example, CPU cycles, memory space, I/O devices).
  • The resources may be either physical (printers, tape drives, memory space, and CPU cycles) or logical (files and semaphores).
  • A process must request a resource before using it, and must release the resource after using it.

A process may utilize a resource in only the following sequence:
Request:
If the request cannot be granted immediately, then the requesting process must wait until it can acquire the resource.
Use: The process can operate on the resource (e.g., print on the printer).
Release: The process releases the resource.
  • The request and release of resources are system calls (examples: request and release device, open and close file, and allocate and free memory system calls).
  • Request and release of other resources can be accomplished through the wait and signal operations on semaphores.
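
The request/use/release cycle maps naturally onto the wait and signal operations just mentioned. The following is a minimal sketch (not from the original notes) using a Python semaphore to guard a single non-sharable printer; the names `printer` and `print_job` are illustrative.

```python
import threading

# One permit: a single non-sharable resource (e.g., a printer).
printer = threading.Semaphore(1)

def print_job(name, pages):
    printer.acquire()        # Request: wait until the resource is free
    try:
        # Use: operate on the resource
        print(f"{name} printing {pages} pages")
    finally:
        printer.release()    # Release: hand the resource back (signal)

threads = [threading.Thread(target=print_job, args=(f"P{i}", 3)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```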

Deadlock Characterization:
Mutual exclusion:
Only one process at a time can use a resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

Hold and wait:
A process is holding at least one resource and is waiting to acquire additional resources held by other processes (i.e., you hold a resource and wait for another one).

No preemption:
A resource can be released only voluntarily by the process holding it, after that process has completed its task (e.g., under FCFS scheduling a process uses the CPU until it terminates).

Circular wait:
There exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
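
The four conditions can be seen together in a small two-lock sketch (illustrative only, not part of the original notes): each thread holds one lock under mutual exclusion, waits for the other without releasing what it holds, and the two waits close a cycle. The timeout exists only so the demonstration terminates; with plain blocking acquires both threads would wait forever.

```python
import threading, time

r1 = threading.Lock()   # resource R1 (e.g., a tape drive)
r2 = threading.Lock()   # resource R2 (e.g., a printer)

def worker(first, second, name):
    with first:                        # hold one resource...
        time.sleep(0.1)                # ...long enough for the other thread to grab its lock
        # ...then wait for the other; with opposite orders this is a circular wait
        if second.acquire(timeout=1):  # timeout only so the demo terminates
            second.release()
            print(f"{name} finished")
        else:
            print(f"{name} could not get its second resource; without the timeout it would wait forever")

a = threading.Thread(target=worker, args=(r1, r2, "P0"))
b = threading.Thread(target=worker, args=(r2, r1, "P1"))
a.start(); b.start(); a.join(); b.join()
```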

Deadlock prevention:
Havender in his pioneering work showed that since all four of the conditions are necessary for deadlock to occur, it follows that deadlock might be prevented by denying any one of the conditions.

Elimination of “Mutual Exclusion” Condition:
The mutual exclusion condition must hold for non-sharable resources. That is, several processes cannot simultaneously share a single resource. This condition is difficult to eliminate because some resources, such as the tape drive and printer, are inherently non-sharable. Note that sharable resources, like read-only files, do not require mutually exclusive access and thus cannot be involved in a deadlock.

Elimination of “Hold and Wait” Condition:
  • There are two possibilities for eliminating the second condition. The first alternative is that each process requests, and is granted, all of the resources it needs at once, prior to execution.
  • The second alternative is to disallow a process from requesting resources while it is holding previously allocated resources. In effect, all of the resources a process will need must be requested at once.
  • The system must grant resources on “all or none” basis. If the complete set of resources needed by a process is not currently available, then the process must wait until the complete set is available.
  • While the process waits, however, it may not hold any resources. Thus the “wait for” condition is denied and deadlocks simply cannot occur. This strategy can lead to serious waste of resources.
  • For example, a program requiring ten tape drives must request and receive all ten drives before it begins executing. If the program needs only one tape drive to begin execution and does not need the remaining drives for several hours, then substantial computer resources (nine tape drives) will sit idle for several hours.
  • This strategy can also cause indefinite postponement (starvation), since all of the required resources may never become available at once.
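
As a hedged illustration of the "all or none" rule (the class name `AllOrNoneAllocator` is hypothetical), a process can be made to wait until its complete resource set is available and only then be granted anything, so it never holds some resources while waiting for others:

```python
import threading

class AllOrNoneAllocator:
    """Grant a complete set of resources atomically, or make the caller wait.

    Sketch of the 'all or none' rule: a process never holds some resources
    while waiting for others, so hold-and-wait cannot occur.
    """
    def __init__(self, counts):
        self.available = dict(counts)          # e.g. {"tape": 10, "printer": 1}
        self.cond = threading.Condition()

    def acquire_all(self, request):
        with self.cond:
            # Wait until *every* requested resource can be granted at once.
            while any(self.available[r] < n for r, n in request.items()):
                self.cond.wait()
            for r, n in request.items():
                self.available[r] -= n

    def release_all(self, request):
        with self.cond:
            for r, n in request.items():
                self.available[r] += n
            self.cond.notify_all()

alloc = AllOrNoneAllocator({"tape": 10, "printer": 1})
alloc.acquire_all({"tape": 2, "printer": 1})   # granted only as a complete set
# ... use the resources ...
alloc.release_all({"tape": 2, "printer": 1})
```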

Elimination of “No-preemption” Condition:
  • The no-preemption condition can be alleviated by forcing a process that is waiting for a resource which cannot be allocated immediately to relinquish all of its currently held resources, so that other processes may use them to finish.
  • Suppose a system does allow processes to hold resources while requesting additional resources, and consider what happens when a request cannot be satisfied: one process holds resources that a second process needs in order to proceed, while the second process holds resources needed by the first. This is a deadlock.
  • This strategy therefore requires that when a process holding some resources is denied a request for additional resources, it must release all of its held resources and, if necessary, request them again together with the additional ones. Implementing this strategy effectively denies the "no-preemption" condition.

High Cost:
  • When a process releases its resources, it may lose all of its work up to that point. One serious consequence of this strategy is the possibility of indefinite postponement (starvation).
  • A process might be held off indefinitely as it repeatedly requests and releases the same resources.
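
One way to picture denying no-preemption is a request loop that, when any additional resource is unavailable, releases everything already held and tries again later. This is a sketch under those assumptions, not a standard library facility; repeated failures in the loop are exactly the starvation risk described above.

```python
import threading, random, time

def acquire_or_back_off(locks):
    """Try to acquire every lock; if any is busy, release everything and retry.

    Sketch of denying 'no preemption': a process that cannot get an additional
    resource gives up what it already holds and asks for the whole set again.
    """
    while True:
        held = []
        for lock in locks:
            if lock.acquire(blocking=False):   # non-blocking request
                held.append(lock)
            else:
                for h in held:                 # request denied: release all held resources
                    h.release()
                time.sleep(random.uniform(0.01, 0.05))  # back off, then retry
                break
        else:
            return held                        # got the complete set

tape, printer = threading.Lock(), threading.Lock()
held = acquire_or_back_off([tape, printer])
# ... use the resources ...
for lock in held:
    lock.release()
```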

Elimination of “Circular Wait” Condition:
  • The last condition, the circular wait, can be denied by imposing a total ordering on all of the resource types and then forcing all processes to request resources in that order (increasing or decreasing).
  • This strategy imposes a total ordering of all resource types and requires that each process requests resources in increasing (or decreasing) order of enumeration. With this rule, the resource-allocation graph can never have a cycle.
  • For example, provide a global numbering of all the resources, as shown

1 = Card reader
2 = Printer
3 = Plotter
4 = Tape drive
5 = Card punch

  • Now the rule is this: processes can request resources whenever they want to, but all requests must be made in numerical order. A process may request first a printer and then a tape drive (order: 2, 4), but it may not request first a plotter and then a printer (order: 3, 2). The problem with this strategy is that it may be impossible to find an ordering that satisfies everyone.
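
A sketch of the ordered-request rule using the global numbering above (the helper names are illustrative): every process sorts its requests by resource number before acquiring, so no cycle can form.

```python
import threading

# Global numbering from the notes: requests must be made in increasing order.
RESOURCE_ORDER = {"card reader": 1, "printer": 2, "plotter": 3,
                  "tape drive": 4, "card punch": 5}
locks = {name: threading.Lock() for name in RESOURCE_ORDER}

def acquire_in_order(names):
    """Acquire the named resources in ascending global order.

    Because every process requests resources in the same total order,
    the resource-allocation graph can never contain a cycle.
    """
    ordered = sorted(names, key=RESOURCE_ORDER.__getitem__)
    for name in ordered:
        locks[name].acquire()
    return ordered

def release_all(names):
    for name in reversed(names):
        locks[name].release()

held = acquire_in_order(["tape drive", "printer"])   # acquired as 2 then 4
# ... use the printer and tape drive ...
release_all(held)
```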

Detection and avoidance:
  • This approach requires that the system have some information available up front: each process declares the maximum number of resources of each type that it may need. The system then dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
  • The system's resource-allocation state is defined by the number of available and allocated resources, and the maximum possible demands of the processes. When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.
  • The system is in a safe state if there exists a safe sequence of all processes:
    • Sequence < P1, P2, ... Pn > is safe for the current allocation state if, for each Pi, the resources which Pi can still request can be satisfied by the currently available resources plus the resources held by all of the Pj's, where j < i.
  • If the system is in a safe state, there can be no deadlock. If the system is in an unsafe state, there is the possibility of deadlock.

Example:

Consider a system with 12 magnetic tape drives and 3 processes (P0, P1, and P2).

a) Is the system in a safe state? If so, which sequence satisfies the safety criteria?

available = 3

Process   Maximum Needs   Holding   Needs
P0        10              5         5
P1        4               2         2
P2        9               2         7

b) Is the system in a safe state? If so, which sequence satisfies the safety criteria?

available = 2 (12 drives in total, 10 currently held)

Process   Maximum Needs   Holding   Needs
P0        10              5         5
P1        4               2         2
P2        9               3         6
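
The safe-sequence test defined earlier can be applied mechanically to both parts of this example. The following single-resource sketch (illustrative, not from the notes) reports a safe sequence for part (a) and no safe sequence for part (b):

```python
def safe_sequence(available, holding, needs):
    """Return a safe ordering of process indices, or None if the state is unsafe.

    Single-resource version of the safe-state test described above: a process
    can finish if its remaining need fits in what is currently available; when
    it finishes it returns everything it holds.
    """
    n = len(holding)
    finished = [False] * n
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and needs[i] <= available:
                available += holding[i]       # Pi finishes and releases its drives
                finished[i] = True
                order.append(i)
                progressed = True
    return order if all(finished) else None

# Part (a): available = 3
print(safe_sequence(3, holding=[5, 2, 2], needs=[5, 2, 7]))   # [1, 0, 2] -> safe sequence <P1, P0, P2>
# Part (b): available = 2
print(safe_sequence(2, holding=[5, 2, 3], needs=[5, 2, 6]))   # None -> unsafe
```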

In this scheme, a process which requests a resource that is currently available may still have to wait. Thus, resource utilization may be lower than it would otherwise be.

I/O systems:
Overview:
  • Management of I/O devices is a very important part of the operating system, so important and so varied that entire I/O subsystems are devoted to its operation. (Consider the range of devices on a modern computer: mice, keyboards, disk drives, display adapters, USB devices, network connections, audio I/O, printers, assistive devices, and many special-purpose peripherals.)
  • I/O Subsystems must contend with two (conflicting) trends: (1) The gravitation towards standard interfaces for a wide range of devices, making it easier to add newly developed devices to existing systems, and (2) the development of entirely new types of devices, for which the existing standard interfaces are not always easy to apply.
  • Device drivers are modules that can be plugged into an OS to handle a particular device or category of similar devices.

Hardware:
  • I/O devices can be roughly categorized as storage, communications, user-interface, and other devices.
    a) Devices communicate with the computer via signals sent over wires or through the air.
    b) Devices connect with the computer via ports, e.g. a serial or parallel port.
  • A common set of wires connecting multiple devices is termed a bus.
    • Buses include rigid protocols for the types of messages that can be sent across the bus and the procedures for resolving contention issues.
    • A typical PC bus structure (the figure is omitted here) includes four common bus types:
    1. The PCI bus connects high-speed, high-bandwidth devices to the memory subsystem (and the CPU).
    2. The expansion bus connects slower, low-bandwidth devices, which typically deliver data one character at a time (with buffering).
    3. The SCSI bus connects a number of SCSI devices to a common SCSI controller.
    4. A daisy-chain bus connects a string of devices to each other like beads on a chain, with only one of the devices directly connected to the host.

Application interface:
  • API stands for "Application Programming Interface" (sometimes written "Application Program Interface"). An API is a set of commands, functions, and protocols which programmers can use when building software for a specific operating system.
  • The API allows programmers to use predefined functions to interact with the operating system, instead of writing them from scratch.
  • All major operating systems, such as Windows, Unix, and Mac OS, provide an application programming interface for programmers.
  • APIs are also used by video game consoles and other hardware devices that can run software programs. While an API makes the programmer's job easier, it also benefits the end user, since programs built on the same API tend to have a similar user interface.

Kernel I/O subsystem:
  • Scheduling - orders I/O requests via a per-device queue, attempting to use devices optimally while still providing priority; some systems implement quality of service (e.g., IPQOS).
  • Buffering - stores data in memory while it is transferred between devices. This helps cope with device speed mismatches and device transfer-size mismatches, and it maintains "copy semantics": data is first copied from user application memory into kernel memory, and the data is then written from kernel memory to the device, which prevents the application from changing the contents of the buffer before the write completes.
  • Caching - kernel buffers in main memory can be reused to cache files that are read or written frequently, which increases performance.
  • Spooling - a buffer that holds output for a device that cannot accept interleaved data streams (i.e., a device that can serve only one request at a time, such as a printer). Spooling also allows a user to see individual data streams and remove streams if desired.
  • Device reservation - provides exclusive access to a device, with system calls for allocation and de-allocation; watch out for deadlock.
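
Buffering between a fast producer and a slower device is the easiest of these services to sketch. The following bounded-buffer example (illustrative, not kernel code) shows how a queue absorbs the speed mismatch between an "application" and a "device driver":

```python
import queue, threading, time

# A bounded buffer decouples a fast producer (application writes) from a
# slower consumer (the device), smoothing out the speed mismatch above.
buffer = queue.Queue(maxsize=8)

def application():
    for block in range(32):
        buffer.put(block)            # blocks only when the buffer is full
    buffer.put(None)                 # sentinel: no more data

def device_driver():
    while True:
        block = buffer.get()
        if block is None:
            break
        time.sleep(0.01)             # the "device" is slower than the producer
        print(f"wrote block {block}")

threads = [threading.Thread(target=application), threading.Thread(target=device_driver)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```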

Stream performance:
The part of the Unix operating system that deals with terminals and other character devices has always been complicated, and in recent versions of the system it became even more so, for two reasons:
  1. Network connections require protocols more ornate than are easily accommodated in the existing structure. The notion of "line disciplines" was only partially successful, mostly because in the traditional system only one line discipline can be active at a time.
  2. The fundamental data structure of the traditional character I/O system, a queue of individual characters (the "clist"), is costly because it accepts and dispenses characters one at a time. Attempts to avoid this overhead by bypassing the mechanism entirely, or by introducing ad hoc routines, sped up the code at the expense of regularity; such patchwork solutions were destroying the modularity of this part of the system, and the streams mechanism was introduced to redo it.
The streams system described here originally ran on about 20 machines in the Information Sciences Research Division of Bell Laboratories; although it was being investigated by other parts of Bell Labs, it was not generally available.