OS Management Functions
Although the range of services and value-added features provided by a modern operating system is continually growing, there are four fundamental operating system management functions that are implemented by all operating systems. Each of these management functions is briefly described below in order to provide an overall context for what follows. The four main operating system management functions (each of which will be dealt with in greater depth elsewhere) are:
- Process management
- Memory management
- File and disk management
- I/O system management
Process management
The term process refers here to program code that has been loaded into a computer's memory so that it can be executed by the central processing unit (CPU). In a multiprogramming system, there will be a number of competing processes. The operating system must ensure that each process gets a fair share of the CPU's time. Before a program can be executed, at least part of the program's executable code must be loaded into memory as a process. The operating system must then determine when the CPU can be made available to the process, and for how long. Once a process controls the CPU, its instructions will be executed until its allotted time expires, or until it terminates, or until it requests an input or output operation. In the latter case, the operating system will service the I/O request and suspend the execution of the process until the I/O request has been satisfied, and the CPU is once more available (on suspension of one process, the CPU is made available to the next waiting process). In order to be able to schedule the execution of multiple processes, the operating system must maintain a significant amount of information about each process, including the location of the process in memory, the current state of the process (i.e. running, ready, or blocked), the address of the last program instruction executed, and whether or not the process is awaiting the completion of an I/O operation.
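The per-process bookkeeping and time-slicing described above can be sketched in a few lines of Python. This is a toy model, not any real operating system's scheduler: the record fields and the simple round-robin policy are illustrative assumptions.

```python
from collections import deque
from dataclasses import dataclass

# Hypothetical process record holding the kind of information the text
# describes: current state, last instruction executed, pending work.
@dataclass
class PCB:
    pid: int
    state: str = "ready"          # "running", "ready", or "blocked"
    program_counter: int = 0      # address of the next instruction
    remaining_work: int = 0       # instructions left before termination

def round_robin(processes, quantum):
    """Run each ready process for at most `quantum` units until all finish."""
    ready = deque(processes)
    order = []                    # which process ran in each turn
    while ready:
        proc = ready.popleft()
        proc.state = "running"
        ran = min(quantum, proc.remaining_work)
        proc.remaining_work -= ran
        proc.program_counter += ran
        order.append(proc.pid)
        if proc.remaining_work > 0:
            proc.state = "ready"  # time slice expired; back of the queue
            ready.append(proc)
        else:
            proc.state = "terminated"
    return order

procs = [PCB(pid=1, remaining_work=5), PCB(pid=2, remaining_work=3)]
print(round_robin(procs, quantum=2))  # → [1, 2, 1, 2, 1]
```

Note how the scheduler needs nothing from a process beyond its control record: that is precisely why the operating system must keep this information up to date for every process.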
Memory management
In order for a process to be executed by the processor, it must first be loaded into working memory (random access memory, or RAM). In a single-user, single-tasking system, all of the memory not required by the operating system is allocated to the program. In a multi-tasking system, each process requires its own separate area of memory. In order to control the use of memory, the operating system must impose some kind of structure that allows it to address individual blocks of memory and allocate them to processes. The system of addressing used and the size of the memory blocks allocated varies from one operating system to another, but virtually all operating systems use fixed-size blocks of memory, as this simplifies the task of moving data from secondary storage into memory (and vice versa). Memory is allocated to processes dynamically (as and when needed), and released when no longer needed. Modern operating systems can overcome the limitations of having only a relatively small amount of working memory available by using disk space to create virtual memory. Any programs loaded into memory but not currently running can be temporarily moved from memory and stored on the computer’s hard drive. This increases the amount of memory available for other programs, and removes the need for programmers to limit the size of a program.
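The idea of dividing memory into fixed-size blocks that are handed out on demand and need not be contiguous can be illustrated with a minimal sketch. This is not a real allocator; the class and method names are invented for the example.

```python
# Toy fixed-size block allocator: memory is a pool of equal-sized blocks,
# allocated to processes on demand and released when no longer needed.
class BlockAllocator:
    def __init__(self, num_blocks):
        self.free = set(range(num_blocks))   # indices of free blocks
        self.owner = {}                      # block index -> process id

    def allocate(self, pid, count):
        """Give `pid` any `count` free blocks; they need not be contiguous."""
        if count > len(self.free):
            return None                      # not enough memory available
        blocks = [self.free.pop() for _ in range(count)]
        for b in blocks:
            self.owner[b] = pid
        return blocks

    def release(self, pid):
        """Free every block owned by `pid`."""
        for b, p in list(self.owner.items()):
            if p == pid:
                del self.owner[b]
                self.free.add(b)

mem = BlockAllocator(num_blocks=8)
a = mem.allocate(pid=1, count=3)
b = mem.allocate(pid=2, count=4)
print(len(mem.free))   # → 1
mem.release(pid=1)
print(len(mem.free))   # → 4
```

Because every block is the same size, any free block will do: the allocator never has to search for a contiguous run, which is exactly the simplification the paragraph above describes.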
Memory management in a multiprogramming system is a complex task. The operating system cannot know in advance what programs the user will want to run, and therefore cannot reserve memory for a program. In some cases, there will not be enough contiguous memory to load a new process into a single area of memory, and it will instead be slotted into several different locations. If there is simply not enough free memory available to load the new process, the operating system will have to free up enough memory to load the process by temporarily moving one or more other processes out of memory and into virtual memory. To make life even more interesting, processes that are "swapped out" of main memory and into virtual memory will almost always be loaded into a completely different area of memory to the one they were forced to vacate.
The ability to use any available memory slots, and to swap processes in and out of memory as and when necessary, means that a large number of programs can be active at the same time (although, for a single processor system, only one process at a time is actually running). There is no wasted space, since any free block of memory can be allocated to a process, and the process itself does not need to have a contiguous block of memory allocated to it in order to run. The downside is the significant amount of overhead incurred by the operating system due to the need to keep track of each process in both memory and virtual memory. The complexity is increased by the fact that a single process may be scattered across any number of memory locations. The operating system must keep track of every part of every process. It must also prevent the memory allocated to one process from being overwritten by another process, while at the same time enabling collaborating processes to communicate, and to share common data and procedures.
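The bookkeeping burden described above can be made concrete with a minimal sketch: each process has a table mapping its logical pages to whatever physical location currently holds them, whether in RAM or swapped out to disk. All names here are hypothetical and the model is deliberately simplified.

```python
# Toy per-process page table: each logical page is recorded as living
# either in a RAM frame or in a swap slot on disk (virtual memory).
class PageTable:
    def __init__(self):
        self.entries = {}    # logical page -> ("ram", frame) or ("swap", slot)

    def map_page(self, page, frame):
        self.entries[page] = ("ram", frame)

    def swap_out(self, page, slot):
        # The process's view of the page is unchanged; only the
        # operating system knows it has moved to disk.
        self.entries[page] = ("swap", slot)

    def locate(self, page):
        return self.entries[page]

pt = PageTable()
pt.map_page(0, frame=7)      # a process's pages may land in any frames
pt.map_page(1, frame=2)
pt.swap_out(1, slot=0)       # page moved out to virtual memory
print(pt.locate(0))          # → ('ram', 7)
print(pt.locate(1))          # → ('swap', 0)
```

When a swapped-out page is brought back, the operating system simply records whichever frame happens to be free at that moment, which is why a returning process almost never reoccupies its old location.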
File and disk management
Most computer systems employ secondary storage devices (magnetic disk, magnetic tape, optical media, flash drives etc.) to provide cheap, non-volatile storage for programs and data. The programs, and the user data they work with, are held in discrete storage units called files. The operating system is responsible for allocating space for files on secondary storage media as and when required. There is no guarantee that a file, especially a large file, will be stored contiguously on a physical disk drive. It will very much depend on the amount of space available. The fuller a disk becomes, the more likely it is that new files will be written to multiple locations. As far as the user is concerned, however, the view of the file presented to them by the operating system will hide the fact that the file has been fragmented into several pieces. The operating system is responsible for keeping track of the location on disk of every piece of every file. In some cases, that can mean keeping track of hundreds of thousands of files and file fragments on a single physical disk. In addition, the operating system must be able to find each file whenever it is required, and carry out read and write operations on it. The operating system is thus responsible for the organisation of the file system, for ensuring that read and write operations to a secondary storage device are secure and reliable, and for keeping access times (the time required to write data to or read data from secondary storage) to a minimum.
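The way a file system records scattered fragments while presenting the user with a single, seamless file can be sketched as follows. This is a toy model with invented names, not the design of any real file system.

```python
# Toy file system: each file's data is split across whatever disk blocks
# happen to be free, and a table records which blocks belong to which file.
BLOCK_SIZE = 4

class ToyFileSystem:
    def __init__(self, num_blocks):
        self.disk = [b""] * num_blocks
        self.free = list(range(num_blocks))
        self.table = {}                      # filename -> list of block numbers

    def write(self, name, data):
        blocks = []
        for i in range(0, len(data), BLOCK_SIZE):
            blk = self.free.pop()            # any free block; may be scattered
            self.disk[blk] = data[i:i + BLOCK_SIZE]
            blocks.append(blk)
        self.table[name] = blocks

    def read(self, name):
        # Reassembling the blocks in order hides the fragmentation
        # from the user entirely.
        return b"".join(self.disk[blk] for blk in self.table[name])

fs = ToyFileSystem(num_blocks=16)
fs.write("report.txt", b"hello secondary storage")
print(fs.read("report.txt"))   # → b'hello secondary storage'
```

The block-number table is the sketch's stand-in for the on-disk structures real file systems maintain for exactly this purpose; losing it would make the scattered fragments unrecoverable, which is why its integrity matters so much.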
I/O system management
Input devices are used to get information into a computer system, and include peripheral devices like the keyboard and mouse now found attached to virtually all computer systems. Output devices receive information from a computer, and include devices such as monitors and printers. Some devices can be used for both input and output (I/O), including network adapters and secondary storage devices. The transfer of data into or out of the computer can take place one character at a time (e.g. keyboard input) or in fixed-size blocks (as for the transfer of data between secondary storage and working memory). In the personal computer systems of the 1980s and 90s, devices such as printers and disk drives were connected to the system’s main circuit board (the mainboard or motherboard) via parallel cables, allowing a number of bits to be sent along the cable at the same time using multiple signal wires. More recently, serial technology (in which the data is transferred one bit at a time along a single wire) has improved to such an extent that most modern I/O devices, including printers and disk drives, are connected to the mainboard via a serial cable. Core system components such as the CPU and random access memory (RAM) modules are still interconnected via high-speed parallel buses, implemented on the mainboard as integrated circuits.
One of the main functions of an operating system is to control access to the input and output devices attached to the system’s mainboard. It must respond to user keystrokes and mouse clicks, interpret I/O requests from user applications and arbitrate when two or more processes require the services of a device at the same time. A request for I/O from a user process is signaled to the operating system using a system call (sometimes called a software interrupt). When a process makes a system call, it effectively hands control of the processor back to the operating system to enable it to service the request. Even the operating system itself does not talk directly to hardware devices, however. It will instead pass the request on to the appropriate device driver. A device driver is a small program that resides in memory and does nothing until called upon by the operating system. Its sole purpose is to relay instructions and data between the operating system and a specific hardware device. This somewhat lengthy chain of command between a user process and a hardware device serves two purposes. First, the availability of a vendor-supplied device driver for each hardware device means that the operating system itself does not need to know the details of every piece of hardware attached to the system, and is thus device-independent. Second, it prevents an application process from accessing hardware devices directly, allowing the operating system to arbitrate between applications competing for the same resources.
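The chain of command described above can be sketched in miniature: a process issues a system call, the operating system looks up the driver registered for the device, and only that driver "talks" to the hardware. The driver classes and the interface here are invented for illustration; they do not correspond to any real driver model.

```python
# Toy drivers: each knows how to relay requests to one kind of device.
class PrinterDriver:
    def handle(self, request):
        return f"printer driver sent: {request}"

class DiskDriver:
    def handle(self, request):
        return f"disk driver read block {request}"

class OperatingSystem:
    def __init__(self):
        self.drivers = {}                    # device name -> driver object

    def register_driver(self, device, driver):
        self.drivers[device] = driver

    def system_call(self, device, request):
        # The OS is device-independent: it simply forwards the request
        # to whichever driver was registered for that device.
        return self.drivers[device].handle(request)

os_ = OperatingSystem()
os_.register_driver("printer", PrinterDriver())
os_.register_driver("disk", DiskDriver())
print(os_.system_call("printer", "page 1"))  # → printer driver sent: page 1
```

Because user processes can reach a device only through `system_call`, the operating system sits between every application and the hardware, which is what allows it to arbitrate between competing requests.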