Introduction

In the early days of computing, only a small number of experimental computers existed, and programming was carried out by researchers who worked directly with the hardware. There was no operating system as we understand the term today; the computer was controlled entirely by programs written in machine code. Only one person could use the machine at a time, so researchers had to book time on the computer in advance.

The high cost of these early computers meant that they had to be used as efficiently as possible. Programmers would therefore prepare their programs in advance on some kind of input medium (punched cards or paper tape, for example), which a computer operator would then load onto the computer. Even so, only one program could be run at a time, and each program still had to control the entire operation of the computer. The program would run to completion, with its output typically being sent to a printer. If the program crashed, the entire contents of memory would be output for the programmer to examine later (this was known as a core dump).

It soon became apparent that much of the code written for each program represented duplicated effort, because it dealt with routine input and output tasks such as reading input from a tape device or sending output to the printer. Common procedures were therefore developed to handle input and output via the standard I/O hardware devices. These "device drivers", as they became known, eliminated the need for programmers to write their own I/O routines, which meant that programs could be produced more efficiently. The same principle was subsequently applied to other common programming functions, such as arithmetic and string handling operations. The resulting code was organised into code libraries, which programmers could use as and when required.

During the 1950s, high-level programming languages were developed that made the task of coding an application program much easier. A program called a compiler would translate the source code into object code (often by first generating assembly language, which an assembler then translated into machine instructions). Another program, called a linker, would then combine the program's object code with any library routines it needed to produce a complete machine code program that the computer could run. This end product was called an executable, because it consisted of a file that could be loaded into memory and executed by the processor. The concept of files that could contain either program code or data meant that a program or dataset could be treated as an entity in its own right and be referred to by a filename.
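
To make the pipeline concrete, here is a minimal sketch using a modern C toolchain (the file name and the cc commands are illustrative; the stages are the same whatever the individual tools are called):

    /* hello.c - source code written in a high-level language (C) */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello, world!\n");  /* calls a routine from the C library */
        return 0;
    }

Compiling and linking are then two distinct steps:

    cc -c hello.c -o hello.o    # compile: translate the source into object code
    cc hello.o -o hello         # link: combine the object code with library routines
    ./hello                     # load and execute the finished program

The first command produces object code containing an unresolved reference to printf; the second resolves that reference against the standard C library to produce the executable.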

In the 1960s, IBM mainframe computers were introduced into large businesses and government departments. These computers could be accessed via terminals by many users simultaneously. Users interacted with the system by entering text-based commands at a command-line interface. A multi-user, multi-tasking operating system would interpret and execute each command, and processing resources were shared between users by allocating a short time slot to each user process (this was known as time-sharing). Initially, the operating system running on a particular computer would be written specifically for that computer, and would not work with a different model. Soon, however, IBM began to develop multi-purpose operating systems that would run on a whole family of computers.
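
The scheduling idea behind time-sharing can be sketched in a few lines of C. The simulation below is purely illustrative (a real time-sharing system relies on hardware timer interrupts and full context switches, which are omitted here), but it shows how a fixed time slot, or quantum, is cycled fairly among a set of user processes:

    /* A minimal round-robin time-sharing simulation (illustrative only). */
    #include <stdio.h>

    #define NPROC   3   /* number of user processes */
    #define QUANTUM 2   /* time units in each slot */

    int main(void)
    {
        int remaining[NPROC] = { 5, 3, 7 };  /* work left for each process */
        int active = NPROC;
        int t = 0;

        while (active > 0) {
            for (int p = 0; p < NPROC; p++) {
                if (remaining[p] == 0)
                    continue;                /* this process has finished */
                int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
                printf("t=%2d: process %d runs for %d unit(s)\n", t, p, slice);
                t += slice;
                remaining[p] -= slice;
                if (remaining[p] == 0)
                    active--;
            }
        }
        return 0;
    }

Because each process receives only a short slot before the next one is scheduled, every user sees steady progress, which is what made interactive use via terminals practical.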

Up until 1970, only the computer hardware was actually sold, with the operating systems, application programs and documentation being supplied by the manufacturers as part of the whole package. In 1970, however, IBM began charging separately for each item of software, which encouraged the development of an independent software industry.

Probably the first major multi-user, multi-tasking operating system to emerge was UNIX, originally written for minicomputers but later widely used on mainframe computers, and still one of the most reliable and widely used operating systems in the world. The advent of the IBM PC in 1981 was accompanied by the emergence of IBM PC-DOS, which was supplied by Microsoft and later marketed virtually unchanged as Microsoft MS-DOS. DOS was a command-line operating system that borrowed many ideas from UNIX and other early operating systems, although it was essentially a single-user, single-tasking system.

In 1985, Microsoft produced the first of the Windows family of operating systems, although it was not until the early 1990s that Windows came into widespread use. These early versions provided a graphical user interface (GUI), but were essentially little more than desktop management systems that ran on top of DOS. Later versions (from Windows 95 onwards) are fully fledged operating systems in their own right, and have steadily increased in both size and sophistication.

The original graphical user interface was developed at Xerox's Palo Alto Research Center (Xerox PARC) in the early 1970s, and later popularised by the Apple Macintosh computer, which was introduced in 1984. Microsoft currently dominates the world market for desktop operating systems, however, with the current version being Windows 7. They have also developed a strong presence in the network operating system market, first with Windows NT Server, then with Windows 2000 Server, and currently with Windows Server 2008, which, like the various desktop versions of Windows, comes in a number of flavours.

Despite Microsoft's seemingly unbreakable stranglehold on the desktop operating system market and, to a lesser extent, the server market, various distributions of Linux are beginning to challenge Windows. Linux is a non-proprietary, UNIX-like operating system, originally written by Linus Torvalds and now developed and maintained by a growing community of open source software developers. Linux runs on most hardware platforms, and current versions provide much the same functionality and features as Windows.

Although in the past it has been regarded as less user-friendly than either Windows or the Apple Macintosh operating system, Linux has undergone rapid improvement in both its ease of use and the ease with which it can be installed and configured. Most Linux distributions are free of charge, and can be downloaded from the distributor's web site. Perhaps the most advantageous feature of all, however, is that the same distribution of Linux can be used for either a desktop or a server installation. Users can install whatever features they require during installation, add further features at a later time, change the operating system configuration as often as they like, and upgrade to the latest version as and when it becomes available, all without any money changing hands. There is also a growing body of robust, good quality open source software available for the Linux platform.