Operating system
An operating system (OS) is the underlying software that controls the applications and hardware resources of a computer. The major components of an operating system are typically a kernel, drivers, and a user interface, and these components are generally bundled together.
Operating systems can be written so that they are largely invisible to the user; on embedded systems such as cell phones, this is intentional.
Typical Services of an Operating System
To modularize the functions performed by an operating system, its major responsibilities are usually further divided into subsystems. These subsystems are covered in the sections that follow.
Kernel
Note: for a more detailed explanation, see kernel.
Overall, a kernel is responsible for managing the resources provided by the computer, especially the CPU and the memory; for providing hardware access, often in conjunction with drivers; and for providing hardware abstractions such as the file system or the network (e.g. sockets). Because these management tasks often require protecting certain resources from direct user access, the kernel is also responsible for managing access rights and user identification.
It also typically runs in a special "privileged mode" in which it has unrestricted access to all the hardware of the system it is running on. The exact set of services provided by the kernel depends on its design and architecture.
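To make the idea of kernel-provided abstractions concrete, the short C sketch below opens a file and creates a TCP socket through standard POSIX system calls. It is an illustration only: the file name is hypothetical and the POSIX interface is an assumed environment, not part of the original text. In both cases the program never touches the disk controller or the network card directly; the kernel mediates every access.

```c
/* A user-space view of two kernel abstractions: the file system and sockets.
 * Assumes a POSIX system; "example.txt" is a hypothetical file name. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    /* open() asks the kernel to resolve a path name to a file,
     * hiding the details of the underlying disk and file system. */
    int fd = open("example.txt", O_RDONLY);
    if (fd >= 0) {
        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);  /* the kernel copies the data */
        printf("read %zd bytes\n", n);
        close(fd);
    } else {
        perror("open");
    }

    /* socket() asks the kernel for a network endpoint, hiding the
     * details of the network hardware and the protocol stack. */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock >= 0) {
        printf("kernel handed us socket descriptor %d\n", sock);
        close(sock);
    } else {
        perror("socket");
    }
    return 0;
}
```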
There are hundreds of different kernel designs, but they typically fall into one of three categories:
- Micro-kernel architectures such as Mach contain only the basic functions within the kernel and run user-space sub-systems in separate address spaces. The kernel's main function is to coordinate the different sub-systems' requests for hardware and processor time. This design encourages modularity and is intended to increase the kernel's reliability. For example, if the video card driver crashed in a micro-kernel design, only the video subsystem would be affected; the service could even restart automatically, interrupting the user only briefly. In other kernel designs, such a low-level driver crash could take the entire system down with it. While compelling in theory, the micro-kernel design has proven much more complicated in practice, especially because all the sub-systems' accesses to hardware have to be coordinated. Usually this is accomplished with "message passing," where the different subsystems coordinate access to resources by passing messages to the kernel (a simplified sketch of this model appears after this list). The complexity grows considerably once factors such as multi-threading and multi-processor machines are taken into account. Minix is a popular example of a micro-kernel architecture.
- A Monolithic kernel, as the name implies, is one in which all of the kernel's functions run in a single address space, which simplifies development and improves performance. The Linux kernel is a popular example of a monolithic kernel.
- A Hybrid kernel combines elements of both approaches, keeping most services in a single kernel address space while structuring some of them as separate, micro-kernel-style servers. The Windows NT kernel and the XNU kernel used by Mac OS X are commonly cited examples.
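The following toy C program sketches the message-passing idea described in the micro-kernel entry above. It is not taken from any real micro-kernel; the message format, the "kernel" dispatch routine, and the video-driver handler are all hypothetical names chosen only to show how a sub-system might request a resource by sending a message rather than touching hardware directly.

```c
/* Toy model of micro-kernel style message passing.
 * All names (msg_t, kernel_dispatch, video_driver_handle) are hypothetical. */
#include <stdio.h>
#include <string.h>

typedef enum { OP_READ, OP_WRITE } op_t;

typedef struct {
    const char *sender;    /* which sub-system sent the message   */
    const char *target;    /* which service should receive it     */
    op_t        op;        /* requested operation                 */
    char        payload[32];
} msg_t;

/* A user-space "driver" service: it never touches hardware here,
 * it only reacts to messages routed to it. */
static void video_driver_handle(const msg_t *m)
{
    printf("[video driver] %s from %s: %s\n",
           m->op == OP_WRITE ? "write" : "read", m->sender, m->payload);
}

/* The "kernel" does little more than route messages to the right service. */
static void kernel_dispatch(const msg_t *m)
{
    if (strcmp(m->target, "video") == 0)
        video_driver_handle(m);
    else
        printf("[kernel] no such service: %s\n", m->target);
}

int main(void)
{
    msg_t m = { .sender = "window manager", .target = "video", .op = OP_WRITE };
    strcpy(m.payload, "draw rectangle");
    kernel_dispatch(&m);   /* the sub-system "passes a message" to the kernel */
    return 0;
}
```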
Some kernels are tied to one set of drivers or one user interface, while others are interchangeable in one or both. The Microsoft Windows and Macintosh OS series have only one user interface per kernel, but can interchange drivers to work with different types of hardware. In comparison, the BSD and Linux kernels are tied to no particular hardware or user interface, and several different drivers and interfaces are available for them.
Drivers
Drivers define methods for accessing hardware in terms a particular operating system can handle. They are generally written by the hardware manufacturer, which means the manufacturer effectively decides which operating systems its products support.
Drivers are often loaded at boot time to ensure correct operation of all hardware, which means such hardware can only be changed while the computer is off. Plug-and-play hardware, however, can have its driver loaded into memory as it is plugged in, as long as the driver has already been installed. If drivers are visible from user space, as with the kernel module concept in Linux, a user with sufficient access rights can also unload one driver and load another.
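As a rough illustration of "methods for accessing hardware in terms the operating system can handle", the sketch below defines a hypothetical driver as a table of function pointers that the rest of the system calls without knowing anything about the underlying device. The structure and names (hw_driver, dummy_disk) are invented for this example; real driver models, such as Linux kernel modules, are considerably more involved, but they follow the same basic idea and, as noted above, can be loaded and unloaded at run time.

```c
/* Hypothetical driver interface: the OS sees only these function pointers,
 * never the device-specific details behind them. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    const char *name;
    int  (*init)(void);                           /* prepare the device     */
    long (*read)(void *buf, size_t len);          /* read bytes from device */
    long (*write)(const void *buf, size_t len);   /* write bytes to device  */
    void (*shutdown)(void);                       /* release the device     */
} hw_driver;

/* A dummy "disk" driver implementing the interface. */
static int  dummy_init(void)                         { puts("disk: init");     return 0; }
static long dummy_read(void *buf, size_t len)        { (void)buf; return (long)len; }
static long dummy_write(const void *buf, size_t len) { (void)buf; return (long)len; }
static void dummy_shutdown(void)                     { puts("disk: shutdown"); }

static const hw_driver dummy_disk = {
    "dummy disk", dummy_init, dummy_read, dummy_write, dummy_shutdown
};

int main(void)
{
    char buf[16];
    const hw_driver *drv = &dummy_disk;   /* "loading" the driver   */
    drv->init();
    printf("read returned %ld bytes\n", drv->read(buf, sizeof buf));
    drv->shutdown();                      /* "unloading" it again   */
    return 0;
}
```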
User interface
A user interface allows humans to interact with a computer. If a system is not designed to be used directly, the interface may be nonexistent, or limited to a simple interface for debugging. The two major tasks of a user interface are to provide access to core functions and to organize them into as seamless and intuitive a system as possible.
A command line-driven user interface (CLI), such as MS-DOS or one of the many Unix shells, works by parsing and executing text commands. Although CLIs have largely been phased out since the dawn of the fifth generation of computers, their low memory requirements make them useful for specialized purposes, such as computer repair, accessing a computer over a network, or performing a large number of tasks in sequence very quickly.
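Below is a minimal sketch of the "parse and execute text commands" loop at the heart of a CLI, written in C and assuming a POSIX system. It is a toy, not the implementation of any particular shell, and it omits features such as pipes, quoting, and built-in commands.

```c
/* Toy command-line loop: read a line, split it into words, run the command.
 * Assumes a POSIX system; not based on any particular shell. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char line[256];
    char *argv[32];

    for (;;) {
        printf("toysh> ");
        if (!fgets(line, sizeof line, stdin))   /* EOF (Ctrl-D) ends the loop */
            break;

        /* Parse: split the line into whitespace-separated words. */
        int argc = 0;
        char *tok = strtok(line, " \t\n");
        while (tok && argc < 31) {
            argv[argc++] = tok;
            tok = strtok(NULL, " \t\n");
        }
        argv[argc] = NULL;
        if (argc == 0)
            continue;                           /* empty line, prompt again   */

        /* Execute: run the command in a child process and wait for it. */
        pid_t pid = fork();
        if (pid == 0) {
            execvp(argv[0], argv);
            perror(argv[0]);                    /* only reached on failure    */
            _exit(127);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);
        } else {
            perror("fork");
        }
    }
    return 0;
}
```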
A graphical user interface (GUI) generally emulates the system used in the Microsoft Windows series, with control panels to handle access to system functions, icons, mouse-controlled pointers, context menus on a right click, multiple windows for multiple applications, and some analogue of the Start button. Some interfaces (such as BumpTop), while graphical, use completely different elements to present control structures.
Much rarer are voice-driven interfaces. These are usually used alongside a GUI, although some devices rely on them more heavily, either by design (such as certain GPS navigation systems) or as an experiment.
Core applications
The core applications of an operating system are the applications the computer cannot usefully be used without; lacking them, there is little difference between a computer and a television set tuned to an empty channel. In the first generations of computers, the application and the operating system were identical; a new problem or program required an entirely new operating system. Beginning with the fourth generation of computers, however, applications and kernels became distinct.
In the fourth generation of computers, the core application was the interpreter, which took in BASIC code (or, rarely, code from another programming language, such as COBOL, Fortran, or Pascal) and parsed it for output. Although coding in assembly language or machine language was possible, some code was usually interpreted, as that made development easier. (For example, a Commodore 64 program might put only its time-intensive subroutines into assembly, keeping the rest interpreted for ease of development and compiling it only at the end of production.)
In the fifth generation of computers, the core application has become the web browser. Most operating systems on the market today include less and less software; instead, they rely on access to the internet, since online content delivery makes it possible for a code fix to be created, tested, and released in a matter of hours or days instead of weeks or months. Additionally, if a person has a functional web browser, no matter how stark or barebones, it is possible to obtain all sorts of other software, making the inclusion of any other software a cost-increasing "frill."