Windows NT, for what? Windows NT is too big for the average desktop PC, so what is it good for? It is aimed at the workstation and small server markets, and Microsoft believes NT is the product to win them with. NT may look like Windows 3.1, but the appearance is superficial: the underlying operating system bears no resemblance whatsoever to Windows. It is of a microkernel design and is truly all 32-bit. The interface is also a source of user complaints; users are demanding a new interface like that of Windows 95. Microsoft played it smart, though: it decided to experiment with the unproven interface in Windows 95 to see how stable it is, and will most likely port it to NT once it is as stable as their clients require.
In 1988, Microsoft hired David Cutler, the designer of Digital Equipment Corp.'s VMS operating system. At Digital, Cutler had been working on the Prism project. The project was canceled at roughly the same time that Microsoft was looking for people to build its own next-generation operating system; soon Cutler and many Digital engineers found themselves at Microsoft with carte blanche to build it.
Microsoft expected only a portable version of OS/2, but Cutler's crew did not intend to stop there. They targeted workstation-class machines from the beginning. Given Cutler's history and the target platform, there seems little doubt that the NT group, if not Microsoft as a whole, intended the NT operating system to compete with UNIX from the beginning. It is also interesting to note that "NT is also more than partially based on a variant of Unix called Mach, developed at Carnegie-Mellon University,"1 as is NextStep from NeXT Computer Inc.
Portability is the single largest factor in UNIX's current popularity among workstation vendors. It has a layered architecture and is written in a relatively architecture-neutral language; these features, and relatively cheap source-code availability, have made UNIX an excellent choice for many vendors looking for a fast way to get an operating system on their new hardware.
The NT group's primary goal was to produce a portable operating system. At the time, the most promising operating system research centered on "microkernel" technology: the idea that the best way to build an operating system is to produce a tiny kernel with minimal services and, on top of that, run processes that provide the bulk of operating system services.
This scheme had several advantages over previous "monolithic" designs. One was that a small operating system was easier to port simply because there was less of it. Unfortunately, this design has a cost: communication between the operating system components is much slower than with a traditional operating system's integrated design. This is one reason why, if you compare Linux and NT on comparable machines, Linux will almost always be faster. The NT crew, convinced that this penalty could be minimized while the benefits would remain substantial, chose to build NT using microkernel techniques.
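The cost described above can be sketched in a few lines of Python (a toy model, not NT's actual mechanism): in a monolithic kernel a service is a direct function call, while in a microkernel the same request must be packaged as a message, handed to a server, and answered with a reply message.

```python
import queue

# Monolithic design: a service is just a direct function call.
def read_block_direct(block_no):
    return f"data@{block_no}"

# Microkernel design: the same request becomes a message to a
# separate file-system server, plus a reply message back.
requests, replies = queue.Queue(), queue.Queue()

def fs_server_step():
    op, block_no = requests.get()             # receive the request
    replies.put(read_block_direct(block_no))  # do the work, send reply

def read_block_via_ipc(block_no):
    requests.put(("read", block_no))  # send message
    fs_server_step()                  # (in NT the server runs separately)
    return replies.get()              # receive reply
```

Both paths return the same data; the microkernel path simply pays for two message hand-offs per request, which is the overhead the NT team set out to minimize.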
The NT we see today shows just how committed the team was to portability. The microkernel is actually split into two pieces. The lowest layer is the HAL, or Hardware Abstraction Layer. Its job is to manage certain aspects of the raw hardware (such as cache coherency in a multiprocessor system) so that the operating system need only know about relatively high-level abstractions. This allows vendors flexibility in hardware design within a CPU architecture without requiring each to maintain its own variant of the operating system. All that is required is a different HAL. This is almost identical in intent to the UNIX device switch, which is used to provide an abstraction to physical or virtual devices. On top of the HAL is the kernel itself, which is semi-portable. A lot of kernel functionality carries over in a port, but some sections may require significant work to achieve the best possible performance on each new architecture.
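The HAL idea can be illustrated with a small Python sketch (the platform classes and primitive names below are invented for illustration): each HAL supplies the same small set of hardware primitives, and the kernel above calls only those names, never the hardware directly.

```python
# Each HAL implements the same primitives for its own hardware.
# Class and method names here are hypothetical, for illustration only.
class HAL:
    def flush_cache(self): raise NotImplementedError
    def send_interrupt(self, cpu): raise NotImplementedError

class GenericPCHal(HAL):
    def flush_cache(self):
        return "wbinvd"                 # x86-style cache flush
    def send_interrupt(self, cpu):
        return f"APIC->cpu{cpu}"

class PowerPCBoardHal(HAL):
    def flush_cache(self):
        return "dcbf loop"              # PowerPC-style cache flush
    def send_interrupt(self, cpu):
        return f"OpenPIC->cpu{cpu}"

def kernel_reschedule(hal, cpu):
    # Portable kernel code: no mention of APICs or x86 instructions.
    hal.flush_cache()
    return hal.send_interrupt(cpu)
```

A new board design needs only a new HAL class; `kernel_reschedule` is untouched, which is the same economy the UNIX device switch provides for drivers.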
NT does appear to be quite portable: the 3.51 release includes full support for four of the six most popular computer architectures, including PowerPC, an impressive achievement for an operating system not yet two years old.
What it includes: While NT's history and design are interesting, far more valuable to most people is whether or not the product actually works and what features it provides.
NT supplies a wide selection of features typically found in workstation or mainframe operating systems.
The file system
The traditional UNIX file system (UFS) achieved much of its well-known performance through extensive caching. Today most UNIX systems, including Solaris, use the BSD Fast File System (FFS), which improves on the traditional design by attempting to optimize file data blocks for low access latency. The combination of caching and block latency scheduling gives modern UNIX excellent general-purpose file system performance. Unfortunately, extensive caching causes some reliability problems: changes made to the file system cache but not yet written to the physical disks are lost when the system crashes or is unexpectedly powered down. To handle this problem, UNIX provides a complex file system integrity checker and repair utility, fsck, which laboriously verifies the consistency of file systems at each reboot, a time-consuming process in today's world of multigigabyte disk arrays.
In 1990, IBM introduced AIX, which used the Journaled File System (JFS), a file system implemented with a transactional model long used in database systems. Journaling turns every file system modification into a multistep process: write the intention to change the file system into a log, change the file system, and clean up the log. The result is excellent reliability: if the write is interrupted before completion, the log provides all the details necessary to complete it; if the log was never written, it is as if the operation never occurred.
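The three-step journaling protocol above can be sketched in Python (a toy in-memory model, not JFS or NTFS internals): log the intent, apply the change, clean up the log, with a simulated crash to show why recovery works.

```python
# Toy journaled update: the intent is logged before the data is
# touched, so recovery after a "crash" can redo or discard it.
log = {}    # the journal: txid -> pending change
disk = {}   # the real file system state

def journaled_write(txid, key, value, crash_after_log=False):
    log[txid] = (key, value)      # step 1: write intent to the log
    if crash_after_log:
        return                    # simulated power loss
    disk[key] = value             # step 2: change the file system
    del log[txid]                 # step 3: clean up the log

def recover():
    # Replay every logged-but-unfinished operation at "reboot".
    for txid, (key, value) in list(log.items()):
        disk[key] = value
        del log[txid]
```

Recovery only has to scan the (small) log, not the whole disk, which is exactly why a journaled file system avoids the fsck pass described above.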
AIX uses caching to improve performance, but AIX integrates the file system cache with the general virtual memory system, allowing the filesystem to use as much memory as possible to achieve the best possible performance. (SVR3 and traditional BSD used fixed filesystem caches, but most modern UNIX variants, including SunOS, similarly integrate the filesystem cache and VMM.) AIX demonstrated that journaling in combination with extensive caching was both fast and reliable. It is notable that Windows 95 also uses this type of caching and VMM management.
Windows NT's filesystem (NTFS) and cache design closely mirror those of AIX. NTFS is transaction-based and fully integrated with the virtual memory subsystem. As with AIX, the result is a very fast, robust filesystem well-suited to mission-critical situations.
NT's robustness features go beyond those found in the file system. Its basic package provides both disk mirroring and disk striping, features typically found in UNIX as add-ons. Disk mirroring simply duplicates a disk for data redundancy. Disk striping is the process of splitting I/O operations across multiple drives in order to increase bandwidth, allowing several small, inexpensive drives to approach the bandwidth of much more expensive counterparts.
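The bandwidth gain from striping comes from simple block arithmetic, sketched here in Python (a minimal RAID-0-style model; real stripe units are larger than one block):

```python
# RAID-0-style striping: logical block n lands on drive n % N at
# offset n // N, so consecutive blocks hit different drives.
def stripe(block, n_drives):
    return block % n_drives, block // n_drives   # (drive, offset)

def drives_touched(first_block, count, n_drives):
    # How many drives serve a run of consecutive blocks.
    return len({stripe(b, n_drives)[0]
                for b in range(first_block, first_block + count)})
```

A four-block sequential read on a four-drive stripe set touches all four drives at once, which is where the aggregate bandwidth comes from.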
Outside of its reliability characteristics, Windows NT's file system has two major distinctions from its UNIX counterparts. The first is case independence.
File and directory names retain the case they are created with, but any reference to the name is case-independent. This preserves the information content of mixed-case naming (e.g., ReadMe) without the confusion it often creates (readme, ReadMe and README are all different files under UNIX).
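Case preservation combined with case-independent lookup can be modeled in a few lines of Python (a toy directory, not NTFS's actual on-disk structure): store the name as given, but key lookups on a case-folded form.

```python
# Case-preserving, case-insensitive directory, NTFS-style.
class Folder:
    def __init__(self):
        self._entries = {}            # folded name -> (stored name, data)

    def create(self, name, data):
        self._entries[name.lower()] = (name, data)

    def lookup(self, name):
        # Any capitalization finds the file; the original case
        # chosen at creation time comes back unchanged.
        return self._entries[name.lower()]
```

So `ReadMe`, `readme` and `README` all name the same file, yet a directory listing still shows the mixed-case `ReadMe` the user typed.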
The second major distinction is file attribute streams. Every file can have a set of parallel data streams associated with it, accessed using a variation on the file name. This is a generic implementation of the Apple Computer Inc. Macintosh "fork" concept, which is used to associate a file with other attributes such as an icon, and will doubtless be useful for any application that needs to associate extra information with a file (e.g., a log of file-specific backup information, or the application that created the file).
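Attribute streams can be sketched with the NTFS-style `file:stream` naming convention (the storage model below is an invented in-memory toy; the unnamed default stream holds the ordinary file contents):

```python
# Named attribute streams, addressed as "file:stream".
store = {}

def write(path, data):
    name, _, stream = path.partition(":")
    store[(name, stream)] = data      # "" is the default data stream

def read(path):
    name, _, stream = path.partition(":")
    return store[(name, stream)]
```

An application could, for instance, tuck provenance into `report.doc:origin` without disturbing programs that only read `report.doc` itself.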
In recent years, one of UNIX's most important features has been its flexible networking, which has rarely been matched by other operating systems. Anything intended to compete with UNIX must provide similar, if not superior, networking capabilities.
The basic NT package provides support for a number of protocols, including TCP/IP, NetBEUI and AppleTalk. In general, Microsoft has attempted to support at least basic services for each, although in most cases the support is poor. The TCP/IP package comes with a functional ftp server and usable, but not really commercial-grade, implementations of the ftp and telnet client interfaces. Noticeably missing are the Internet remote login services (telnetd and rlogind), NFS and X11. While all but rlogin have been available from third parties for some time, it is surprising to see them missing from a supposedly Internet-aware operating system, and their omission severely affects NT's ability to integrate with existing UNIX networks.
NT's most glaring networking limitations are in routing services. While it easily supports multiple network interfaces, only static routing is possible, and even that is not well supported. There is no packet filtering whatsoever. These limitations make NT virtually useless as a gateway system or firewall.
Where NT really shines is in serial line networking, provided through the Remote Access Service, or RAS. NT has fully functional implementations of SLIP, CSLIP and PPP, along with an interface to make configuration and usage fairly straightforward. Compared with setting up these services on UNIX systems, NT is a breeze.
Another way in which NT more closely resembles UNIX operating systems than PCs is in its support for security. While NT may seem truly multiuser, the absence of per-user disk quotas undermines this idea somewhat; third-party programs do, however, allow quotas to be allocated to users. Users have authenticated accounts, and system services, files, processes and even threads have ownership and permission attributes.
NT extends the UNIX user/group/other security attributes using access control lists (ACLs), which specify a series of "allow" and "deny" attributes for operations appropriate to each of the protected objects. While slightly more complicated than UNIX's traditional security mechanism, ACLs are also substantially more flexible, particularly for users who do not have administrator privileges. A number of UNIX variants offer ACL support, but this is the exception rather than the rule.
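A minimal ACL check can be sketched in Python. This is a simplification: real NT walks the access-control entries in order, but a common resulting rule, assumed here, is that an explicit "deny" overrides an "allow".

```python
# Toy ACL: a list of (kind, principal, operations) entries.
# Assumed evaluation rule: any matching "deny" wins over "allow".
def access_allowed(acl, user, groups, op):
    principals = {user} | set(groups)
    matching = [(kind, ops) for kind, who, ops in acl
                if who in principals]
    if any(kind == "deny" and op in ops for kind, ops in matching):
        return False
    return any(kind == "allow" and op in ops for kind, ops in matching)

report_acl = [
    ("allow", "staff",   {"read", "write"}),
    ("deny",  "interns", {"write"}),
]
```

This shows the flexibility the article describes: a member of both `staff` and `interns` keeps read access but loses write access, something UNIX's three-way user/group/other bits cannot express directly.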
NT provides C2-level security by totally separating processes in memory, preventing other processes from reusing a process's memory contents.7 That is one reason why, when you delete something in NT, it is gone. The disk subsystem also works similarly. While this may provide high security, it also forces the user to decide whether a specific file should really be deleted, and to make frequent backups.
Most other distributed file systems manage security differently. Users who intend to use a network resource (such as a file system or printer) must be individually authenticated by the remote system, thus making account management the job of the administrator of the individual network resource rather than some central organization.
NT attempts to allow both distributed account management and single authentication by introducing the domain, a set of machines that use a centralized authentication authority. All members of a domain implicitly trust the authority of domain servers, so logging into a domain as a user gives access to all resources the user is entitled to within that domain without additional authentication. This is essentially identical to the NIS approach, except that a user may have accounts on multiple domains and may log into outside domains to access their resources as well. This technique allows account management to be handled at any granularity from the single machine up to the entire enterprise, a significant improvement over the NFS/NIS approach.
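The domain scheme can be modeled in Python (a toy single-sign-on sketch with invented names; real NT domain authentication is a challenge/response protocol, not a password handed to each server): one logon at the domain controller yields a token that every member server trusts.

```python
# Toy domain logon: authenticate once, use the token everywhere.
class DomainController:
    def __init__(self, accounts):
        self._accounts = accounts       # user -> password

    def logon(self, user, password):
        if self._accounts.get(user) == password:
            return ("DOMAIN-TOKEN", user)   # trusted domain credential
        return None

class MemberServer:
    # A file or print server in the domain: it accepts the domain
    # token instead of keeping its own password database.
    def open_share(self, token):
        if token and token[0] == "DOMAIN-TOKEN":
            return f"share opened for {token[1]}"
        raise PermissionError("domain logon required")
```

Account management thus lives on the controller alone; adding a server to the domain adds no new password databases, which is the administrative win the article describes.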
NT breaks up the Windows 3.1 graphics subsystem into a client library and a graphics server, much as X11 does. Unfortunately, NT only supports access to the graphics system through local procedure calls, so it remains node-locked. It is expected that Microsoft will someday add distributed graphics support to NT via RPC mechanisms, and some vendors already offer the capability as an add-on. But right now almost all NT systems are crippled in the same way as Windows 3.1.
In addition to the Windows 3.1 graphics interface, NT supports OpenGL, the 3D graphics interface based on Silicon Graphics Inc.'s popular GL interface. In theory, this provides the same 3D graphics capabilities available on high-end UNIX workstations, but in practice hardware limitations on most PC platforms (particularly the poor floating-point performance of the Intel x86 chips) cause the performance of OpenGL on NT to be quite poor. However, this is expected to change over the next year as several hardware vendors introduce 3D hardware accelerators at fairly low prices.
Overall, Windows NT proves to be a worthy competitor to UNIX, and it even incorporates some UNIX compatibility, though that compatibility is limited at best; it has been suggested that it was included just to win government contracts that require POSIX compliance.4