Real World Distributed Operating System Case Study: Plan 9





Bryan Kinney

March 21, 2005

CPSC450/550 Operating System II

Department of Physics, Computer Science and Engineering

Christopher Newport University


Introduction


A distributed system can be defined as a system whose components are computers connected by a network, communicating and coordinating their actions only by passing messages [1]. Concepts to consider when designing a distributed system include how a user interacts with the system, how the system manages resources, and how it deals with errors. The major design considerations are (from [1]):

Heterogeneity: The system must be constructed from a variety of different networks, operating systems, computer hardware and programming languages.

Openness: Distributed systems should be extensible and provide published interfaces.

Security: Provide adequate protection of shared resources and keep sensitive information safe when transmitting messages.

Scalability: Provide a system in which the cost of supporting an additional user is consistent in terms of the resources that must be added.

Failure handling: Provide a system that can deal with the failure of any one of its components.

Concurrency: Provide safe concurrent access to shared resources.

Transparency: Provide a single system image that allows users to concern themselves only with their application, not with aspects of the system such as network topology and CPUs.

This case study compares the definition and functionality of the Plan 9 distributed operating system against the definition and design considerations of the description of a distributed system given above.

History


During the mid-1980s, centralized timesharing-based computing was the standard. The centralized computer system was a mainframe, and users accessed its timesharing capabilities via terminals. The mainframe model allowed for a central point of administration and configuration. At the time of Plan 9's conception in the late 1980s, the trend in computing was heading toward decentralized computing, in which more cost-effective computing systems existed as workstations accessible to the user at the desktop. A user could have a personal computer and utilize its resources fully, as opposed to timesharing on a mainframe. The typical operating system on these workstations was UNIX. The creators of Plan 9 observed that the use of UNIX in the personal computing environment did not take full advantage of these resources. The early focus on having private machines made it difficult for networks of machines to serve as seamlessly as the old monolithic timesharing systems. Timesharing centralized the management and amortization of costs and resources; personal computing fractured, democratized, and ultimately amplified administrative problems. The choice of an old timesharing operating system to run those personal machines made it difficult to bind things together smoothly [2].

Plan 9 was developed by Bell Labs in Murray Hill, New Jersey, starting in the late 1980s. By 1989 the system had matured enough that the researchers at Bell Labs were able to use it as their exclusive computing environment. There have since been three further releases, the latest being the fourth. Plan 9's fourth release was made available to the public in April 2002 and was updated in June 2003. It is available in several media formats (source code included) from the Bell Labs Plan 9 distribution website: http://cm.bell-labs.com/plan9dist [2].


Goals


The motivation for Plan 9's creation was to have a system that was centrally administered and cost effective, using cheap modern microcomputers as its components. The main idea was to build a timesharing system out of workstations: cheap machines would act as terminals that access large central shared resources such as computing servers and file servers. The philosophy behind Plan 9 was to build a UNIX out of a lot of little systems, not a system out of a lot of little UNIXes. The creators of Plan 9 kept the ideas they considered advantages of the UNIX operating system, such as a file system used to coordinate naming of and access to resources, even those, such as devices, not traditionally treated as files, and discarded the ideas of UNIX they considered inadequate in creating a usable system [2].

The Plan 9 Distributed Operating System is a complete operating system designed from the ground up. By integrating some of the services and concepts from the older UNIX system, its creators were also able to provide a way to use applications from their UNIX systems in the new environment. During this integration they chose to address issues they considered UNIX had addressed poorly. By designing and implementing new compilers, languages, libraries, window systems, and many new applications, they were able to provide an all-encompassing system with clean functionality. In a completely new system, problems could be solved as the implementers thought best. One example of an improvement over the old UNIX system is the tty driver: the designers considered its job to belong to the window system, not the kernel. Considering the computing environment the system provides to be the most important test, the designers were most interested in whether the new ideas suggested by the architecture of the underlying system encourage a more effective way of working than the older systems did [2].

Other goals of implementing a brand-new system included having sufficient access to the source code that device drivers could be implemented more easily. Also, having implemented the system themselves, they could redistribute it as they wished [2].

Definitions of Plan 9


Plan 9: Plan 9 is the name of the distributed operating system designed and implemented by researchers at Bell Labs in Murray Hill, New Jersey. It is available for download from the Plan 9 website [7].

9Grid: 9Grid is the name of a Plan 9 installation that provides grid-style computing. More information can be found at the 9Grid website [8].

8½: The Plan 9 window system. It provides textual I/O and bitmap graphic services to both local and remote client programs by offering a multiplexed file service to those clients. It serves traditional UNIX files like /dev/tty as well as more unusual ones that provide access to the mouse and the raw screen. Bitmap graphics operations are provided by serving a file called /dev/bitblt that interprets client messages to perform raster operations [9].

Rc: Rc is a command interpreter for Plan 9 that provides similar facilities to UNIX's Bourne shell, with some small additions and less idiosyncratic syntax [10].

9P Protocol: The 9P protocol is the Plan 9 file system protocol. It is structured as a set of transactions that send a request from a client to a (local or remote) server and return the result [2].

IL Protocol: IL is a custom network protocol implemented to transport 9P remote procedure call messages. It is a connection-based, lightweight transport protocol that carries datagrams encapsulated in IP. IL provides retransmission of lost messages and in-sequence delivery, but has no flow control and no blind retransmission [6].

Factotum: Factotum is the central component of the security architecture. Factotum securely holds a copy of the user’s keys and negotiates authentication protocols, on behalf of the user, with secure services around the network [5].

Features


The system is built on three principles. The first concerns how resources are accessed: resources are named and accessed like files in a hierarchical file system. The second concerns how components communicate: communication is handled with a custom protocol named 9P. The third concerns how disjoint hierarchies of services are joined: the disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space. Another important feature of the Plan 9 implementation is built-in security. Plan 9 also provides an environment suitable for parallel programming.

Resources as files


All resources in Plan 9 look like file systems. That does not mean that they are repositories for permanent files on disk, but that the interface to them is file-oriented: finding files (resources) in a hierarchical name tree, attaching to them by name, and accessing their contents by read and write calls. Every resource in the system, either local or remote, is represented by a hierarchical file system; and a user or process assembles a private view of the system by constructing a file name space that connects these resources [4]. This allows the user to access local and remote files in the same manner. When writing a program, the user does not need to write code that handles the case in which a file is not local; those details are abstracted by the system.

Communication: Message Passing with 9P


The 9P protocol is structured as a set of transactions that send a request from a client to a (local or remote) server and return the result. 9P controls file systems, not just files: it includes procedures to resolve file names and traverse the name hierarchy of the file system provided by the server. On the other hand, the client’s name space is held by the client system alone, not on or with the server. Also, file access is at the level of bytes, not blocks, which distinguishes 9P from protocols like NFS and RFS. This approach was designed with traditional files in mind, but can be extended to many other resources. Plan 9 services that export file hierarchies include I/O devices, backup services, the window system, network interfaces, and many others. One example is the process file system, /proc, which provides a clean way to examine and control running processes. Plan 9 pushes the file metaphor much further. The file system model is well-understood, both by system builders and general users, so services that present file-like interfaces are easy to build, easy to understand, and easy to use. Files come with agreed-upon rules for protection, naming, and access both local and remote, so services built this way are ready-made for a distributed system. (This is a distinction from object oriented models, where these issues must be faced anew for every class of object.) [2]

Security


Security has three concentrations: authenticating users and services; the safe handling, deployment, and use of keys and other secret information; and the use of encryption and integrity checks to safeguard communications from prying eyes. Security in Plan 9 is handled by a per-user, self-contained agent called factotum. Factotum securely holds a copy of the user's keys and negotiates authentication protocols, on behalf of the user, with secure services around the network [5]. Figure 1 shows the components of the security architecture. Each box is a (typically) separate machine; each ellipse a process. The ellipses labeled FX are factotum processes; those labeled PX are the pieces and proxies of a distributed program. The authentication server is one of several repositories for users' security information that factotum processes consult as required. Secstore is a shared resource for storing private information such as keys; factotum consults it for the user during bootstrap [5].

Figure 1 Components of the Security Architecture [5]


Parallel Programming


Plan 9 supports parallel programming in two ways. First, the kernel provides a simple process model and a few carefully designed system calls for synchronization and sharing. Second, a new parallel programming language called Alef supports concurrent programming. Although it is possible to write parallel programs in C, Alef is the parallel language of choice [2]. Plan 9 provides a system call called rendezvous that gives processes a way to synchronize. Alef uses it to implement communication channels, queuing locks, multiple reader/writer locks, and the sleep and wakeup mechanism. Rendezvous takes two arguments, a tag and a value. When a process calls rendezvous with a tag it sleeps until another process presents a matching tag. When a pair of tags match, the values are exchanged between the two processes and both rendezvous calls return. This primitive is sufficient to implement the full set of synchronization routines [2].

Structure


A Plan 9 system comprises file servers, CPU servers and terminals. The file servers and CPU servers are typically centrally located multiprocessor machines with large memories and high speed interconnects. A variety of workstation-class machines serve as terminals connected to the central servers using several networks and protocols. The architecture of the system demands a hierarchy of network speeds matching the needs of the components. Connections between file servers and CPU servers are high bandwidth point-to-point fiber links. Connections from the servers fan out to local terminals using medium speed networks. Low speed connections via the Internet and the AT&T backbone serve users in Oregon and Illinois. Basic Rate ISDN data service and 9600 baud serial lines provide slow links to users at home [3].

Since CPU servers and terminals use the same kernel, users may choose to run programs locally on their terminals or remotely on CPU servers. The organization of Plan 9 hides the details of system connectivity, allowing both users and administrators to configure their environment to be as distributed or centralized as they wish. Simple commands support the construction of a locally represented name space spanning many machines and networks. At work, users tend to use their terminals like workstations, running interactive programs locally and reserving the CPU servers for data- or compute-intensive jobs such as compiling and computing chess endgames. At home or when connected over a slow network, users tend to do most work on the CPU server to minimize traffic on the slow links. The goal of the network organization is to provide the same environment to the user wherever resources are used [3].


How to use


Using Plan 9 follows the paradigm of distributed computing. A user accesses the Plan 9 system through an access point such as a personal computer or another device such as a Personal Digital Assistant (PDA). The features of Plan 9 allow the user to access system resources transparently, without worrying about where a file or resource is located in the system.

Programming Example


The following are some programming examples that show how to use the Plan 9 system.

Example: listening for incoming connections (from [3]).


A program uses four routines to listen for incoming connections. It first announce()s its intention to receive connections, then listen()s for calls and finally accept()s or reject()s them. Announce returns an open file descriptor for the ctl file of a connection and fills dir with the path of the protocol directory for the announcement.

int announce(char *addr, char *dir)

Addr is the symbolic name/address announced; if it does not contain a service, the announcement is for all services not explicitly announced. Thus, one can easily write the equivalent of the inetd program without having to announce each separate service. An announcement remains in force until the control file is closed. Listen returns an open file descriptor for the ctl file and fills ldir with the path of the protocol directory for the received connection. It is passed dir from the announcement.

int listen(char *dir, char *ldir)

Accept and reject are called with the control file descriptor and ldir returned by listen. Some networks such as Datakit accept a reason for a rejection; networks such as IP ignore the third argument.

int accept(int ctl, char *ldir)

int reject(int ctl, char *ldir, char *reason)

The following code implements a typical TCP listener. It announces itself, listens for connections, and forks a new process for each. The new process echoes data on the connection until the remote end closes it. The "*" in the symbolic name means the announcement is valid for any addresses bound to the machine the program is run on.

int
echo_server(void)
{
    int afd, dfd, lcfd;
    char adir[40], ldir[40];
    char buf[256];
    int n;

    afd = announce("tcp!*!echo", adir);
    if(afd < 0)
        return -1;
    for(;;){
        /* listen for a call */
        lcfd = listen(adir, ldir);
        if(lcfd < 0)
            return -1;
        /* fork a process to echo */
        switch(fork()){
        case 0:
            /* accept the call and open the data file */
            dfd = accept(lcfd, ldir);
            if(dfd < 0)
                return -1;
            /* echo until EOF */
            while((n = read(dfd, buf, sizeof(buf))) > 0)
                write(dfd, buf, n);
            exits(0);
        case -1:
            perror("forking");
        default:
            close(lcfd);
            break;
        }
    }
}

Applications


Plan 9 was developed as a capable computing environment and has been used regularly as the primary computing environment by some researchers at Bell Labs. It is also a research tool and may give insight into new ways of doing things given the paradigm of its construction. Although not widely adopted outside the research arena, parts of its philosophy have been applied to other environments, specifically grid computing.

The Advanced Computing Cluster Research Lab at Los Alamos National Laboratory [11] uses Plan 9 for a secure grid environment: "The goal of 9grid [12] is to utilize the distributed features of the Plan 9 operating system to create a tightly-coupled Grid environment in which running applications can cross the boundaries of the local cluster or institution and utilize resources around the globe or even further away. A distinguishing feature of 9grid is its security model. Plan 9 is a far more secure system, from the ground up, than any Unix system ever built. There is no need for add-ons such as firewalls to make Plan 9 Grid-capable. As users attach to nodes on the 9grid, their entire file system name space is visible from all the nodes which they are using -- and invisible to anyone else."


Significance of points


The Plan 9 distributed operating system fits most of the description of a distributed system as given in [1]. Plan 9 uses the 9P protocol to pass messages for communication. Plan 9 addresses heterogeneity, but not fully: it is constructed from a variety of different networks, hardware and programming languages; however, it does not fully address the use of different operating systems, because it is its own operating system and needs no toolkit for grid computing or other services. Having been released to the public and offered freely for download on the Plan 9 website, Plan 9 fulfills the concern of openness by providing a well-documented system and source code. Plan 9 addresses security from the ground up with a custom security model, and is considered a secure environment by Los Alamos National Laboratory. Plan 9 has facilities for failure handling built in, such as queuing 9P messages in the event of a network failure. Plan 9 provides an environment for concurrent access to shared resources by design. It also provides user-customizable views of the system, such that servers are transparent and can appear as a single system image.

Summary


The Plan 9 Distributed Operating System is a Distributed System which gives its users a single system image of the system of networks and hardware it manages. Plan 9 gives users the tools for conventional programming and parallel programming. Plan 9 is a distributed system built from the ground up incorporating distributed concepts and encouraging future research of concepts developed through using distributed computing environments.

References


  1. Coulouris, George, Jean Dollimore, and Tim Kindberg. Distributed Systems: Concepts and Design. Pearson Education Limited, 2001.

  2. Pike, Rob, Dave Presotto, Sean Dorward, Bob Flandrena, Ken Thompson, Howard Trickey, and Phil Winterbottom. Plan 9 from Bell Labs. http://plan9.bell-labs.com/sys/doc/index.html

  3. Presotto, Dave, and Phil Winterbottom. The Organization of Networks in Plan 9. http://plan9.bell-labs.com/sys/doc/index.html

  4. Pike, Rob, Dave Presotto, Sean Dorward, Bob Flandrena, Ken Thompson, Howard Trickey, and Phil Winterbottom. The Use of Name Spaces in Plan 9. http://plan9.bell-labs.com/sys/doc/index.html

  5. Cox, Russ, Eric Grosse, Rob Pike, Dave Presotto, and Sean Quinlan. Security in Plan 9. http://plan9.bell-labs.com/sys/doc/index.html

  6. Presotto, Dave, and Phil Winterbottom. The IL Protocol. http://plan9.bell-labs.com/sys/doc/index.html

  7. Plan 9 website: http://plan9.bell-labs.com/plan9dist/index.html

  8. 9Grid website: http://plan9.bell-labs.com/9grid/index.html

  9. Pike, Rob. 8½, the Plan 9 Window System. http://plan9.bell-labs.com/sys/doc/index.html

  10. Duff, Tom. Rc: The Plan 9 Shell. http://plan9.bell-labs.com/sys/doc/index.html

  11. Advanced Computing Cluster Research Lab, Los Alamos National Laboratory. http://public.lanl.gov/cluster/projects/index.html

  12. 9Grid (LANL): http://www.9grid.net
