
Bibliography Agents – The Multi-agent Community
張碧娟 Pi-Chuan Chang, 余家興 Chia-Hsing Yu,

黃振修 Chen-Hsiu Huang, 葉人豪 Jen-Hao Yeh,

Department of Computer Science and Information Engineering

National Taiwan University, Taipei, Taiwan, ROC

ABSTRACT


  1. INTRODUCTION

Finding useful information on the web is an interesting topic. There are benefits in using a multi-agent system for this kind of job, because whenever a more intelligent agent joins, the search power of the whole community increases.

The ultimate goal is to find the publication given a complete or partial bibliography entry from user input. There are also agents that perform data post-processing (such as clustering and format conversion) to present more refined information to the user. In our work, we present a flexible agent architecture that enables agents with new (or improved) skills to join the community.

In Section 2, we describe the way our agents communicate and coordinate, and an exemplary workflow is shown in Section 3. Each agent in the community is described in detail in Section 4. Technical issues we encountered while implementing the system are discussed in Section 5, and Section 6 explains how to set up the agents in our community. Finally, all references are listed in Section 7.


  2. AGENT COMMUNICATION AND COORDINATION

2.1. Communication Language

Our agents communicate in a reduced version of KQML, and we chose Scheme as the language for the message content. The KQML performatives implemented in our system are described below:



    • ASK
      The ASK performative is used to ask the truth of a question given in the content. It is the most common performative used in multi-agent systems.

    • TELL
      In response to the ASK performative, agents use TELL to reply to the questions being asked.

    • ADVERTISE
      Agents use this performative to advertise their capabilities to the facilitator.

    • RECRUIT
      The RECRUIT performative is used to tell the facilitator what the agent needs from other agents.

One benefit of using Scheme is that we can treat KQML as a specialized version of Scheme, so only one parser is needed for agent communication. Besides, Scheme is simple yet powerful enough to represent the data structures used in this system. In fact, we use a reduced version of Scheme.
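Because the messages are just s-expressions, a single recursive parser can handle both the KQML envelope and the Scheme content. The following Perl fragment is a minimal sketch of that idea; the tokenizer and the nested-array representation are illustrative choices, not necessarily the parser actually used in the system.

#!/usr/bin/perl
use strict;
use warnings;

# Split a reduced-Scheme/KQML string into tokens: parentheses and atoms.
sub tokenize {
    my ($text) = @_;
    $text =~ s/([()])/ $1 /g;              # pad parentheses with spaces
    return grep { length } split /\s+/, $text;
}

# Recursively build nested array references from the token stream.
sub parse_tokens {
    my ($tokens) = @_;
    my $tok = shift @$tokens;
    die "unexpected end of input" unless defined $tok;
    if ($tok eq '(') {
        my @list;
        while (@$tokens && $tokens->[0] ne ')') {
            push @list, parse_tokens($tokens);
        }
        shift @$tokens;                     # consume the closing ')'
        return \@list;
    }
    return $tok;                            # an atom such as :sender or find-paper
}

my @tokens = tokenize('(advertise :sender B :receiver F :content (ask :content (find-paper x)))');
my $msg    = parse_tokens(\@tokens);
print "performative: $msg->[0]\n";          # prints "performative: advertise"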
2.2. Coordination Mechanism

The facilitator is the key to coordination between agents. It is an agent that maintains the capabilities of all agents in the system. When joining the system, every agent advertises its capabilities to the facilitator. An agent that needs help from other agents tells the facilitator what it is interested in, and the facilitator redirects the message to the agents that can help. New agents can therefore join the system easily without modifying other agents.

When agent B joins the system, it sends a message like this to the facilitator:

(advertise
  :sender B
  :receiver F
  :content (ask
             :content (find-paper x)))



B tells F (the facilitator) that it can find the paper described by x. When another agent A sends a recruit message to the facilitator:

(recruit
  :sender A
  :receiver F
  :content (tell
             :content (find-paper x)))

the facilitator asks B “(find-paper x)” on behalf of A. Finally, B tells A the truth of “(find-paper x).” The interactions between agents are illustrated in Figure 1.




Figure 1. Interactions between agents.


Under this architecture, agents can freely join and leave the community. Agents can have common capabilities and skills; in our system, more than four agents can find papers, each using a different information source. This repetition of capabilities improves the performance and robustness of the entire system.

Another benefit of this architecture is that agents can fork themselves when necessary. We use this technique extensively to improve the speed of fetching data from the Internet. In our experiments, there were sometimes more than fifty agents and child agents in the community at the same time.
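A rough Perl sketch of this forking pattern is shown below; the query list and the handle_query routine are placeholders rather than the actual agent code.

#!/usr/bin/perl
use strict;
use warnings;

my @queries = ('find-paper bib1', 'find-paper bib2', 'find-paper bib3');
my @children;

for my $query (@queries) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        handle_query($query);               # child agent: handle one query, then exit
        exit 0;
    }
    push @children, $pid;                   # parent keeps track of its children
}

waitpid($_, 0) for @children;               # reap every child before continuing

sub handle_query {
    my ($query) = @_;
    # Placeholder for the real work: fetch from the information source,
    # parse the result, and tell the answer back over the agent network.
    print "child $$ working on: $query\n";
}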
2.3. Underlying Network Implementation

The underlying communication network is implemented on top of TCP. The server works like a hub in Ethernet: each agent holds one TCP connection to the “hub” server, and a message sent from an agent to the server is broadcast to all other agents connected to the server.
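A minimal sketch of such a hub in Perl follows, assuming line-delimited messages and an arbitrary port number; the real server is written in C, so this only illustrates the broadcast behaviour.

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

my $listener = IO::Socket::INET->new(LocalPort => 9000, Listen => 10, Reuse => 1)
    or die "cannot listen: $!";
my $select = IO::Select->new($listener);

while (my @ready = $select->can_read) {
    for my $sock (@ready) {
        if ($sock == $listener) {
            $select->add($listener->accept);    # a new agent connects to the hub
            next;
        }
        my $line = <$sock>;                     # one line-delimited message
        if (!defined $line) {                   # the agent disconnected
            $select->remove($sock);
            close $sock;
            next;
        }
        for my $peer ($select->handles) {       # broadcast to every other agent
            next if $peer == $listener || $peer == $sock;
            print $peer $line;
        }
    }
}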




  3. WORKFLOW

The user interacts with our agent system via a web interface and posts bibliography entries in BibTeX form to ask the agent system to do the job. The overall workflow may be described as follows:

  1. The user inputs one or more BibTeX entries from the web interface (twenty in our test case).

  2. Given the input BibTeX, the Interface Agent asks the facilitator whether any agent in the community can answer the question or solve the problem.

  3. The question may be something like “where is the paper for bibliography entry xxxx?” (expressed, of course, in the agent communication language). The information source agents then try to answer the question.

  4. Information source agents know whether the supplied material is enough to accomplish their job. If the BibTeX is not regarded as complete, an agent may simply give up (e.g., a book entry without a book title) or do its best to find the paper anyway (e.g., an IEEE paper with no IEEE indication in the BibTeX fields).

Figure 2. General architecture of information source agents



  5. There are four kinds of information source agents: Publisher Agents, Bookstore Agents, Library Agents, and Search Engine Agents. Information source agents do their best to find the exact paper, not to find as many papers as possible.

  6. All of them share the common architecture shown above: first read the input BibTeX passed on by the facilitator and compose it into a source-specific query string (e.g., ACM and IEEE have their own query notations), then send the search request to the information source and wait for the response, and finally parse the result for available information (e.g., matched items or nothing found).

  7. When the Interface Agent gets an answer, it checks whether the answer contains the paper's URL. If a URL is provided, it asks the Retrieve Agent to fetch the paper's electronic file.

  8. The data of the paper are now stored in the database, which serves as the shared file store (the “NFS”) of our agent community. The data post-processing agents, such as the Format Agent, the Document Agent, and the Abstract Agent, then process the data fetched from the Internet. The Format Agent converts between several file formats, the Document Agent analyzes the text files generated by the Format Agent, and the Abstract Agent tries to extract the abstract from the text file.




  4. INDIVIDUAL AGENT DESIGN

4.1. Implementation Language and Platform

Agents in our system need not reside on one central host; they can be dispersed over different computers with heterogeneous system environments. The TCP/IP network provides a natural infrastructure for agent communication.

The implementation language is not restricted, since support for the socket programming interface is the only requirement. C/C++, Perl, and Java are all candidates, depending on the team members' personal preferences. All of them support a variety of platforms, so the overall multi-agent system can be regarded as platform independent.
4.2. User Interface Agent (UI Agent)
4.2.1. Agent Architecture

The UI Agent is a purely reactive agent; it works between the real world and the agent community. Its goal is to understand the user's input and then present the results obtained from other agents back to the user. The actions of this agent are simple:





  1. Translate the input (BibTeX here) into the agent language (KQML) and ask other agents where the paper is; it also asks for information about the author if the user requests it.

  2. Collect the answers returned by other agents and arrange them. Ask whether any agent can extract the abstract from the paper.

  3. Present the possible candidates to the user.

4.2.2. Detailed Function Specification

The interaction between the user and our agent system is achieved via a web interface. Upon visiting our site, the user can use BibTeX for querying, and our agents will use those clues to find the possible target publications.

The UI Agent translates the input into KQML format and recruits help through the facilitator to see whether any agent can help with it. After receiving the answers, the UI Agent arranges them and asks whether any agent can get the file, or the abstract of the papers, for it to show.

This agent is implemented in C on UNIX (FreeBSD) and provides the web interface as a CGI program.

It also saves the answers to this query for future use, and it reports failure to the user if nothing is found. The following image is an example of our output.



4.2.3. Interface with Other Agents

This Agent may ask/say the following to other agents:



Interface: Ask(find-paper(BibTeX))
Description: Where is the paper related to the following BibTeX?

Interface: Ask(find-author(BibTeX))
Description: Please find the homepage of the author described in the BibTeX for me.

Interface: Ask(abstract(filename))
Description: Please extract the abstract of the paper for me, if possible.




4.3. Retrieve Agent

4.3.1. Agent Architecture

The goal of the Retrieve Agent is simple: given a URL, retrieve the file from the Internet by invoking the program “wget”. After the file has been downloaded, it is inserted into the database for further processing. Of course, retrieval may fail if the link is dead or the network is in bad shape. This is also a purely reactive agent.
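The behaviour can be sketched as follows; the Retrieve Agent itself is a compiled program, so this Perl fragment only illustrates the same steps, with the target directory as a placeholder.

#!/usr/bin/perl
use strict;
use warnings;

# Try to download $url into $dir; return the local path on success, undef on failure.
sub retrieve {
    my ($url, $dir) = @_;
    my ($name) = $url =~ m{([^/]+)$};           # last path component as the file name
    $name = 'download' unless defined $name && length $name;
    my $path   = "$dir/$name";
    my $status = system('wget', '-q', '-O', $path, $url);
    return $status == 0 ? $path : undef;        # wget exits non-zero on failure
}

my $file = retrieve('http://example.org/paper.pdf', '/tmp');
print defined $file ? "saved to $file\n" : "retrieval failed\n";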
4.3.2. Interface with Other Agents

It accepts the following request:

Interface: (get URL)
Description: The Retrieve Agent tries to download the file from the given URL.

Depending on the outcome, it may answer:

Interface: tell(file-path(F))
Description: The file has been downloaded and put at path F.

Interface: tell(fail(F))
Description: The retrieval failed for reason F.




4.4. Information Source Agents (Publisher, Bookstore, Search Engine, and Library Agents)

4.4.1. Agent Architecture

We have a Google Agent, IEEE Agent, ACM Agent, Amazon Agent, NTU Library Agent, CiteSeer Agent, and SDOS Agent as our information source agents. They are purely reactive agents with internal state. Their general architecture and internal state transitions are illustrated in Figure 2, and their actions have already been described in the Workflow section (Section 3).
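The shared compose/send/parse loop can be sketched in Perl with the LWP module that these agents already rely on; the query URL and the link-scanning test below are simplified placeholders, not any source's real query notation.

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use URI::Escape qw(uri_escape);

# One round of an information source agent: compose a query from the BibTeX
# title, send it to the source, and scan the result page for a ps/pdf link.
sub find_paper {
    my ($title) = @_;
    my $ua    = LWP::UserAgent->new(timeout => 30);
    my $url   = 'http://www.example-source.org/search?q=' . uri_escape($title);
    my $reply = $ua->get($url);
    return undef unless $reply->is_success;

    for my $link ($reply->decoded_content =~ m{href="([^"]+\.(?:ps|pdf))"}gi) {
        return $link;                           # first candidate ps/pdf link
    }
    return undef;                               # nothing found
}

my $hit = find_paper('LaTeX: a document preparation system');
print defined $hit ? "candidate: $hit\n" : "no match\n";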
4.4.2. Detailed Function Specification

Function: Advertise()
Description: When the system boots, advertise the agent's capabilities to the facilitator.

Function: Receive()
Description: Constantly listen on the agent communication channel; the agent is woken up here when it is asked a question.

Function: Reply()
Description: Reply to the sender with the answer the agent has worked out.

Function: Judge()
Description: Judge whether the answer is yes or no (found or not found).

4.4.3. Interface with Other Agents


Interface: Tell(answer(P))
Target: Interface Agent
Description: Reply the answer of P to the sender.




4.5. Document Agent

4.5.1. Agent Design

The mission of the Document Agent is to analyze the documents in the database and provide useful analysis results to the user, e.g., a clustering of previously found papers.

Document clustering is definitely not an easy problem, and there are many issues to think over: for example, how to set the number of clusters and the number of documents in each cluster, and which clustering method to use. I cover these issues in the Implementation Strategies subsection (4.5.4).


4.5.2. Interface with Other Agents

The Document Agent is a hardworking agent. Most of the time it works on its own tasks, but other agents can also ask it for services. It provides two kinds of services: producing document vectors and clustering the documents.

When the computation is over, the Document Agent stores the results in the database, which every agent in the community can access.

The Document Agent may also need the file format conversion service, which is provided by the Format Agent.


4.5.3. Detailed Function Specification

    • Produce document vectors for documents in the database
      The Document Agent actively checks the existing documents in the database. If there is any unprocessed document, the Document Agent first performs word stemming (using the Porter stemming algorithm) and stop-word elimination, and then processes it into a document vector; a minimal sketch of this step appears after this list. The table below is an example.
      The actual program used by the Document Agent is doc2mat, which converts documents into the vector-space format used by CLUTO.

A document:
  This is a book.
  There are books and pencils.

Corresponding document vector:
  book 2 pencil 1




      • Ask other agents to convert the paper format to pure text
        This is a subtask of producing document vectors. Oftentimes the retrieved papers are PDF or PS files; since the Document Agent needs to do text processing, the pure text of the papers must be extracted first.

    • Cluster all documents in the database
      The Document Agent clusters all documents in the database, using the document vectors computed beforehand. This is done when other agents ask for this skill, and the Document Agent also performs it on its own every now and then.
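To make the document-vector step concrete, here is a minimal Perl sketch that turns a text file into term counts; the tiny stop-word list and the crude plural folding are illustrative stand-ins for the full stop-word list and the Porter stemmer used via doc2mat.

#!/usr/bin/perl
use strict;
use warnings;

# A tiny illustrative stop-word list; the real agent uses a full list.
my %stop = map { $_ => 1 } qw(a an and are is the there this);

# Read a text file and count the remaining terms.
sub document_vector {
    my ($path) = @_;
    open my $fh, '<', $path or die "cannot open $path: $!";
    my %count;
    while (my $line = <$fh>) {
        for my $word ($line =~ /([A-Za-z]+)/g) {
            $word = lc $word;
            next if $stop{$word};               # drop stop words
            $word =~ s/s$//;                    # crude plural folding instead of real stemming
            $count{$word}++;
        }
    }
    close $fh;
    return \%count;
}

my $vec = document_vector('sample.txt');
print join(' ', map { "$_ $vec->{$_}" } sort keys %$vec), "\n";
# For the example document above this prints: book 2 pencil 1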



4.5.4. Implementation Strategies

At first I intended to use the Bow toolkit [10], but unfortunately the document clustering front-end (crossbow) in the Bow toolkit is not bug-free. I tested some data with the crossbow program and found bugs in it, so I looked for another clustering toolkit for this task.

The clustering part of my Document Agent relies on the clustering toolkit CLUTO. There are several reasons why I chose it. First of all, it is a well-documented tool [11], which also suggests that its development is well organized. Second, it is a general clustering toolkit that provides three different classes of clustering algorithms, based on the partitional, agglomerative, and graph-partitioning paradigms. The clustering method I used is k-way clustering via repeated bisections [12]. In this approach, a k-way solution is obtained by first bisecting the entire collection; then one of the two clusters is selected and further bisected, leading to a total of three clusters. The process of selecting and bisecting a particular cluster continues until k clusters are obtained. Each bisection is performed so that the resulting two-way clustering solution optimizes a particular criterion function.

Besides its generality, CLUTO also provides several features aimed at document clustering problems; for example, I use the parameter -colmodel=idf, which scales the columns of the matrix according to the inverse document frequency (IDF) paradigm.

Although CLUTO provides many different clustering methods, I do not make use of all of them. In this project, I use CLUTO's stand-alone program vcluster, and the command is:

vcluster -colmodel=idf -clabelfile=$CLABEL $matfile $numcluster

“$matfile”, the primary input of CLUTO's vcluster program, is a matrix storing the documents to be clustered. Each row of this matrix is the document vector of a single document, and its columns correspond to the dimensions (i.e., features) of the documents.

“$CLABEL” stores the labels of all columns (features) of the document matrix.

“$numcluster” is the desired number of clusters. As mentioned before, choosing the number of clusters is itself a nontrivial problem. There are papers discussing this issue [13], but I merely apply a naive heuristic to decide the cluster number: if the number of files (with converted text) is N, the cluster number is set to N/5, and if that value is greater than 10, it is capped at 10. A sketch of this heuristic is shown below.
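The heuristic and the call to vcluster can be sketched as follows; the file names and document count are placeholders, while the vcluster options are the ones given above.

#!/usr/bin/perl
use strict;
use warnings;

# Naive heuristic from above: one cluster per five documents, capped at 10.
sub cluster_count {
    my ($n_docs) = @_;
    my $k = int($n_docs / 5);
    $k = 1  if $k < 1;                      # guard added for very small collections
    $k = 10 if $k > 10;
    return $k;
}

my $matfile    = 'papers.mat';              # placeholder document-term matrix
my $clabelfile = 'papers.clabel';           # placeholder column-label file
my $n_docs     = 42;                        # normally counted from the database
my $k          = cluster_count($n_docs);

system('vcluster', '-colmodel=idf', "-clabelfile=$clabelfile", $matfile, $k) == 0
    or warn "vcluster failed: $?";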

The Document Agent is implemented in Perl, and it runs on a FreeBSD server.




4.6. Format Agent

4.6.1. Agent Design

The Format Agent welcomes any kind of file conversion task, as long as it knows how to do it. Currently there are two target formats, HTML and pure text, and the source formats are pdf, ps, ps.Z, ps.gz, and html.
4.6.2. Interface with Other Agents

When other agents ask for the tohtml or totext skills, the Format Agent looks up the given file name in the file table (our NFS). If the file exists, it performs the requested conversion and stores the results in the file table. Then the Format Agent sends back a message to notify the sender that the job is done.


4.6.3. Detailed Function Specification

    • Convert file formats
      When tohtml or totext skills are requested, Format Agent will do the conversion if it can.

4.6.4. Implementation Strategies

The following table shows the conversions available now and how they are accomplished:

Conversion: PDF to pure text
Method: pdftotext -raw PDF-file text-file

Conversion: PS to pure text
Method: ps2ascii PS-file text-file

Conversion: ps.Z to pure text
Method: uncompress, then ps2ascii

Conversion: ps.gz to pure text
Method: gunzip, then ps2ascii

Conversion: HTML to pure text
Method: remove HTML tags (everything enclosed in angle brackets)

Conversion: PDF to HTML
Method: pdftohtml -q -noframes -i PDF-file html-file

Conversion: PS to HTML
Method: ps2pdf PS-file PDF-file, then pdftohtml -q -noframes -i PDF-file html-file

Conversion: ps.Z to HTML
Method: uncompress, then perform PS to HTML

Conversion: ps.gz to HTML
Method: gunzip, then perform PS to HTML

The performance of the PS to HTML conversion is not very satisfying, because the PS files are converted to PDF first and then to HTML. But as far as I know there is no good way to do the conversion directly, so the method adopted in my implementation is an indirect one.
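A sketch of how such a conversion table can be driven from Perl follows; the tool invocations are the ones listed above, while the file paths are placeholders.

#!/usr/bin/perl
use strict;
use warnings;

# Convert one downloaded file to pure text, dispatching on its suffix.
sub to_text {
    my ($in, $out) = @_;
    if ($in =~ /\.ps\.(gz|Z)$/i) {
        (my $ps = $in) =~ s/\.(gz|Z)$//i;   # decompress first, then treat as plain .ps
        system("gzip -dc $in > $ps") == 0 or return 0;
        $in = $ps;
    }
    if ($in =~ /\.pdf$/i) {
        return system('pdftotext', '-raw', $in, $out) == 0;
    }
    if ($in =~ /\.ps$/i) {
        return system('ps2ascii', $in, $out) == 0;
    }
    if ($in =~ /\.html?$/i) {
        open my $src, '<', $in  or return 0;
        open my $dst, '>', $out or return 0;
        while (my $line = <$src>) {
            $line =~ s/<[^>]*>//g;          # strip everything enclosed in angle brackets
            print $dst $line;
        }
        return 1;
    }
    return 0;                               # unknown source format
}

print to_text('paper.pdf', 'paper.txt') ? "converted\n" : "conversion failed\n";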


4.7. Abstract Agent

4.7.1. Interface with Other Agents

The Abstract Agent is purely reactive: it acts only when someone asks it to perform abstract extraction. The Abstract Agent looks up the given text file name in the file table (our NFS). If the file exists, it tries to extract the abstract and returns the result to the sender.
4.7.2. Detailed Function Specification


    • Extract the abstract from a pure text document
      The extraction method is quite simple. I manually converted all the papers I had collected (about 200 files) to text and observed that the abstract usually starts with “Abstract” at the beginning of a line and ends with a pattern such as “1. Introduction”, “I.”, or “Keywords …”. So the Abstract Agent simply matches these patterns and extracts the abstract.
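A minimal Perl sketch of this pattern matching follows; the exact set of end-of-abstract patterns below is illustrative rather than the agent's full list.

#!/usr/bin/perl
use strict;
use warnings;

# Pull the abstract out of a plain-text paper, or return undef if none is found.
sub extract_abstract {
    my ($text) = @_;
    if ($text =~ /^\s*Abstract\s*[:.]?\s*\n(.*?)(?=^\s*(?:1\.?\s+Introduction|I\.\s|Keywords))/msi) {
        my $abstract = $1;
        $abstract =~ s/\s+/ /g;             # collapse line breaks and extra spaces
        return $abstract;
    }
    return undef;
}

open my $fh, '<', 'paper.txt' or die "cannot open paper.txt: $!";
my $content  = do { local $/; <$fh> };      # slurp the whole converted text file
my $abstract = extract_abstract($content);
print defined $abstract ? "$abstract\n" : "no abstract found\n";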




4.8. Author Agent

4.8.1. Architecture

The Author Agent tries to find authors' homepages. It first extracts the authors' names from the BibTeX and then submits the names to Google one by one to search for their homepages. The agent maintains a string-distance threshold to recognize the candidates returned by Google; the threshold in our implementation is a character error rate (CER) of 60%.
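A minimal sketch of this filtering with String::Approx is shown below; the percentage-style approximateness modifier is used as described in the module's documentation, and treating it as equivalent to the agent's 60% CER threshold is an assumption for illustration.

#!/usr/bin/perl
use strict;
use warnings;
use String::Approx qw(amatch);

# Keep only the Google result titles that approximately contain the author
# name, using the module's percentage-style approximateness modifier.
my $author = 'Pi-Chuan Chang';
my @titles = (
    'Pi-Chuan Chang - Home Page',
    'Department of Computer Science - Annual Report',
);
my @candidates = amatch($author, ['60%'], @titles);
print "candidate: $_\n" for @candidates;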
4.8.2. Skill and Interface

The skill provided by the agent is find-author.




  5. TECHNICAL ISSUES

5.1. UI Agent

  1. Lack of interaction with the user: the user has to wait a short time before the answers are presented. The UI Agent was designed to output an answer right after it is retrieved, but because of the restrictions of CGI and of user browsing habits (an answer may change its position when a better one arrives), I implemented it in C as a CGI program. Java applets could be one answer to this problem; if cross-platform support were not important, a UI client program on each user's computer would solve it even better.

  2. How long should I wait for a single query? An advantage of our system is that you do not need to know how many agents are working for you (the facilitator will warn you if nobody can serve your request). On the other hand, you can never know whether anyone is still working on the query. The agents will tell you either that they cannot find the paper or give a possible answer, but what if they are still working on it? A better system design (e.g., a smarter facilitator) could probably solve this issue.

  3. Network vs. performance: I ask for the abstracts of the papers in the answer set, but a paper may not yet have been downloaded by the Retrieve Agent. Should I make the user wait, or just skip the abstract?

  4. Strange URLs: links with strange formats are everywhere, and I cannot extract a file name from them (maybe a redirecting URL). They trouble me all the time, and I have no idea how to deal with them cleanly. I use MD5 to encode these URLs and use the digest as the file name in the database; a short sketch of this workaround follows this list. I do not know whether there is a better way to keep this problem away.
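The UI Agent itself is implemented in C; the following Perl fragment only illustrates the idea with the core Digest::MD5 module, and the ".dat" suffix is an arbitrary illustrative choice.

#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Derive a stable, filesystem-safe database key from an arbitrary URL.
sub url_to_filename {
    my ($url) = @_;
    return md5_hex($url) . '.dat';          # 32 hex characters plus an arbitrary suffix
}

print url_to_filename('http://tulips.ntu.edu.tw/search*chi/a?a'), "\n";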

5.2. Google Agent

The Google Agent uses six rules to judge the search results returned by the Google search engine. (The “like” comparisons below are computed with the Perl module String::Approx, which measures string approximation.)

Rule 0: The BibTeX contains a URL field.

Rule 1: The link returned is a ps or pdf file and the link name is the same as the BibTeX title.

Rule 2: We got a ps/pdf link whose title is not like the BibTeX title, but the snippet text given by Google contains the BibTeX title; we take this one as the desired result. Problem: sometimes the hit is another paper that references the questioned paper, or a list of someone's publications. We did not handle this situation, but such hits could perhaps be demoted when the word “publication” or “reference” appears.
Rule 3: The URL found by Google is not a ps/pdf link but an HTML link. We retrieve the HTML page to see whether it was produced by a converter such as the LaTeX2HTML program and whether the HTML title tag is the same as the BibTeX title.

Rule 4: If rules 0-3 all failed, we must work much harder: get the HTML page, parse all of its links, and compare each link's title with the BibTeX title; accept a link if the two look alike and the link type is a ps/pdf file.

Rule 5: The same as rule 4, but when no ps/pdf link is found, return the link with the most similar title.

All the rules are based on simulating the user experience of using the Google search engine. The agent returns nothing if all six rules fail.

The Google Agent is quite slow compared with the ACM/IEEE agents, because we must deal with ten links in each rule, and over-forking child processes (and grandchild processes) to do this job is not an efficient way. Using the six rules, nine of the twenty test BibTeX entries can be found.

5.3. ACM/IEEE Agent

Searching the ACM and IEEE databases is much easier than dealing with Google output. However, the ACM and IEEE web sites may limit the number of requests from a single IP address within a short period of time, so successive requests have a high probability of missing.

5.4. NTU Library Agent

The problem with the NTU Library Agent is that the search result is simply dynamic output from CGI scripts, so recording the URL is meaningless (all results share the URL http://tulips.ntu.edu.tw/search*chi/a?a with corresponding POST data, and we cannot record the POST data). Instead, we report the paper URL in HTTP GET form: http://xxx.xxx.xx/search.cgi?action=xxx&title=xxxx

The other problem is that the URL given by the NTU library is quite strange, e.g. http://tulips.ntu.edu.tw/search*chi/tLATEX%20:%20a%20document%20preparation%20system%20:%20user%27s%20guide%20and%20reference%20manual/tlatex+a+document+preparation+system+users+guide+and+reference+manual/1,1,1,B/marc&FF=tlatex+a+document+preparation+system+users+guide+and+reference+manual&1,,0,, which can be viewed in an ordinary browser but cannot be recognized by our Interface Agent.

5.5. Journal Agents (ntulib)

There are many online journal databases subscribed to by the NTU Library. How can we search among these online databases with their different layouts? A basic idea is to make use of the search interface on each website. Sometimes this strategy does not work well (certain papers cannot be found, or the page is protected by sessions); then the only way to access the information is to brutally browse all the titles and collect the information. The SDOS Agent does this kind of brute-force job.

  6. HOW TO SET UP OUR AGENTS

6.1. UI Agent and Retrieve Agent

    • List of directories:
        • uiweb/ : source of the UI agent.
        • uiweb/html : HTML files, pictures, and the CSS file for the web interface.
        • retrieve-agent/ : source of the Retrieve Agent.
        • mysql/ : MySQL library and include files.
        • libagent/ : our agent library and include files.

    • How to compile (under FreeBSD or other UNIX systems):
        1. ui-agent: To compile the UI agent, simply type "make all" under the uiweb/ directory. This generates a binary file called "uiweb.cgi". Copy it to uiweb/html/, and open paperagent.html to run it.
        2. retrieve-agent: In the directory retrieve-agent/, type "make all" to generate the binary "retrieve".

These programs were compiled and tested on FreeBSD, but have not been tested on other UNIX platforms yet.

6.2. Information Source Agents

    • Perl with the LWP and String::Approx modules can be installed on almost any OS (Unix-like, Win32, etc., subject to differences in the fork() implementation).
    • These agents run on a FreeBSD 3.5-STABLE machine.
    • Hardware: dual Pentium III 500 MHz CPUs, 512 MB RAM.

6.3. Document Agent, Format Agent, Abstract Agent and SDOS Agent

    • OS: FreeBSD
    • Language: Perl

These agents are implemented in Perl. The LWP and String::Approx modules must be installed for them to work. Note that some paths in the code should be changed to your own directories, for example:

#!/usr/bin/perl -w -I/home/onlyblue/Agents/utility

    • Some utility programs
      Since I used "pstotext", "pdftohtml", "ps2ascii", and "ps2pdfwr" to deal with the file formats, you will need to install them all; you can find them in the FreeBSD ports. I also use CLUTO to do the clustering, so you must install it and add the CLUTO executables to your PATH.

6.4. Server, Facilitator Agent, Author Agent, Citeseer Agent and SDOS Agent

    • OS: FreeBSD
    • Language: C

Change directory to libagent, AgentNetwork, facilitator, citeseer, and sdos in that order, and simply run make. Then start AgentNetwork first, the facilitator second, and the rest of the agents afterwards.

  7. REFERENCES

    1. Michael Wooldridge. An Introduction to Multiagent Systems.
    2. Joseph Bigus and Jennifer Bigus. Constructing Intelligent Agents with Java: A Programmer's Guide to Smarter Applications.
    3. Michael N. Huhns and Munindar P. Singh. Readings in Agents.
    4. Stan Franklin and Art Graesser. Is it an agent, or just a program? A taxonomy for autonomous agents. In Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, pages 21-35. Springer-Verlag.
    5. The Bibliography Agent Community. http://bcyang.dhs.org/work/spider/bib_agent.ps
    6. Michael Wooldridge. Intelligent agents. In Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence.
    7. Tim Finin, Richard Fritzson, Don McKay, and Robin McEntire. KQML as an agent communication language. In Proceedings of the 3rd International Conference on Information and Knowledge Management (CIKM'94).
    8. Jamey Graham. The reader's helper: a personalized document reading environment. In Proceedings of the CHI 99 Conference on Human Factors in Computing Systems, pages 481-488. ACM Press, 1999.
    9. Philip R. Cohen, Adam Cheyer, Michelle Wang, and Soon Cheol Baeg. An open agent architecture. In Readings in Agents.
    10. Andrew Kachites McCallum. Bow: a toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow
    11. George Karypis. CLUTO: A Clustering Toolkit. Department of Computer Science, University of Minnesota, Technical Report #02-017, August 26, 2002.
    12. Ying Zhao and George Karypis. Evaluation of Hierarchical Clustering Algorithms for Document Datasets. Department of Computer Science, University of Minnesota, Technical Report #02-022.

Notes

    1. Porter stemmer: http://www.tartarus.org/~martin/PorterStemmer/
    2. doc2mat: http://www-users.cs.umn.edu/~karypis/cluto/files/doc2mat.html
    3. The pdftotext software and documentation are copyright 1996-2002 Glyph & Cog, LLC.
    4. ps2ascii: L. Peter Deutsch <ghost@aladdin.com> was the original author; the current version has substantial improvements by David M. Jones <dmjones@theory.lcs.mit.edu>.
    5. pdftohtml: copyright 1999-2002 Gueorgui Ovtcharov and Rainer Dorsch. Based on Xpdf version 1.01.