Knowledge organisation by means of concept-process mapping



13.2 A Wikipedia introduction to Knowledge Organisation


Wikipedia (http://en.wikipedia.org/wiki/Knowledge_organization, accessed 12/06/2013) suggests:

“The term knowledge organization (KO) (or "organization of knowledge", "organization of information" or "information organization") designates a field of study related to Library and Information Science (LIS). In this meaning, KO is about activities such as document description, indexing and classification performed in libraries, databases, archives etc. These activities are done by librarians, archivists, subject specialists as well as by computer algorithms. KO as a field of study is concerned with the nature and quality of such knowledge organizing processes (KOP) as well as the knowledge organizing systems (KOS) used to organize documents, document representations and concepts.

The leading journal in this field is Knowledge Organization published by the International Society for Knowledge Organization (ISKO).

Simple Knowledge Organization System (SKOS) is a W3C recommendation designed for representation of thesauri, classification schemes, taxonomies, subject-heading systems, or any other type of structured controlled vocabulary. SKOS is part of the Semantic Web family of standards built upon RDF and RDFS, and its main objective is to enable easy publication and use of such vocabularies as linked data. See http://en.wikipedia.org/wiki/Simple_Knowledge_Organization_System.”
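To illustrate the kind of structured vocabulary SKOS supports, the following sketch represents a tiny concept hierarchy as plain (subject, predicate, object) triples. The SKOS property IRIs (skos:prefLabel, skos:broader) are real; the example concepts and the "ex:" prefix are invented for illustration, and no RDF library is assumed.

```python
# Minimal sketch of SKOS-style triples (subject, predicate, object).
# The SKOS property IRIs are real; the concepts are invented examples.

SKOS = "http://www.w3.org/2004/02/skos/core#"

triples = [
    ("ex:science", SKOS + "prefLabel", "Science"),
    ("ex:physics", SKOS + "prefLabel", "Physics"),
    ("ex:physics", SKOS + "broader", "ex:science"),  # Physics is narrower than Science
]

def broader_of(concept, triples):
    """Return the directly broader concepts of a given concept."""
    return [o for s, p, o in triples if s == concept and p == SKOS + "broader"]

print(broader_of("ex:physics", triples))  # ['ex:science']
```

Publishing such triples as linked data is then a matter of serialising them in an RDF syntax such as Turtle.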


13.3 Schema representation


(Paquette, 2010) discusses the relationship between structured knowledge representation and learning, which he sees as inextricably linked. Thus, understanding is impossible without identifying and classifying objects and ideas, and linking them by association in some organised way. These mental structures, or schemas, vary in complexity. The concept of schema as the building block of mental structures is now well established in cognitive psychology. The language and the thinking derive initially from the work of Jean Piaget (Inhelder and Piaget, 1955), who discussed the meta-concepts of schema, structure, strategy and operation to describe cognitive processes. According to Piaget, growth of the intellect is achieved through increasingly logical, numerous and complex schemas. Such schemas play a central role in the construction of knowledge, which in turn is essential to the learning process.

“Learning is a process by which a representation of a certain knowledge representation is transformed into another representation of that knowledge. Learning is a process, whereas the representation of knowledge is both the starting point and result.” (Paquette, 2010)

The G-MOT (and therefore Conceprocity) representation system is based on the theory of schemas. We distinguish between two broad categories of schemas: declarative (or conceptual) and procedural. The first category involves data, while the second includes the procedures and methods used to process data in order to organise information. We also follow Paquette in recognising a third category of conditional or strategic schemas, which consist of principles having one or more conditions that describe context and conditional sequences. Those conditions can either be embedded in principles (in both G-MOT and Conceprocity) or be made explicit in the form of logical connectors attached to events (Conceprocity only).
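The three categories of schema might be sketched in code as follows. This is purely an illustrative sketch: the class names and the example principle are our own inventions and are not part of G-MOT or Conceprocity.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative sketch of the three schema categories; all names are invented.

@dataclass
class Concept:          # declarative/conceptual schema: describes data
    name: str
    attributes: List[str] = field(default_factory=list)

@dataclass
class Procedure:        # procedural schema: transforms data
    name: str
    apply: Callable[[str], str]

@dataclass
class Principle:        # conditional/strategic schema: a condition guarding an action
    condition: Callable[[str], bool]
    action: Procedure

doc = Concept("document", ["title", "author"])
index = Procedure("index", lambda text: text.lower())
rule = Principle(condition=lambda text: len(text) > 0, action=index)

if rule.condition("Some Text"):
    print(rule.action.apply("Some Text"))  # some text
```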

13.4 Knowledge representation


(Hjørland and Nicolaisen, 2005) discuss knowledge representation. They remind us that “Knowledge representation is thus depending both on the objective pole: what knowledge exists to be represented and on the subjective pole: the representator or selector.” (Hjørland and Nicolaisen, 2005).

We can summarise their findings as in the table below:



Table: Knowledge representation according to (Hjørland and Nicolaisen, 2005), with additional commentary marked "Comment:"

Framework: AI: symbol representation and manipulation
Technique: Logic-based representations
Characteristics: Declarative sentences and inferencing. Comment: we would suggest that propositional calculus, predicate calculus, first-order logic and Horn clauses (as used in Prolog) fall within this category.

Framework: AI: symbol representation and manipulation
Technique: Procedure-based representations
Characteristics: The meaning of a knowledge base is in its use.

Framework: AI: symbol representation and manipulation
Technique: Frame-based representations
Characteristics: “Frame-based systems are knowledge representation systems that use frames, a notion originally introduced by (Minsky, 1975) as their primary means to represent domain knowledge. A frame is a structure for representing a concept or situation such as "restaurant" or "being in a restaurant". Attached to a frame are several kinds of information, for instance, definitional and descriptive information and how to use the frame. Frames are supposed to capture the essence of concepts or stereotypical situations, for example going out for dinner, by clustering all relevant information for these situations together. This means, in particular, that a great deal of procedurally expressed knowledge should be part of the frames. Collections of such frames are to be organized in frame systems in which the frames are interconnected.” (Hjørland and Nicolaisen, 2005)

Framework: AI: artificial neural networks
Characteristics: Parallels are drawn between neural nets and behaviourism. There is an emphasis on noting stimulus and response in an empiricist tradition and comparatively little interest in what is happening within the black box. Feedback and/or feedforward are emphasised.

Framework: Statistical analysis of large corpora of data
Characteristics: “The statistical approach to AI involves taking very large corpora of data, and analyzing them in great depth using statistical techniques. These statistics can then be used to guide new tasks. The resulting data, as compared to the knowledge-based approach, are extremely shallow in terms of their semantic content, since the categories extracted must be easily derived from the data, but they can be immensely detailed and precise in terms of statistical relations. Moreover, techniques - such as maximum entropy analysis - exist that allow a collection of statistical indicators, each individually quite weak, to be combined effectively into strong collective evidence. From the point of view of knowledge representation, the most interesting data corpora are online libraries of text. Libraries of pure text exist online containing billions of words; libraries of extensively annotated texts exist containing hundreds of thousands to millions of words, depending on the type of annotation. Now, in 2001, statistical methods of natural language analysis are, in general, comparable in quality to carefully hand-crafted natural language analyzers; however, they can be created for a new language or a new domain at a small fraction of the cost in human labor” (Davis, 2001)

Large corpora of data may be approached by methods related to empiricism, which seems to be what Ernest Davis is suggesting. There is an important difference, however, between traditional empiricist approaches to knowledge representation and “text corpora” approaches. The traditional approach represents what is considered knowledge by the person doing the representation. There is only one voice present. In large corpora of texts many voices are present (what kind of voices varies according to how the text corpus is selected, e.g. whether it consists of newspapers or scholarly papers).

Comment: textual analysis tools such as Leximancer are capable of analysing large text corpora and summarising their findings in the form of concept maps. The remark concerning “many voices” is valid and important. For this reason it is pragmatically desirable to subset large text corpora and to analyse them separately as well as together.

Framework: Semantic networks
Characteristics: Involve nodes and links between nodes. The nodes represent objects or contents.
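The frame idea quoted above can be sketched as slot-filler structures with inheritance along an "a-kind-of" link. The frames and slot names below are invented examples loosely in the spirit of (Minsky, 1975), not taken from it.

```python
# Minimal frame sketch: frames with slots and an "ako" (a-kind-of) parent link.
# A missing slot is inherited from the parent frame; all names are invented.

frames = {
    "restaurant": {"ako": None, "slots": {"serves": "food", "has": "tables"}},
    "fast-food restaurant": {"ako": "restaurant", "slots": {"serves": "fast food"}},
}

def get_slot(frame_name, slot, frames):
    """Look up a slot, climbing the a-kind-of hierarchy when it is absent."""
    frame = frames.get(frame_name)
    while frame is not None:
        if slot in frame["slots"]:
            return frame["slots"][slot]
        frame = frames.get(frame["ako"])  # inherit from the parent frame
    return None

print(get_slot("fast-food restaurant", "serves", frames))  # fast food (local slot)
print(get_slot("fast-food restaurant", "has", frames))     # tables (inherited default)
```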




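The nodes-and-links structure of a semantic network can be sketched as a set of labelled edges. The nodes, relations and the transitive "is-a" walk below are invented for illustration.

```python
# A semantic network as labelled edges: (node, relation, node).
# Nodes and relations are invented examples.
edges = [
    ("canary", "is-a", "bird"),
    ("bird", "is-a", "animal"),
    ("bird", "can", "fly"),
]

def related(node, relation, edges):
    """Follow links of one relation type from a node."""
    return [t for s, r, t in edges if s == node and r == relation]

def isa_chain(node, edges):
    """Walk 'is-a' links transitively, a basic semantic-network inference."""
    chain = []
    frontier = related(node, "is-a", edges)
    while frontier:
        nxt = frontier.pop()
        chain.append(nxt)
        frontier.extend(related(nxt, "is-a", edges))
    return chain

print(isa_chain("canary", edges))  # ['bird', 'animal']
```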
(Davis, Shrobe and Szolovits, 1993) discuss knowledge representation. Randall Davis and his co-authors make a clear distinction between what they call reasoning and representation, which they point out are intertwined in many knowledge representations such as Minsky’s frames. They suggest user-supplied axioms, theorems and lemmas as parts of a logic-based approach to reasoning.

“The good news here is that by remaining purposely silent on the issue of recommended inferences, logic offers both a degree of generality and the possibility of making information about recommended inferences explicit and available to be reasoned about in turn” (Davis, Shrobe and Szolovits, 1993)

(Davis, Shrobe and Szolovits, 1993) do considerable service, inter alia by emphasising the necessity for triggers and procedural elements in knowledge representation, and by pointing out that these are implicit in Minsky’s frames. However, we regard as inadmissible their suggestion of logic as a programming language, as proposed by (Kowalski, 1974), since this approach is inaccessible to the large majority of knowledge workers. Furthermore, we make no suggestion that Conceprocity should develop in the direction of machine execution. Instead, the emphasis is very much on enabling ordinary knowledge workers, perhaps mentored or working collaboratively, to achieve real understanding and learning about the situation that they are facing.
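Although we regard logic programming as inaccessible to most knowledge workers, a small sketch shows what a Horn-clause representation of the kind (Kowalski, 1974) proposed looks like in practice. The facts, rules and forward-chaining strategy below are invented illustrations, not Kowalski’s own formulation.

```python
# Forward chaining over Horn clauses (rules with a single conclusion).
# Facts and rules are invented examples of the logic-based style.

facts = {"bird(tweety)"}
rules = [
    (["bird(tweety)"], "has_wings(tweety)"),
    (["has_wings(tweety)"], "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print("can_fly(tweety)" in forward_chain(facts, rules))  # True
```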


