Unknowledge: on the imaginary of the artificial?

Anders Michelsen

Assistant professor, Coordinator of Visual Culture Studies

(http://www.hum.ku.dk/kunst/Ansatte/Ansatte_Anders_Michelsen.html)

Department of Arts and Cultural Studies

University of Copenhagen


Karen Blixen Vej 1.4

DK-2300 Copenhagen S



E-mail: amichel@hum.ku.dk


Keywords:

Unknowledge, artifice, creativity, “the world as design,” “the imaginary institution of society,” interface, design of complexity, self-creation



Abstract:

The comments forwarded in the review from the EAD06 panel have inspired a re-focus on the issue of creative imagination in relation to “unknowledge,” in comparison with the abstract first submitted. Thus, in the following, imagination is seen as an approach to the issue of unknowledge. The paper debates the proposed theme of ‘unknowledge’ as linked with a new role for the imagination, related to the comprehensive state of the artificial today, what is termed a ‘design of complexity’. According to the view presented, any form of knowledge – any artifact – is ontologically inconceivable without recourse to a constituting creative imagination, that is, beyond affirmation/critique of a given or assumed knowledge base. The paper proceeds in two parts. First, Otl Aicher’s “the world as design” is used to introduce the idea of a design of complexity as a ‘made’ human form of self-organization, pointing towards what Cornelius Castoriadis terms self-creation, or the “imaginary institution of society.” Second, this view is exemplified by a discussion of the “imaginary of the artificial” in relation to the issue of interface in postwar computing.



  1. The ‘Deep Web’: unknowledge and manifest complexity

In July 2000, in a white paper entitled “The Deep Web: Surfacing Hidden Value,”1 Michael Bergman from the Internet firm BrightPlanet threw a bomb into the then highly fashionable debates about ‘content’ on the Web. At a time when almost any radical assumption had been applied to the Internet, the paper – backed by the firm’s new search tool, “The Lexibot” – presented the baffling assertion that, content-wise, the Internet was 500 times larger than estimated, and therefore that the ‘known’ net was only a shallow surface of a much more complex state: “a vast reservoir of Internet content (…) 500 times larger than the known ‘surface’ World Wide Web” existed.2 Established generations of search engines – whether based on random link-following or on the ‘directed’ approach ranking pages by popularity – remained entrenched at the surface.

While the BrightPlanet paper might illustrate daily frustrations among web users, it prompted a characteristic comment from a veteran of Danish neo-connectionism, the scientist Benny Lautrup of the Niels Bohr Institute.3 He argued that the Lexibot results verified the idea of ‘networking’ as ‘scalable’, on a par with the complexity of the human brain, and, moreover, that the net could be compared to an organic phenomenon, e.g. a “cancer tumor”: “if one follows this idea to its consequence, the level of connection of the Internet is akin to something which could be life.”4 Lautrup’s idea agreed with the ‘neo-cybernetic’ invigoration of the computational heritage in the 90s, but did not fit well with BrightPlanet’s stated intent to engineer a tool for efficient information retrieval.

However, this difference between neo-cybernetic ‘speculation’ and transparent ‘engineering’ is only apparent. On closer inspection, the phenomenon of the Deep Web points to a different issue, what I will term a design of complexity, that is, to a possible review of how design is linked with artifice ‘post’ computing. Beyond the speculations of the computational heritage and the ongoing complication of engineering science, we may pose new and different questions regarding designed artifice. Ezio Manzini has argued, significantly, for the existence of what I will term a schism between the creation of the artificial and the lack of insight into the distinctive character of the artificial.5 This appears pertinent in relation to the debate about the Deep Web: (a) In both cases the status of the artificial appears unclear. In Lautrup’s argument it borders on mystification, due to an assumed metamorphosis of a designed system; in BrightPlanet’s project the artificial is conceived in terms of dysfunctional engineering to be solved by appropriate design measures. (b) In both cases, however, the issue of creativeness, of creation – in the basic creative sense of “a plan, or schema conjectured by humans of something to be realized” (the Oxford English Dictionary’s definition of design from 1588) – seems to be negligible. Put differently, ‘in between’ speculation and engineering, design in the sense of creative invention, conception, and articulation seems to pose a different question: that is, the bringing forward of a complex reality – a design of manifest complexity, so to speak – as a process of creation ex nihilo with an ontological contingency beyond inherited determinations and constraints.

What neither speculation nor engineering addresses is how the apparently Gargantuan artifact of the Deep Web comes across as creation in a given context, and moreover, creation by what definition? From a perspective of creation – what Cornelius Castoriadis terms “the imaginary institution of society”6 – what is being said is de facto devoid of meaning in at least two ways: first, it neglects any definition of design as creation tout court, that is, at all, and second, as a particular novel form of designed complexity, in some capacity, for instance, of the artificial. To put it conversely: with the Deep Web we encounter a threshold for design: a schism between how an artifact may be generated as de facto organization, and how it is valorized as creative meaning.

In relation to the issue of unknowledge this is important: the determinant stipulation, in a broad sense, of design by way of defined knowledge – the “knowledge-base position” – is questioned by creativity coming to the fore. In between speculation and engineering appears not “unknowledge,” in some capacity, but rather unacknowledged forms of creation, what I will term the imaginary of the artificial. Phenomena such as the Deep Web may put the agreed assumption of what design is at stake by introducing a form of artifice which renders the issue of unknowledge susceptible to a quite different problematic: an artifice which may be designed, treated, and used – that is, created (also from the perspective of a certain unknowledge) – yet which remains beyond a clear grasp of creation per se; that is, the imaginary of given design strategies may not coincide with an elaborate reflection upon the wider impact of design in today’s world. An example: the newspaper report on BrightPlanet conveyed that the efficiency of the Lexibot multiplied the returns on a given query – e.g. “percolating network” – from 58 hits on Google to 1558 hits.7 But shouldn’t we ask – or emphasize, perhaps naively – how such a return could possibly be digested, not to mention creatively approached, by a user, that is, by any user? Put differently, the Deep Web points to a novel state of artifice by way of its complex, yet creatively designed, incomprehensibility for humans, as debated variously – directly and indirectly – in the complexity ‘paradigm’, e.g. as emergence, surprise, autopoiesis, organizaction, third order and so forth.8 Contrary to much of the impetus of this paradigm, I argue that creativity is not rendered irrelevant by designed systems relying on various aspects of the paradigm, but brought to the fore.

In the following I will debate this as a reason for reviewing unknowledge by juxtaposing emphatic creativity with the notion of complexity cum design. First, I will treat a certain ‘genealogy’ of a design of complexity in Otl Aicher’s programmatic text “The World as Design.” Second, I will take a brief look at unknowledge from the perspective of interface in postwar computing.


  2. The imaginary of the artificial: on design of complexity

In “The World as Design,”9 Otl Aicher, co-founder of the HfG Ulm, outlines a connection between complexity and the issue of the purposive. Aicher does not explicitly treat complexity, but it is clear from his argument that he debates, directly and indirectly, how complexity comes to be and, conversely, how complexity has an impact on how we see the world once we base ourselves on this novel view.

Aicher’s argument starts with the disclosure of the complex in Darwinian and post-Darwinian thought: the processes of nature are neither wholly planned nor exclusively contingent, but depend on complex situational dynamics (nature works its way ahead through an extended systemic bricolage). The understanding of the world as complex must acknowledge a new form of self-organization which stipulates that the world is neither a “constantly predetermined cosmos” nor a “process of development into which one is born,” Aicher writes.10 However, the most decisive complement to this understanding is still emphatically missing: an understanding of the world as made, as “a product of a civilization,” that is, “as a world made and organized by men.”11 Darwin’s real achievement is:

“there (…) [is] no longer any point in looking for causes for particular effects, the effects themselves are the cause of world development. what confirmed itself in practice is the selective principle of forms of appearance. no reason of any kind controls the development of the world, its way is determined by the selective principle of effect, of effectivity. just as everything arranges itself in the interplay with other things.”12


Now, at times the idea of complexity will be taken simply as more explanation, more knowledge – more ratio – but Aicher emphasizes that this cannot really be proven. Complexity implies an emphasis on effect, and this is abundantly illustrated by the computer and the notions of logic associated with it, from Hilbert, Gödel and von Neumann to the ‘computational turn’ envisaged by Alan Turing in the late 30s:

“alan turing addressed david hilbert’s, kurt gödel’s and john von neumann’s problem of how mathematics could be proved in its correctness and competence in the abstract, by logical conclusions and evidence. he did not produce his proof through a new chain of logical reasoning, but by the working method of a machine that could operate mathematically, i.e. carry out processes and achieve effects. this calculating machine became our computer. the effect explains the law. that was pretty much the end of mathematics as theory. mathematical assumptions cannot be proved conclusively, their correctness cannot be definitively grounded in logic. today mathematics are created and confirmed by computer. this is not the end of mathematics, but the end of a kind of mathematics that justifies itself by consequential logic, by compelling conclusions.”13


From now on, proof was the equivalent of effect in a new sense, i.e. of what can effectively be proven by a computational ‘model’. Importantly, this model gains an independent stature, foregrounding the interdependence of design and creativity. The model turns the inherited principle of “Verum et factum convertuntur” – meaning that humans can “(…) have rational knowledge only about that of which we are the cause, about what we have ourselves produced”14 – into an ever more dissipative and creative artificiality, confronting knowledge in a new way. This artificiality is most often related to application, that is, to effectuated – effective – computation, or, simply, effect, to be seen and acknowledged in the world, e.g. as design. Hence if something can be modeled it can be known, but not really submitted to proof in the inherited sense of causality or spiritual insight. Second, this form of effect goes along with a stunning increase in the dissemination of the artificial, to the extent that some observers, notably Herbert Simon, from the 70s onwards see the artificial as the quintessentially human, in need of a particular scientific (for Simon computational/evolutionary) effort.15

The notion of design is thus changed by means of concrete effect: by means of direct implementations (e.g. creating new fields such as ‘web-design’) or by means of a more principal ‘effect’ of implementation (e.g. changing the idea of knowledge). Whereas the idea of the purposive prevails by way of various formulas of the causal in the modern tradition (e.g. in the ‘programmatics’ of ‘form follows function’, ‘the grid’, ‘the box’, ‘planning’, ‘methodology’, etc.), the new idea is different. Design becomes not a program for something to be carried out, but designed complexity cum emergence. Or better, something particularly associated with the complex in both the sense of model and of made: there cannot be complexity if there is not design, and to Aicher it proves that if the world is complex, it is no less of our making, albeit this must be acknowledged in a new way. Out of the ‘breach’ appearing in between causality and cosmology pours a new condition of making:

“we are becoming aware that man, whether for good or bad, has stepped outside nature. he is bound to it, but he builds a second world over it, that of his own constructions. our world is no longer nature embedded in the cosmos. in a pubertal rush of self-decision we have detached ourselves from alliance with universals and follow our own ends. these turn out to be as daredevil as they are fatal and we would have to accept it if, because of our constitutive autonomy, mankind were to cease to exist in the next century.”16
Complexity cannot be apprehended fully without acknowledging the made. Built into the discovery of complexity is the precondition of made effect, or to put it differently: of creativity in an unacknowledged sense of designed artifice. Jean-Pierre Dupuy argues that von Neumann, who with Ross Ashby pioneered the contemporary notion of complexity, came to acknowledge that the problem of computing – the automaton, as it were, of the 40s and 50s – was linked with a new form of ‘dissipative’ complexity, de facto pointing to a crisis of the artificial:17 designed artifice at large. The principled “surprise” of complexity, as John L. Casti would call it much later, is also a surprise of made self-organization.18

The discovery of complexity implied not the direct, naive image of man19 focused upon in the first decades of the computational heritage, but a radical extension of a different problem of the human: the made as a peculiar specificity of humans, for designers, for the ‘creatures’ that are to carry the burden of knowing the complex by implemented model. The creative, understood as a peculiar ‘subcontinent’ of Western modernity (rationalism and romanticism alike), something more or less obscure, ‘from the depth of the human soul’ (Kant),20 returns with a vengeance: to the discovery of the complex clings a necessary acknowledgement of effect cum creation in a wider and different sense.

The focus on effect is thus more than a historical accident, as Aicher points out: it is a principal condition for the acknowledgement of complexity. You cannot have one without the other, as the song goes. Insofar as complexity is bounded and conditioned, the issue of the creative may be seen as one such condition, as a (perhaps) emphatic boundary which distances the idea of creation from the inherited subcontinent. So a first approximation of what may be understood as the design of complexity can very well be an expanded view on the condition and conditionings of creativity. From a design perspective this must raise the issue of constitutive meaning: any understanding of complexity beyond the formal expression – modeling – must resort to the issue of how the made can be addressed by the maker, that is: to what meanings the made can aspire and attain, or, to put it differently, what significations can be attached to the constitutive meaning of design eo ipso.

This may be at odds with certain interpretations of complexity which will emphasize that human doings cannot be resolved at the level of design, that in the human strata of the real ‘effect’ must be apprehended as a result of “human action but not of human design,”21 that is, that on this ‘evolutionary’ level the complex is beyond human grasp because of its complexity and can only be leaned on in couplings. Second, and of equal importance, it points to what we may term the predicament of how to understand the artificial on the level of complexity: that is, when objects are composed in, or made into, systems, and systems are composed or made into something complex, by effects of design. To put it differently: the design of complexity is a complex endeavor, but it is so because it is predicated emphatically on design in a new sense, as pertaining to new conditions of the human.

Complexity is in itself, so to speak, creation, design at large, or, more correctly, design under conditions of the human specified by way of complexity, as indicated above. The notion of the complex is aligned with formal organizations – dimensionings, by way of modeling, of matter, energy, and information in various ways (e.g. within the computer processor, or for that matter, the Internet) – the understanding of which necessarily becomes an understanding of how creation is possible for humans, in a ‘human strata of the real’ (Castoriadis).

In L'auto-organisation. De la physique au politique (1983),22 Paul Dumouchel & Jean-Pierre Dupuy debate whether a human social world can be grasped within the ‘paradigm’ of self-organization underwriting complexity: that is, how the theme of self-organization seems to become a distinctive dimensioning of a human strata of the real, by an ontologically constitutive “ordre de contenu.”23 To this end Dumouchel & Dupuy indicate a peculiar autonomous paradoxality, which they present in three versions:24 1) The issue of “closure” of autonomy, where self-organization comes to attain a closed self-reliance by way of paradoxical constitution by the external, e.g. in religious societies. 2) The autonomous self-organization of modernity, where the internal is seen to constitute a peculiar autonomy in a binding relation to the external, e.g. nature. 3) The open or revolutionary autonomy as conceived by Cornelius Castoriadis, which raises the question of social and historical – human – form’s independence ex nihilo, that is, a translation of paradoxality into relations between formal organization and a constitutive creative imagination, i.e. “the imaginary institution of society.”

Thus anything that happens to be in the social and historical world which makes up the reality of humans can be seen as inconceivable without recourse to imagination, in Castoriadis’s terms: “(...) without it any determination (…) remain incomplete and finally incomprehensible.”25 The imaginary must be seen as a significatively distributed instantiation, in a basic and profound sense, of what Castoriadis terms “self-creation,” the instituting of a “complete system of the world,”26 wherein collective imaginary significations create an organizational strata of the real in a ‘proper’ human sense: “Social imaginary significations create a proper world for the society considered – in fact, they are this world (…).”27

To Castoriadis this constitution may be conceived of as what is traditionally theorized as the formation of images – forms – in the most general sense, including ideas of invention and creation, but it must also pertain to something else: a “radical imagination” which is constitutive exactly because it is placed “before the distinction between ‘real’ and ‘fictitious’”; “(...) it is because radical imagination exists that ‘reality’ exists for us – exists tout court – and exists as it exists.”28 Moreover, this raises a constitutive issue of constraints. Radical imagination is in principle unbounded and may lead anywhere, but in practice it does not. Creation, in short, is in principle limitless but in practice subjected to a number of constraints, which Castoriadis summarizes schematically as “external” (e.g. nature), “internal” (e.g. dispositions of subjects), “historical” (e.g. forms of sociality), and, finally, “intrinsic” constraints relating in particular to how the specificity of the imaginary institution is made “coherent” and “complete.”29




3. The imaginary of the artificial: interfaces and knowledge
I argue for a possible genealogy in between the speculation of the computational heritage and the naivety of engineering. This pertains principally to an expanded issue of a constitutive “ordre de contenu,” in Castoriadis’s terms the “imaginary institution of society.” One highly apt way of approaching this in terms of a design of complexity is to focus on the rich issue of interface within the computational heritage. In this regard, Wulf Halbach has presented an interpretation of one of the ‘inaugural’ moments, the “Turing test,”30 which is particularly interesting in this context because it indicates that within the very problematic defining the machine early on, it is possible to discern immanent issues of knowledge: of ‘knowing’ how to create the artifice of computing, that is, the creative – yet knowledgeable – ‘forming’ of the computer.

The well-known genius of this idea is not only to envision a machine tricking the human, but to reverse the burden of proof onto the human side, which is to prove the machine not human. However, Halbach argues, the set-up is flawed. Turing introduced a definitional distinction based on a ‘made up’ yet principal distance: a symbolic language and a telex machine (teleprinter) facilitating the test. Thus the comparison is in no sense direct, but inherently dependent on a principal ‘mediation’ enabling the test.31 The architecture of the test is de facto counter-productive to a comprehensive comparison and can thus not lead to the envisaged field of parallelism between humans and machines. On the contrary, it presents a novel problematic, which should prove crucial to subsequent attempts at Human-Computer Interface and Human Factors research.32 The unacknowledged price for establishing Turing’s test was a new liminality or boundary (in German, interestingly, termed the “Schnittstelle” [literally: ‘cut point’]), later to be termed the GUI, “graphical user interface.” This issue proved not only that machinic computation and intellectual powers are quite different, but that direct “(…) couplings [Koppelungen] between humans and machines – as differing from between machines (…) are not possible.”33 According to Halbach the test made clear that human consciousness can only be coupled with a machine through a peculiar “capture” (Halbach uses the German term “gefesselt,” i.e. ‘captured’ or ‘bound’).34 Mediation – the apparently innocent telex – jumped into the foreground:

“Despite all the efforts, Human Computer Interfaces and Human Computer Interaction are confronted with the problem that the coupling [Koppelung] between two different systems (man and machine) employs different process-codes in their communication. To put it differently: ‘The hardest part of communication is the last four inches’.”35
Of course this point is abundantly affirmed by any practical design effort in the field, from the first chaotic cablings of machines, through the implementation of what is still incorrectly termed “peripherals” (keyboard, mouse, screen, etc.), to the “user revolution”36 towards the late 80s. To put it differently, the innocent telex points to at least two sets of creative constraints for computer design: what I will term ‘extro-version’ and ‘intro-version’:


  1. Extro-version: The telex proves to be a peripheral assuming a central role for the entire system. What prevails is not a comparison of intellects or a radicalized parallelism, but the extravagance of a secondary machine (the telex) to be displayed and handled as if it were the actual center of the computing device. One may imagine with what excitement everybody would wait by the telex to see what appeared, to see whether the test was going one way or the other. Thus, the telex was to highlight not only the practice of mediation but the importance of interfaced use, or, as it was to become in the late 80s, “situated” use.37

  2. Intro-version: Second, the telex functions as a kind of ‘built-in’ device, despite its clearly peripheral character. Because of the test’s ambiguity, as rightfully remarked by Halbach, the telex is not only an unacknowledged form of mediation, it also displays anonymity: once the communication is established, the telex withdraws into the background, and the test moves into the foreground. This anonymity takes on a life of its own as an interface use which “disappears” into the virtual, as it would be known from the first virtual reality systems in the late 80s, or as a vision of a totally distributed ubiquitousness, built in everywhere, at ease, ready to hand.

Turing’s trivial telex thus delineates not only a future battlefield for design strategies, as witnessed in the confrontations between ubiquitousness and virtual reality in the 90s,38 it also marks out a future of strategic experimenting with how computing can be made to appear as sensible, that is, be rendered ‘knowledgeable’ in terms of use, of designed ‘usability’. The issues of extro-version and intro-version thus come to reappear on several levels: a) as different modes and strategies of design, b) as different modes of assuming and distributing computing artifice, c) as different visions of how technology may and can establish relations in the world. It is possible to argue – perhaps with a slight overstatement – that from Turing’s trivial telex to the prospect of general pervasiveness in the 1990s runs a direct, fundamental and yet not superseded issue of interface which is essential to any human understanding of phenomena relying on mechanical computation in the contemporary sense. But, importantly, from this follows also that the strategic options cannot really be seen as forms of knowledge, appearing by a principal relation to forms of ill-defined, incomprehensible states of unknowledge, etc., but as differing modes of creative addendum to the world: that is, of getting to ‘know’ how to create an artificial form.

However, this should only be considered a first step in a critical interpretation of the test. An even less acknowledged, but comparatively more important issue must be brought into focus: one not of interface in the ‘strict’ sense, but of creation per se. That is, not of how the test actually proceeds when communicating symbols between the different ‘sides’, but of how the telex introduces a creative idea of how the two ‘sides’ can be coupled, and moreover, coupled at all – i.e. how exactly this solution de facto becomes the only possible solution, or at least the most feasible (in any sense, for Turing and many of his adherents and critics). The most radical aspect of Halbach’s interpretation of the Turing test is the indication that behind the issue of mediated architecture, behind the telex, lies a problem which must become increasingly complicated and in turn change the whole agenda, not least with the increasing proliferation of computer technology in various contexts: what I have termed the imaginary of the artificial.

Within the constraints de facto imposed by the set-up of the test, no other architecture, or no other family of architectures, could be imagined in the late 40s, or, at least, we may say post factum, no other ‘came to mind’. It ‘had’ to be the telex, i.e. an electronic/electric communication device of a certain nature, with a certain history, with, so to speak, an anonymous and unreflected approval (e.g. during WW2). Now, this is not a trivial remark, even if it plays a trivial role in Turing’s argument. It is perhaps the most important aspect of the whole experience. It cannot, I believe, be overstated.

The telex had to take its place not only for lack of a better device, but because of a specific imaginary act which apparently took exactly this form, given the conditions for testing in the late 40s. What appears is thus an imaginary act, and moreover, not any wild fancy, but, on the contrary, an issue of bounded imaginary constitution – under “constraint” (Castoriadis) – creating a solution with a number of aspects and, moreover, a number of implications. What we encounter is at least twofold: an act of creative genius and, immediately, sticking to this creative articulation, a horizon of constraint. This relation between creative solution and creative constraint conditions not only a practical detail, but in fact the entire framework for the technology in question. Thus it is not at all innocent, but vital to the establishment of the machine as constitutive meaning, as “ordre de contenu.”

While it may be argued that the prime reason for the Turing test is the intellectual exploration of the potential of a new machine, of universal computation in situ – as indeed it was going to be from the 50s onwards, reaching unparalleled heights with connectionism, networking, etc., early and later complexity in their various instances – all this remains bound to interface: that is, to the envisioning of how a machine may operate, i.e. adapt, or better, be ‘knowledgeably’ invented, in the world.

Thus imagination proves to be indispensable not only for function, and thus for computation, from Turing to the Deep Web in the late 90s, but for the emergence of knowledge per se. Moreover, and importantly, ‘behind’ conjectural knowledge is to be found not primarily unknowledge – not necessarily implicit gaps in scientific procedure, as reviewed by the problems of methodology, from the early calls for complexity, through ill-defined or wicked problems, to tacit knowledge and more recent appraisals of situated use – but a principal factum of a creative addendum. The real act of genius in Turing’s experiment is not the humiliation of humanism, but the bringing to the fore of mediation as artificially embedded imagination, conversely proving to us that computation cannot be distanced from imagination, even if this has been a recurring theme from strong AI to the fascinations of the ‘cyborg’. Or, in impertinent terms: the genius of Turing was not to set up the test, but to demonstrate that apparently he could not imagine otherwise.

The comparison between machine and man was a creative bringing together of disparate elements in a visionary, yet desperate, attempt at understanding the prospects of mechanical computation, and the result is no less significant. The creativity of subsequent design of computer-based systems portrays a unique, yet diverse, complexity: ‘knowing the interface’ between humans and machines is still under the spell of unknowledge – not of design, that is, but of imagination.

Anders Michelsen, assistant professor and coordinator of visual culture studies at the Department of Arts and Cultural Studies, University of Copenhagen, since 2002.
His research topics and interests lie within the interdisciplinary field emerging from contemporary art and culture, design and technology, history and globalization, especially as related to visual culture, design, computer media, and creative imaginaries. He is preparing a project on new fundamentalisms and visual culture. He has co-authored, co-edited and contributed to a number of books, anthologies, journals, catalogues, and papers in Denmark and elsewhere. He has been advisory editor of Atlantica Revista de Arte y Pensamiento, CAAM, Gran Canaria, since the mid-90s (http://www.caam.net/en/atlantica.htm). Since the early 90s he has worked as a freelance art critic and curator of art and design exhibitions. A full list of publications since 2002 and samples for download are available at: http://www.hum.ku.dk/kunst/Ansatte/Ansatte_Anders_Michelsen.html

Anders Michelsen

Department of Arts and Cultural Studies

University of Copenhagen


Karen Blixen Vej 1.4

DK-2300 Copenhagen S


Phone: + 45 35 32 82 27 (direct)

Fax: + 45 35 32 82 22



E-mail: amichel@hum.ku.dk





1 Michael Bergman, The Deep Web: Surfacing Hidden Value. White Paper. BrightPlanet.com LLC, 2000. Current version at http://brightplanet.com/technology/deepweb.asp

2 Ibid., p. iii f.

3 Cf. Lautrup’s statements in Robin Engelhardt and Christian Madsbjerg, “Nettet bugner af ukendt liv” [The net is teeming with unknown life], Information [Danish newspaper], 7 August 2000.

4 Ibid.

5 Ezio Manzini, Artefacts. Vers une nouvelle écologie de l'environnement artificiel. Les Essais. Paris: Centre Georges Pompidou 1991, pp. 44, 52. Warm thanks to Peter Murphy for comments on earlier versions of this draft, and for drawing my attention to the term ‘artifice’.

6 Cf. Cornelius Castoriadis, The Imaginary Institution of Society. Cambridge: Polity Press 1987.

7 Robin Engelhardt and Christian Madsbjerg, “90 internetsider pr. verdensborger” [90 web pages per global citizen], Information [Danish newspaper], 7 August 2000.

8 Cf. John L. Casti: Complexification. New York: HarperPerennial 1994; John L. Casti: Would-Be Worlds, New York: John Wiley & Sons, Inc. 1997; Edgar Morin, La méthode. I. La Nature de la Nature. Éditions du Seuil 1977; Jean-Pierre Dupuy, The Mechanization of the Mind. On the Origins of Cognitive Science. Princeton: Princeton University Press 2000.

9 Otl Aicher, “the world as design,” in: Otl Aicher, the world as design. Berlin: ernst & sohn verlag für architektur und technische wissenschaften in co-operation with the otl aicher archives 1994.

10 Ibid, p.179.

11 Ibid.

12 Ibid., pp.179-180.

13 Ibid., p.181.

14 Dupuy, Op.cit., p.27ff, p.28.

15 Cf. Herbert Simon, The Sciences of the Artificial. Third Edition. Cambridge Mass.: The MIT Press 1996.

16 Aicher, Op.cit., p.182.

17 Dupuy, Op.cit., p. 140ff.

18 Casti, Op.cit.

19 Cf. Philippe Breton, À l’image de l’homme. Du Golem aux créatures virtuelles. Paris: Éditions du Seuil 1995.

20 Cf. for instance, Gillian Robinson and John Rundell (eds), Rethinking Imagination. Culture and Creativity. London and New York: Routledge 1994

21 Friedrich von Hayek, quoted in Dupuy, Op.cit., p.157.

22 Paul Dumouchel & Jean-Pierre Dupuy (dir.) Colloque de Cerisy: L'auto-organisation. De la physique au politique. Paris: Éditions du Seuil 1983

23 Ibid., p.17.

24 Ibid., p.21 ff.

25 Castoriadis, The Imaginary Institution of Society, p.131.

26 Cornelius Castoriadis, “The Imaginary: Creation in the Social-Historical Domain,” in Castoriadis, World in Fragments: Writings on Politics, Society, Psychoanalysis, and the Imagination (David Ames Curtis (ed.)). Stanford: Stanford University Press 1997, p.13.

27 Cornelius Castoriadis, “Radical Imagination and the Social Instituting Imaginary,” in Castoriadis, The Castoriadis Reader (David Ames Curtis (ed)). Oxford: Blackwell Publishers Ltd. 1997, p.336.

28 Castoriadis, “Radical Imagination and the Social Instituting Imaginary,” p.319ff, p.321.

29 Ibid., p.335-336.

30 Wulf R. Halbach, Interfaces. Medien- und Kommunikationstheoretische Elemente einer Interface-Theorie. München: Wilhelm Fink Verlag 1994, p.140ff.

31 Ibid., p.144f

32 Ibid., p.144f, 151ff.

33 Ibid., p.143.

34 Ibid., p.151.

35 Ibid., p.152.

36 Cf. for instance, Janet Abbate, Inventing the Internet. Cambridge Mass.: The MIT Press 1999.

37 Cf. Lucy Suchman, Plans and situated actions: the problem of human-machine communication. Cambridge: Cambridge University Press 1987.

38 Cf. Mark Weiser, “Ubiquitous Computing” (August 16, 1993). http://www.ubiq.com/hypertext/weiser/UbiCompHotTopics.html (accessed 2004).



