Notes for lecture on Computer Art Practices Algorists and Algorithmic Art



Part of the reason for this is that he faces a problem with existing plotters: they have been made obsolete by large-format inkjets, which (as Hébert sees it) are faster but produce lower-quality output. The plotter’s main clients – mainly CAD users, who needed large-scale plans and drawings – now value the speed of inkjets and care little for the durability of plotter prints. Supplies of plotters, spare parts and pens are steadily dwindling. Hébert said he had bought some of the last remaining stocks of pens for his HP plotter, and that he has no backup for his pencil plotter (a rare item in itself).
Hébert has also considered a possible, slightly interactive improvement to Ulysses, which would be to use imaging software and a mounted camera to recognise the position of rocks on the surface of the sandbox. Currently, these are added after the design has been completed, but if their position could be recognised, the ball could modify its course to avoid them, hence making the design dependent on their positioning.
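The course-modification Hébert envisages is, at heart, a small path-planning problem. As an illustration only (this is not Hébert's code; the function, names and parameters are all invented for the sketch), one simple approach deflects each waypoint of a planned path away from any detected rock:

```python
# Hypothetical sketch of the rock-avoidance idea: given rock positions
# detected by a camera, deflect a planned path so the ball keeps a safe
# distance from each rock. Illustrative assumptions throughout.
import math

def avoid_rocks(path, rocks, clearance=1.0):
    """Push each waypoint radially away from any rock it comes too close to."""
    adjusted = []
    for x, y in path:
        for rx, ry in rocks:
            dx, dy = x - rx, y - ry
            d = math.hypot(dx, dy)
            if 0 < d < clearance:
                # move the point out to the clearance radius
                scale = clearance / d
                x, y = rx + dx * scale, ry + dy * scale
        adjusted.append((x, y))
    return adjusted
```

A real implementation would also have to smooth the deflected path so the ball's motion stays continuous, but the principle is the same: the rocks' positions become inputs that reshape the design.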
Outside plotters and sand drawings, Hébert also makes images with cellular automata, which follow their own paths to create dense stochastic images, and symmetry studies, which he prints on a laser printer or Iris, though as mentioned he does not attach much value to this output, seeing it as a sketch compared to the one-off plotter drawing.
Hébert’s partly physical, partly digital work could also be seen as a metaphor for Computer Art as a whole, touching on Art and Science; partly a visual art, partly an exercise in programming; using a form of composition to generate new shapes through the computer.
This intimate involvement of the machine in the creative act might be abhorrent to some artists, who prefer a tactile medium to these mutable shadows. Seen on a screen, the digital image fascinates us yet we know we cannot touch it, cannot appreciate it except through this glass intermediary.
But when we see the physical traces of this process, held in stasis for our inspection, then we may begin to appreciate the subtleties of Computer Art. For Hébert, the computer is effectively a channel for his physicalised art.
Finally, Hébert’s exhibitions in association with the Computing Commons Laboratory at Arizona State University point to ways that digital artists may find outlets for their work. Instead of striving for years to convince existing galleries, or working as academics in order to support their interests, digital artists should be exploring partnerships with similarly-minded organisations and individuals in the technology industry.
This would certainly include academic bodies, but other sources of funding would also arise. The more that digital artists strike out in new directions, the more likely they are to move away from traditional art models – even art prizes and competitions – and find new ways of distributing their art and supporting themselves.

Paul Brown
During my 35-year career as an artist my principal concern has been the systematic exploration of surface. Since 1974 my main tool has been the computational and generative process. I have established a significant international reputation in this field of work and was recently described by Mitchell Whitelaw as "... one of the ... pioneers of a-life art" (Metacreation – Art and Artificial Life, MIT Press, 2004, pp. 146, 148-152).

My work is based in a field of computational science called Cellular Automata, or CAs. These are simple systems that can propagate themselves over time. CAs are part of the origins of the discipline known as Artificial Life, or A-life. I have been interested in CAs and their relationship to tiling and symmetry systems since the 1960s. Over the past 30 years I have applied these processes to time-based artworks, prints on paper and large-scale public artworks.
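As a concrete illustration of the kind of self-propagating system Brown describes (a generic textbook example, not Brown's own software), a one-dimensional cellular automaton rewrites a row of cells generation by generation. Rule 90, where each cell becomes the XOR of its two neighbours, grows a symmetric, Sierpinski-like triangle from a single seed cell, hinting at the link between CAs and the tiling and symmetry systems mentioned above:

```python
# Minimal one-dimensional cellular automaton (rule 90), shown only as an
# illustration of the general technique, not of Brown's actual systems.

def step_rule90(row):
    """One generation: each cell becomes the XOR of its two neighbours (wrapping)."""
    n = len(row)
    return [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]

def run(width=31, generations=15):
    """Grow the pattern from a single seed cell in the centre."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(generations):
        row = step_rule90(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Despite the triviality of the rule, the output is a structured, symmetrical pattern no one "drew" directly, which is exactly the quality that makes CAs attractive as a generative medium.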


In my artwork I attempt to create venues that encourage the participant to engage both visually and physically with the work. Because my work emerges (in the computational sense) from game-like processes, I include elements of play in order to capture and sustain the participant's attention.

Rather than being constructed or designed, these works "evolve". I look forward to a future where computational processes like the ones that I build will themselves make artworks without the need for human intervention. The creation of such processes is something that has always fascinated me.



4^15 – Studies in Perception
kinetic painting, size variable, 2006

Unlike much of my recent work, which uses artificial life agents to “drive” the action, in 4^15 everything – the kinetic movement, colour attributes and so on – is completely random. The emphasis of the work is on human cognition. It explores the ability of the visual cortex and brain to find serendipitous and “meaningful” associations in what is merely “well dressed noise”.

This work continues my 40-year interest in art and technology. A computational entity is used to drive the work and ensure that its long-term behaviour is both interesting and non-repetitive. In choosing to describe these works as Kinetic Painting I am acknowledging my roots in the European kinetic, conceptual and systems movements, and also paying homage to the US/French artist Frank Malina, who first used the term to describe his electro-mechanical works in the mid-1950s.




Daniel Brown

Multimedia Designer (1977-)

Designing Modern Britain - Design Museum

Until 26 November 2006

One of the pioneering generation of self-taught web designers, Daniel Brown is noted for the humour and playfulness of his interactive animations, often inspired by nature.
Like many web designers Daniel Brown discovered the medium - and drew his early inspiration - from the video games he had played since childhood. He then sought to refine the frenzied, sometimes brutal aesthetics of those games by creating interactive images for the web which would have the same sensory effect on the user as listening to a beautiful piece of music.
Born in Liverpool in 1977, Brown grew up among computers, both by playing video games and watching his father at work as a pioneer of computer graphics. After his father left Liverpool, a family friend the late Roy Stringer, who worked in the Learning Methods Unit at the city's John Moores University, allowed Brown to use the computers there.
The Learning Methods Unit was then developing early interactive learning tools on CD-ROM and the internet. After Brown left school, Stringer gave him a job at Amaze, the design company spun out of the Unit. Brown also developed a personal site, www.noodlebox.com. When it was launched in 1997, noodlebox introduced a fluid playfulness to web design, in contrast to the pragmatic, often sterile visual style which then dominated the medium.
Now based in London where he works for the SHOWstudio web site, Daniel Brown has harnessed subsequent advances in technology to imbue new work - such as Bits and Pieces - with light, texture and the illusion of three-dimensionality. Often inspired by nature, his projects have a spontaneity and freshness even when he revisits old themes like Flowers and Butterflies. His goal in his interactive work is to elicit an instinctive response from the user by making them forget the technology.
Daniel Brown has replaced noodlebox with Play/Create, a site on which he posts his experimental work and that of other designers. After participating in the Design Museum's Web Wizards exhibition in 2002, he featured in the Great Brits, the survey of new British design organised by the Design Museum and British Council in Paul Smith's Milan headquarters during the April 2003 Milan Furniture Fair. Daniel Brown won the Design Museum's Designer of the Year prize in 2004.
See Daniel Brown's work at:

play-create.com



showstudio.com
Q. How would you describe what you do?
A. Many of the pieces are investigations into unique immersive interactive experiences. The idea was to use computer game-like technology, but to use it for more artistic and aesthetic purposes. Whereas games mostly deal with superficial male fantasies, my work aims to subtly portray emotion, beauty and aesthetics.
Q. When did you first become interested in computers? And how did that interest translate into digital art and design?
A. I mainly became involved with computers through playing games as a child. Later my imagination was inspired by seeing the work my father and his contemporaries were doing in the early days of computer graphics. I was practically born with computers around me and I think that gave me a different perception of them: not as tools that made existing tasks easier, but as fundamentally new media.
Q. You have often cited Roy Stringer of John Moores University's Learning Methods Unit as a mentor, how did he influence you and your work?
A. While I looked to my father for inspiration in my younger years, by the age of ten, he was abroad which meant I had no access to the more powerful computers then becoming available, such as the Apple Macintosh. Roy, as a family friend, invited me to use the computers in his office on evenings and weekends. It seems a very trivial gesture now but, back then, a basic Apple cost £7,000. Later, as I became more experienced at using the machines, Roy spotted potential in my work and became a mentor to me. Roy taught me two things, which almost seem contradictory, but in doing so ultimately defined how I work. One was to challenge everything, to investigate from the ground up and seek new and better ways of achieving things. The other was to make sure what I created was not simply pandering to superficiality. That was really necessary.
Q. The Amaze projects of the late 1990s, such as the MTV Music Mixer and Immunology, are now seen as landmarks in the development of digital media technology. What was your involvement with those projects and other early work with Amaze?
A. The MTV mixer was created in the early days of Amaze Research. Back then, Roy led a team experimenting with new user interfaces and methods of navigating information space. One of the ideas that had sprung out of Roy's work was the Navihedron, a device for authoring and navigating non-linear closed systems. We were playing with them in all kinds of ways. One of the nice examples we came up with for demonstrating the principle was to use sounds rather than abstract labels, and from that came the MTV mixer. Immunology, more formally, was one of the first products that, I believe, was truly authored for non-linear media, rather than just taking an indexed version of the book onto CD-ROM. The author of the book on which it is based (also called Immunology, by David Male) actually worked with us to create fifteen new models of the information, so it truly gave the end users the ability to work through it via whichever path suited them.
Q. Similarly, when you unveiled Noodlebox in 1997, what was your concept of the site? And did you have any sense of how influential it would be?
A. At the time, I had the feeling that web sites were still perceived as page-based, almost brochure-like experiences. Back then, it was claimed by designers that this was a limitation of the technology; but while I was inclined to agree that the technology leaned in that direction, it was being overlooked that other things were possible. I wanted to create an experience which was more engaging, more like a computer game to use, and more like a product in perception, like a CD of music that one can actually hold. The same way that people talk about music CDs, I wanted people to see Noodlebox as a piece of content rather than as a website.
Q. What is the inspiration for your work?
A. Whereas in the early days I took inspiration from the technology movement - a techno-Japanese-Bladerunner-style idea of the future - more recently I have looked to non-digital things for inspiration: film, fashion, photography and nature. I think that a lot of digital art is self-conscious. The medium and subject material are both "digital". I am trying to look past the notion of digital by taking it for granted. I see my work as more like an interactive short film or an interactive music video than as being "digital". I like the notion that people look at my work in the same way as they admire music: purely instinctively, rather than objectively. It should be entertaining, not intellectual.
Q. How conscious are you of the work of other digital artists and designers? Which have been most influential over you?
A. While I certainly support other online digital artists for stimulating the debate about what digital art and design is, I have never particularly looked to them for inspiration. In my youth, computer games meant much more to me. I even would go as far as to say that many computer games are art. More recently, the beauty and craftsmanship of fashion and photography have inspired the delicacy of my work. However, two people who have always inspired me are Golan Levin from MIT's Computational Aesthetics Group and James Tindall of www.thesquarerootof-1.com, who is a longtime friend. Although not strictly digital, the artist Bill Viola's Flat Screen work has inspired me because of the way it is presented as an enclosed product hanging on the wall: literally as a virtual window.
Q. Unlike many digital artists and designers, your education was vocational - in that you learnt on the job at the LMU and Amaze, rather than by studying at college - what are the advantages of being self-taught? And the disadvantages?
A. I think the formal education of digital art and design has some issues surrounding it. Essentially, students are required to learn two things at once: their craft and the use of the computer. I have rarely seen this handled well. Courses either focus on computer skills and don't fully investigate the issues surrounding art and design, or they focus on art and design with some supplementary "How to make a website" programme attached. Invariably this leads to rather weak results, and the only really good students I've seen coming out of these courses have obviously worked on their own initiative. Furthermore, I think that having learnt vocationally has given me a much more pragmatic instinct. Rather than calling upon established practice, Roy taught me how to look at things from all angles and to solve problems from first principles. I am a dedicated believer in lifelong learning, in always setting myself new things to learn; whereas I fear universities give their students the idea that they leave with all the skills they'll ever need, and set practices that don't lend themselves to true creativity.
Q. What do you see as the main challenges facing digital artists and designers?
A. I think the main challenge is recognition of the value of one's work. I recently had someone complain about my work, claiming it was bad because it wasn't obvious what the web site was trying to sell. I replied to this person politely, even advising him that it was a piece of art and wasn't a business, and got the standard "You can't take criticism" in response! My point is this: what that person did was the equivalent of saying that a music CD was bad because it was useless at explaining economics. It's not meant to! Music is now a long-established medium, which people understand how to appreciate. I don't think new media - as in what we are doing - yet has a language to be understood in such a way.
Q. How did joining SHOWstudio influence your work?
A. Joining SHOWstudio was a confirmation of a general direction I had been going in for a number of years. In the early era of CD-ROM multimedia and the web, a lot of the good work was made by people with both design and technology skills. A very cottage industry. And this was understandable: after all, the only other option was to have work made by technicians that had no design sense, or technology projects ‘art directed’ by a designer still thinking in a print or video medium. However, I felt there were limitations to that - I began to envisage projects with great audio, great photography, great animation and great illustration. But I was not a great musician, photographer, animator or illustrator. So by that point I felt that in order to progress, I needed to specialise in interactivity, and bridge that gap between technology and artists in other fields. SHOWstudio came at a perfect time for that; it was exactly the kind of agenda Nick Knight (the photographer who founded it) was looking to pursue. Besides, apart from new media, my other passion was fashion, fashion branding and culture. So from that point on, I saw new media as a collaborative medium and not just an isolated field/industry as it had been.
Q. Can you describe the evolution of a recent project?
A. Overall, the interesting thing for me has been the transition of the web audience from a technically minded audience to a mass, populist one. My work has ironically shifted away from being obsessed with high technology and being technically advanced. The audience now wouldn’t necessarily even notice the amount of effort put into the technology behind a piece, and it still surprises me how people enjoy the simplest things in my work; the colour, the sound, the humour, the emotion. It’s those things I have focused on more recently. I am prepared to forego being cutting edge if necessary.
Q. How have advances in technology influenced your work?
A. The last two years have seen fantastic change. Camera phones and faster home computers means people can now engage and interact in visual, expressive ways - even a year ago the limit to online mass participation was text-input. The integration of powerful computers into the home environment through PlayStation, Microsoft and so forth has created an increasingly adult, cultured audience for a new interactive entertainment medium; exactly the people at whom the programming of SHOWstudio and play-create is aimed.
Q. Where do your find inspiration? From the work of other multimedia designers and artists? Or different fields?
A. Mostly different fields. Visually, the fashion industry has always produced the most stimulating imagery to me. You’ll find my shelves filled with copies of Arena+, i-D, Jalouse, Pop and V. But there’s a flatness to it, a one-dimensionality of being stuck in a moment in time. People like Chris Cunningham and Michel Gondry have gone on to produce music video and time-based work with as strong an engagement. It is these things, combined with interactive technology, that point to where I see Play/Create going.
Q. What are you working on now?
A. Wrapping up several Play/Create and Software As Furniture pieces.

And two games, which are intended to be interactive music videos which parody the idea of what a computer game is. One of the games toys with the reality that playing computer games is merely physically making choreographed patterns on the screen despite any overlaid violent storyline. It looks at what success and challenge mean: when a game gets progressively harder as the user ‘succeeds’. On the Software As Furniture front, a long overdue interactive Flowers piece will continue the series.


© Design Museum, 2006

Casey Reas and Ben Fry – Processing

http://www.rhizome.org/editorial/2960

Created by Casey Reas and Ben Fry, Processing is an open source programming language and environment for people who want to program images, animation, and interactions. It is used by students, artists, designers, researchers, and hobbyists for learning, prototyping, and production. It was created to teach the fundamentals of computer programming within a visual context and to serve as a software sketchbook and professional production tool. Processing is an alternative to proprietary software tools in the same domain.


I first discovered Processing in 2003 at ITP while exploring different options for creating a set of tutorials about generative algorithms. We quickly realized that Processing could transform our approach to teaching programming and have adopted it as the language learned by all incoming students. I’m thrilled to have this chance to talk to Casey and Ben a little about the origins of Processing, their philosophy, work, and plans for the future. – Daniel Shiffman
How did you each discover computation? What was the first program you wrote and in what language?
Casey Reas: I was very lucky that my dad brought an Apple II into the house in the 1980s. These early home computers encouraged programming and there were books on programming in Basic written for kids. I don't remember if I started with Basic or Logo, but I learned a little with both. I hit a wall and I wasn't motivated to learn more. (I love playing video games on the computer more than writing my own small programs.) I was introduced to Lingo when I was in college, but I only wrote simple scripts for moving back and forth in the timeline and turning on and off sprites. When I shifted from working in print to the Web in 1995, I fell in love with the potential for making and writing software. I engaged fully with C in 1998 when I took classes at NYU extension, something clicked, and I started to really learn for the first time. I quickly moved on to C++, then later to Java and Perl at MIT.
Ben Fry: I started with an Apple II+ and an IBM PC that my Dad brought home from the university, though I can't remember which was first. I learned BASIC on each, and that evolved into other machines (a whole string of Macs starting with the original 128K version) and languages (Pascal, C, C++, PostScript, Perl, Java...) The first program of consequence was a stock market game (ah, the embarrassment) that I sold for $250 when I was in seventh grade.




Image: Ben Fry, On the Origin of Species: The Preservation of Favoured Traces, 2009 (Still)
Tell us a little bit about the origins of Processing. Where and when did you have your first conversation about creating it?
CR: It was sometime in June 2001, as I was finishing up at MIT. We made a list of the basic specs for the environment and drawing functions. It was one 8 1⁄2 x 11 inch typed page. By the fall, Ben had something working, and the first workshop took place in Japan in August 2001.
BF: Yeah, revisions 0003 and 0005 were used for a workshop at Musashino Art University (MUSABI). I spent the first part of the week teaching Design By Numbers and then some of the students tried “Proce55ing”.
When looking at other programming environments geared towards visuals (Design by Numbers, Logo, etc.) what kinds of things did you want to emulate and what did you want to do differently?
CR: For us, the big idea of Processing is the tight integration of a programming environment, a programming language, a community-minded and open-source mentality, and a focus on learning -- created by artists and designers, for their own community. The focus is on writing software within the context of the visual arts. Many other programming environments embodied some of these aspects, but not all.

William Latham
William Latham started working with computers in 1984 after completing his degree in Fine Art. Latham’s contribution to the field, following the ideas of Karl Sims, is that he has taken the idea of evolving forms and freely developed it into a distinctive artistic style, which incorporates natural and artificial elements. Latham is also interesting for having gradually moved away from straightforward art and into computer games, which incorporate ideas and code taken from his earlier art work.
Latham was interested in this evolution of form even before he discovered computers. Using a sequence of rules for the transformation of shapes, he sketched out huge canvases of multiplying, changing forms.
[Plate XXXIII: Latham’s drawings and their computer-generated counterparts]
The logic and consistency of Latham’s possible worlds arises from, as much as anything else, his concept of an evolutionary approach to the making of sculpture. The complexity and vitality of the forms he devises comes about from the step by step accretion of “operations” on simple initial shapes such as cones, spheres or toruses. [Lansdown: “The Possible Worlds of William Latham”]
Lansdown believes that Latham and others working with him (e.g. Stephen Todd and Mike King) have shown us “another form of sculpture”. This is derived from the illusory yet real appearance of his works: their seeming materiality which is defeated by the obvious departures from our physical reality. Even this system could produce unexpected results:
Simple as the rules of FormSynth were, they seemed to have a creative power of their own. Even though Latham had created and applied the rules, they produced imaginative forms he had not expected. [Todd and Latham 1992:2]
In 1987 Latham was appointed Artist (Research Fellow) to IBM at Winchester, and here he began working with Stephen Todd on a system called Form Build. This built on Form Synth and allowed simple construction rules such as bulging and hollowing objects. As he worked with this, building up a library of form, Latham realised that some of his long sequences could be condensed into new rules, such as those for growing tendrils and horns.
Later on, Latham began to breed these forms together, by identifying their basic components as “genes”, and allowing these to be recombined and modified to produce these trees of form. As he says: “Mutator derives its methods from processes of nature, and was partly inspired by a simulation of natural selection”. [Todd and Latham 1992]
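The breeding process described here can be sketched in miniature. The following is a toy reconstruction under loose assumptions, not Todd and Latham's actual Mutator: a form is reduced to a list of numeric "genes", mutation perturbs them, and the artist, acting as gardener, repeatedly selects which offspring to breed from:

```python
# Toy sketch of the Mutator idea: genes are perturbed at random and the
# artist chooses the next parent. All names and numbers are illustrative.
import random

def mutate(genes, strength=0.2, rng=random):
    """Return a child genome: each gene nudged by a small random amount."""
    return [g + rng.uniform(-strength, strength) for g in genes]

def breed_generation(parent, offspring=9, rng=random):
    """Produce a litter of mutated children for the artist to choose from."""
    return [mutate(parent, rng=rng) for _ in range(offspring)]

def evolve(parent, choose, generations=5, rng=random):
    """Repeatedly breed, letting `choose` (the artist) pick each new parent."""
    for _ in range(generations):
        parent = choose(breed_generation(parent, rng=rng))
    return parent

if __name__ == "__main__":
    # Stand-in "artist": prefer the child whose first gene is largest.
    final = evolve([1.0, 0.5, 2.0],
                   choose=lambda kids: max(kids, key=lambda k: k[0]))
    print(final)
```

In the real system the genes parameterise three-dimensional form-growing operations and the selection is made by eye, but the loop of mutate, display, select is the essential structure.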
This system has an overall appearance that could be called “organic”, and seemingly aims toward natural, yet fantastic forms. So a stylistic decision by Latham was made at the level of the program itself, ensuring that all images bear his imprint, to a degree. The aesthetic of these images, whilst inspired by nature and by science-fiction, remains very much their own. These are forms that would have been inconceivable without the computer to perform all the possible changes, transformations and developments that Latham foresaw.
But the artist has to remain as director. Latham sees the artist’s role as similar to a gardener, selecting and changing the forms, guiding their development and arriving at images which were previously inconceivable. Even Latham could not foresee all the possible outcomes. Although using commercial software can produce images quite unlike pre-digital techniques, there is still some continuity between them.
Completely new artforms may only arise when the artist actually programs the computer themselves. In this way, new aesthetics can develop, as Latham seems to prove. Because he had been interested in evolving forms even before he used computers, he was able to apply the most distinctive quality of the computer: the modelling of dynamic processes.
These artistic systems are not wholly deterministic, running an image through pre-set parameters until it reaches perfection. Indeed, Latham realised early on that the most interesting outcomes of his program were quite unforeseen by him: his evolutionary program could arrive at unexpected conclusions. Even if an artist programs the computer from the start, there will always be an important element of mystery in the working of the software. [returns to earlier theme on unpredictability – develop previous themes]
The results of an operation which is open-ended but circumscribed by the programmer can still be unpredictable. Jean-Pierre Hébert, though valuing the control his software grants him over the image creation process, sees such indeterminacies as an essential part of the final work. Of course, he uses the computer as a controller to inscribe an image on physical materials – and it is in this transition from digital instructions to physical form where the most interesting chance elements can occur.
Such quirks render the computer less mechanistic (and predictable) and more ‘artistic’, because the outcome of certain operations cannot always be foreseen. This unpredictability can be harnessed in the same way as the chemical reactions of pigments, or the densities of stone. In other words, an artist develops a feel for its working and gradually incorporates its idiosyncrasies into their work, which itself changes subtly or overtly to accommodate these properties.
This is evident in Formsynth and Mutator, where Latham’s choice of operations performed on the initial shapes guided their eventual appearance. Latham’s stylistic involvement was, in a sense, pre-visual; it affected the starting point and development of all images generated through the program rather than just a single artwork. Although it was a modification of the program’s underlying code, it had visual consequences because in this way Latham determined the visual environment in which his shapes could develop. Latham compares the artist to a gardener, guiding the growth of a plant rather than creating an image from scratch. This is itself a new development for art. [Todd and Latham 1992:12]
Latham’s Organic Art images are the product of evolutionary processes, and thus indirect products of his artistic vision. “Indirect” in the sense that Latham developed the program to evolve shapes along particular visual lines, but its continued operation is not dependent on his intervention. Like Cohen’s AARON, the widely distributed Organic Art software could continue to create Lathamesque images long after his demise, with varying inputs and changes from computer users. The encoding of his evolutionary process in software allowed him to make it portable, and then distribute it widely as PC software. Again, this widely distributed software may produce pictures not directly conceived by the artist, but inherent within the parameters of the software. Latham is responsible for assembling these elements according to his vision and requirements, but the final image is the result of the software’s own working out of these possibilities. Hébert has drawn on
the wide field of printing techniques and creative opportunities that one [finds] in a good print studio [and] the wild opportunities happening in artists' collaborations.
Unlike AARON, however, with its complex relation to Cohen’s creative input, Latham’s software has a straightforward input procedure and generates images from his initial input parameters. Harold Cohen’s AARON is not so straightforwardly instructed; it seemingly derives its own decisions about what to draw from its understanding of art. Cohen sees his current work as a “collaboration” with AARON and is confident the software will be producing his art long after his death.
There are two different forces at work here. Firstly, there is the artist’s control exercised by writing or mastering the appropriate software to create images. Secondly, there is the serendipitous aspect of accidental discovery inherent in an open-ended program where absolute control yields to experimentation and chance discoveries. For instance, in William Latham’s work, the evolutionary nature is the result of a programmer’s control in setting up the initial conditions, then exercising further choice over the outcomes of these experiments.
On the other hand, the Algoristic artist Roman Verostko, who uses plotters to realise his images, sees imperfections in the printing as an impediment to realising his art. He values exactitude in execution.
Harold Cohen and AARON
Whereas Hébert has moved towards a physical realisation for his art, Harold Cohen has in some respects moved away from it. Cohen was a noted abstract painter during the 1960s, but following a retrospective at the Whitechapel in 1967 he became disenchanted with the British art scene and moved to America to teach at UC San Diego. There, he was introduced to computers and gradually developed the program AARON, which has been the focus of his artistic activity for over twenty-five years. AARON grew from a fascination with the process of line-making and how enclosed forms, or shapes, are drawn on paper; initially Cohen did not approach his programming as an artistic activity, but rather as disciplined research into the grammar of conceptual space. He first saw AARON as a program that emulated what humans did, and later as an autonomous entity. As the program’s complexity increased, Cohen added more forms to its repertoire, eventually arriving at human figures and giving the program an increasingly sophisticated understanding of their positioning in space. Part of its fascination lay in the way AARON’s images looked subjectively like sketches:
The earliest versions of AARON could do very little more than to distinguish between figure and ground, closed forms and open forms, and to perform various simple manipulations on those structures. […] AARON's drawings had a distinctly freehand, ad-hoc look quite at odds with popular assumptions at that time about machines […] to judge from the responses of art-museum audiences, AARON's marks quite evidently functioned as images. [Cohen 1995]
The process of AARON’s picture-making is outlined by Ed Burton. The program receives its knowledge of the world indirectly, in that Cohen encodes the structures it uses to develop images: the human body as described in AARON, for instance, is composed of parts in relative sizes, with a range of movement and posture and information on how they fit together. At a higher level, AARON can compose the objects in a scene so that they are positioned relative to each other. Burton emphasises the tree-like structure of this process, and notes that this knowledge of structure is combined with AARON’s “procedural knowledge” of drawing to arrive at a form. [Mealing ed. 1997:59]
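Burton’s tree-like structure of parts in relative sizes might be pictured, very loosely, as follows. Everything here (the part names, the proportions, the traversal) is a hypothetical illustration of the general idea, not AARON’s actual internal representation:

```python
# Each part: (name, size relative to its parent, children).
FIGURE = ("body", 1.0, [
    ("head", 0.25, []),
    ("arm", 0.4, [("hand", 0.3, [])]),
    ("leg", 0.5, [("foot", 0.25, [])]),
])

def absolute_sizes(part, parent_size=1.0):
    """Walk the part tree, converting relative proportions to absolute sizes."""
    name, rel, children = part
    size = parent_size * rel
    sizes = {name: round(size, 3)}
    for child in children:
        sizes.update(absolute_sizes(child, size))
    return sizes

print(absolute_sizes(FIGURE))
# e.g. the hand ends up at 0.4 * 0.3 = 0.12 of the whole figure
```

The tree encodes structural knowledge only; in Burton’s account this would then be combined with separate procedural knowledge of how to draw each part.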
[Plate XXVII: AARON output]
The relation of rule to result is not as straightforward here as in, say, Helaman Ferguson’s sculptures, which are solutions to specific equations; nor as in Hébert’s algorist pieces. Cohen’s rules do not prescribe the exact form but rather its overall outcome: a series of branching “ifs” where results and outcomes spawn yet more questions. His approach is heuristic and systems-based rather than strictly mathematical. In this way, it is quite conceivable that no two AARON pictures will ever be alike, yet they all share a similar visual style. To what extent is this style the outcome of Cohen’s direct programming, and to what extent is it somehow inherent in the picture-descriptions, in ways its creator may not have anticipated? This question reflects Cohen’s thoughts in a 1973 essay, written at the very beginning of his program:
Human art-making […] is characterized by a fluently changing pattern of decisions based on the artist’s awareness of the work in progress [McCorduck 1991:45]
AARON could be described as an independently creative extension of Cohen’s artistic method, rendered into software and executed by the computer. Insofar as AARON can create pictures, it does so in a style that has emerged over the course of Cohen’s programming and development of AARON’s technique. In its development of Artificial Intelligence theories, AARON also shows how a relatively simple set of rules can lead to complex and intelligible results; an artistic emergence remarked upon by McCorduck:
In AARON, a central idea of artificial intelligence is exemplified: the program is able to generate the illusion of a complete and coherent set of images out of a comparatively simple and sparse lower representation. [McCorduck 1991:28]
Of particular interest is Cohen’s notion that the AARON pictures represent a “collaboration” between himself and the program. The forms incorporated into its core have evolved from his understanding of the art-making process, and their structuring into higher forms and scenes also represents his input. Cohen’s agency is at the instructional end, but rather than issue specifications (or orders, like Moholy-Nagy) he gives outlines: patterns to be incorporated as the program sees fit. (Must ask if it operates on an “evolutionary” algorithm of fitness.) This is what McCorduck refers to when she sets forth a few of the rules; yet the program’s size implies these rules are exhaustively described. AARON’s room for manoeuvre – its “creativity”, if you will – comes from the application of these rules in visual form.
Cohen posed me the example of facial construction: the nose can vary within a plausible range, as can the eyes, and so on; all these considerations taken together still allow an infinite range of possible faces. If this plausibility factor could be deduced by the machine, it would move somewhat towards autonomy. When the program is about to be executed, Cohen changes the variables within the code and the program begins to draw, allowing space for overlapping parts of the picture in its initial image block. The problem, however, is that owing to the size of the program, modifications to one part can seriously affect the others.
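Cohen’s point about plausible ranges can be illustrated with a toy sampler: each feature varies only within bounds that keep the face believable, yet the combinations are effectively inexhaustible. The feature names and numerical ranges below are invented for the illustration and do not come from AARON:

```python
import random

# Hypothetical plausibility bounds for a few facial parameters.
PLAUSIBLE = {
    "nose_length": (0.8, 1.3),
    "eye_spacing": (0.9, 1.2),
    "mouth_width": (0.7, 1.1),
}

def random_face():
    """Sample each feature uniformly within its plausible range."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PLAUSIBLE.items()}

face = random_face()
# Every sampled face stays within its plausibility bounds by construction.
assert all(PLAUSIBLE[k][0] <= v <= PLAUSIBLE[k][1] for k, v in face.items())
print(sorted(face))
```

Here the bounds are fixed by the programmer, which is exactly Cohen’s caveat: the machine would move towards autonomy only if it could deduce the plausible ranges itself rather than have them encoded for it.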
This was graphically demonstrated when Cohen modified one command to fill in the irises in the eyes of figures drawn by AARON: the net result was that the program drew up to this point, then crashed. It would seem the modification necessitated changes elsewhere to make it work. Changing these other parameters required Cohen to work on parts of the program he had not seen for up to a year: with so many lines of code, he had to fall back on his own knowledge of its workings and think back through his previous modifications and changes. In this way, the program is an integral part of him and of his art, and it would be hard to separate the two: AARON is an external manifestation of how he conceives of and executes his art.
Although the resultant picture is in no way predictable, it does contain certain features that mark it out as an “AARON” work, many of which have emerged in the course of Cohen’s programming and modification. Indeed, Cohen goes so far as to say the program will continue to turn out “originals” long after his death, enabling him to have a posthumous show of new works! However, Larry Cuba rather derisively termed AARON a “Harold Cohen simulator”, pointing out that in his opinion Cohen was only modelling his own internalised drawing process; any thought of its universal applicability should be regarded sceptically. In fairness, Cohen is quick to acknowledge this aspect of his work. So, is AARON simply a high-level mechanism with a high degree of seeming artistry, or does it span the gap to become a truly creative computer artist that can produce visually meaningful (as opposed to intentionally random) results?
The output of AARON’s pictures is interesting in itself: it began with pictures drawn on paper by plotters and a turtle, and has evolved to include oil paints along with a knowledge of colour. Cohen stopped using the painting machines because of their engaging nature and the way people requested pictures of them, confusing the external manifestation with the AARON software itself. The turtle he once used for drawing was similarly involving. The machines’ drawing process had the feel of a performance, and people were delighted when the painting machine washed up its own cups. Cuba speculates that the appeal of Cohen’s exhibitions lay in seeing the plotter creating the drawing in the gallery.
References to the “robot” artist annoyed Cohen because its physical existence is beside the point. It was only intended as an exhibition device, not as the focus of the art. AARON as a program embodies a process, not a physical machine. In its latest incarnation, however, a version of AARON as a screensaver for the PC is being distributed by Ray Kurzweil as a trial download from his website.
Cohen contends that by offering AARON across the Internet, he is not only extending his earlier aim of enabling everyone to own the means to make art; he has also, in a sense, cloned the artist, by reproducing an art-making process. Instead of mass-producing ART, he has mass-produced the artist. Every instance of the AARON program downloaded from Kurzweil’s website will make different pictures indefinitely. True, all the pictures are evidently results of the same process and share a similar feel, but the fact that they have any observable style is itself a consequence of Cohen’s approach. Cohen thinks the notion of authorship carries outdated implications that need updating. AARON’s release as a software package has wiped out any value attaching to the program’s uniqueness: now anyone can have a copy, and perhaps value remains only in plottings and prints from Cohen’s own machine, since those produced by other people will tend not to be placed in the same subjective category.
On the other hand, Cohen’s collaborations with AARON also take the form of large-scale pictures, generated by the program and projected onto canvas, which Cohen himself works up and paints. This is a more literal, physical interpretation of “collaboration”, going a stage further than the painting machine that once rendered AARON’s art. Cohen sees the canvases as more permanent traces, or at least longer-term traces, of AARON’s activity than the drawings he used to make with the plotter or the ephemeral screen-based results of the AARON screensaver. He maintains, however, that the central importance of both activities – the canvas and the screen image – is that they are unique works of art: no two AARON works are exactly the same, and they may thus be regarded as originals.
By giving anyone with a PC the ability to make originals, Cohen claims he has turned Walter Benjamin’s idea of the mass-produced image on its head. Instead of copying the final image, with its attendant notion of destroying the “aura” of the unique object, he has instead cloned the artist, or at least the process by which unique images are generated. Jean-Pierre Hébert has reservations about this last point: he sees AARON not so much as an artistic clone as a “mummy”, in the sense of an undead creature that has the appearance of life but no motivating spark. It is certainly true that AARON operates as a bounded system, with only as much knowledge as Cohen is able to encode. Yet, as McCorduck says:
[…] The program AARON, [Cohen] believes, stands in relation to its individual drawings the way a platonic ideal stands in relation to its earthly instantiations. […] Cohen has found a way to work his will upon and through the paradigm rather than upon a single instantiation simply means that his level of involvement is much higher, conceptually speaking, than has ever before been possible for the visual artist. [Since the program is responsible for the performances] it is as if a score could play itself [McCorduck 1989:22]
If one considers Robin Baker’s criteria for a program to be recognised as creative, it seems that AARON satisfies the first point:
1) The conceptual space of the programmer is extended or broken by a creative program. [In other words, it creates something beyond the boundaries of what was originally programmed into it.]
2) It should have judgment and be able to recognise its own work.
Undoubtedly, AARON has increased Cohen’s frame of reference, the extent of his art and his experience of image-making. However, his insistence on the collaborative aspects suggests that AARON will not fulfil the second point: even in all its multiple instances, the program will not become conscious of its artistic role, nor will it distinguish the relative merits of a “good” picture (one that its owner decides to keep) from a “bad” picture, which is deleted. Nor could it ever extrapolate its own aesthetic rules.
[Plate XXVIII: AARON, Man]
Judged as a work of art (especially as a work of computer art, though Cohen would dispute that term), AARON presents the interesting spectacle of not merely rerunning the image set out in a program, but improvising upon a set of rules in different ways every time.
A further consideration with AARON is the legal status of the indirect work of art. The current Copyright, Designs and Patents Act 1988 contains relevant provisions. As to the production of original pieces of art using computers, certain details in the 1988 Act relating to the artistic category of “photographs” may provide some clues towards a satisfactory legal definition, with mechanical considerations added to the aesthetic ones:
Authors’ rights systems tend to give copyright only to “photographic works”, that is the results of careful and distinctive arrangement (scene-setting, lighting, angle, etc.), involving an element of aesthetic judgment which is personal to the photographer (and/or some “director”, rather than the mere cameraman). […] this will exclude not only casual snapshots […] but also press photography. [Cornish 1999:390]
The implication is that aesthetic consideration of artistic photography arises from the conjunction of several factors and is not always simply the result of the person holding the camera. This may carry over into computer “art”, where the computer’s construction of the image according to the artist’s direction has raised questions about the artist’s level of involvement. The 1988 Act considers the general category of “Computer-generated works”, as discussed by Cornish:
Computer-generated works

[…] the author of a literary, dramatic, musical or artistic work is its creator in a real sense. He or she (but not it) is the person who, by exercising labour, skill and judgment, gives expression to ideas of the appropriate kind. [Yet] the 1988 Act acknowledges that works of all these types may be computer-generated; and it provides that, where the circumstances are such that there is no human author of such a work, the author shall be taken to be the person by whom the arrangements necessary for creation of the work are undertaken. [Cornish 1999:390] [Emphasis mine]
This is deliberately vague, even more so than the preceding sections on artistic works. Does it recognise the artist per se, or the programmer, or even some patron who ordered the artwork? Subjectively, it seems sensible to recognise that the computer has no independent creative power and is only undertaking to execute an idea presented to it, or worked on using it, by the artist. However, in systems like those created by Harold Cohen, William Latham and others, where the computer executes images according to complex rules that make it seem to be acting independently, determining the “author” is almost impossible by traditional standards of the term. Harold Cohen’s AARON system gains its apparent artistic independence through Cohen’s skill at describing processes and habits of thought as rules that direct the system. He considers how this makes it different from straightforward computer graphics software:
AARON is an autonomous intelligent entity: not very autonomous, or very intelligent, or very knowledgeable, but very different, fundamentally different, from programs designed to be “just” tools. Electronic paint boxes, for example. And its use is equally different from the way computer artists use electronic paint boxes: I don't work with the program, that is to say, I work on the program. The goal is the program's autonomy, not the making of a better - orthodox - tool. [Cohen 1986]
Cohen, who once insisted on being the artist in all instances of AARON’s work, is now happy to share credit with his system. Thus the legal ramifications of setting up an artistic system might be compared with the concept of a “work” in the musical sense:
The legally recognized work of art must have substance and must exist in the real world. But what type of physical and temporal existence does it have to have? What does the law do when confronted, for example, with music, which cannot be said to exist in the notes alone, nor in the performance, nor in the perception of the performance? [Karlen 1981:51]
It is at this point that one must consider the disjuncture between the digital information that underpins the computer image and the visual qualities of the image itself. Remembering Binkley’s thoughts, one could say that because the computer imposes no visual form in and of itself, it is not so much a medium as an instrument, albeit one that can “play” of its own accord. So the artistry lies in guiding the instrument’s movement. In this sense, the computer artist can achieve a similar position to a composer, who relies on others to perform his work. The legal arguments surrounding the computer artist should recognise this fact:
There must be overt behaviour manifesting substantial skill and/or labour which results in some form of detectable notation […] For example, in the case of dance the copyright law will recognize choreographic notation; in the case of architecture, architectural plans. Even for the visual arts such as painting, there is no reason to deny legal protection […] for the plans alone. [Karlen 1981:51]
In this way, computer art could be seen in a similar light to other artforms that cross the boundary between physical and symbolic, like dance and music. The instructions making up the computer image can only be executed by the computer, unless realised in material form in which case they have departed the computer’s realm and now exist independently of it, in the physical world. These considerations will be examined later, when the existence of computer artforms is discussed.
----
From an interview with Cohen: