Data-Mysticism, Algorithmic Ecologies & The Human-Executable – Interview with Mitchell Whitelaw for Neural Magazine #40
Tuesday, 26 March 2013
Limits To Growth – Mitchell Whitelaw
Well known within the digital media and generative arts community for his research and writing as well as his own artistic practice, Mitchell Whitelaw has recently updated his online folio of essays, artworks and data visualisation projects. ‘Ten Questions Concerning Generative Computer Art’ [PDF], recently linked from his site and authored by a group of artists and academics including Mitchell, will be of particular interest to practitioners of generative art engaged in some of the key questions and theoretical discussions attached to the movement. The paper is not afraid to ask some philosophically weighted and ontologically biased questions such as: What is it like to be a computer that makes art?
‘In this paper we pose ten questions we consider the most important for understanding generative computer art. For each question, we briefly discuss the implications and suggest how it might form the basis for further discussion.’
In October 2011 I interviewed Mitchell for the 40th edition of Neural Magazine – ‘The Generative Unexpected’ – making it a port of entry into Mitchell’s thoughts on generative art as well as musings on his personal artworks.
PP: It might be argued that some of the main themes infused in generative art are those to do with a kind of techno-utopianism and futurism. Have you come across any generative artworks that deal with dystopian themes or have a sense of anachronism about them? More importantly are the technologies and software used in creating these artworks inherently defining their aesthetics?
It’s true that there’s a flavour of the techno-utopian to a lot of digital generative art, especially in the online digital scene. The founding principle of generative art is, inescapably, the generative capacity of its own system, so perhaps it is optimistic by definition? Online culture – or the real-time social media flow of projects, memes and links that we tend to bathe in – is also techno-utopian at its core, still strongly influenced by the West-Coast startup culture of the companies involved. But with a bit of digging some more diversity emerges; the work of my friend Jon McCormack, for example, is highly reflective about the nature/technology relationship – though it sometimes conceals its ambivalence under a very beautiful surface. Another Australian artist – Murray McKeich – makes work that is both anachronistic and dystopian, like his pZombies, gruesome avatars for generative agency composited from scanned rubbish.
Fugu – Jon McCormack, Ben Porter, James Wetter
On the other hand the flipside of techno-utopia is real richness and generative excess – the ability of formal systems to reveal terrains of sublime complexity. At best this “maximalist” strand of generative practice can induce a state of wonder, little chinks of access to the unthinkable complexity of the real material world.
Do the technologies define aesthetics? They certainly shape the aesthetics powerfully – but at least now the field of technology is more open and malleable for artists than ever before. It might be that the most important new works in this field are coding platforms or communities, rather than art or design projects. Processing won a Golden Nica, after all. But in this field monolithic “technologies” are increasingly breaking down – Processing for example is very influential, and there is certainly a Processing “look”, but with a new framework or library appearing every other week, we can’t blame technology for limited diversity in the field.
PP: Much generative art is concerned with certain kinds of abstraction and systematised multiplicity of form without a framework of proposition, resolution and conclusion. Do you think there is any room for a sense of narrative in generative art? Could you give me examples of generative artworks that deal with narrative successfully?
Achilles – Brandon Morse
I would argue that every generative artwork involves a framework of proposition, resolution and conclusion. It is the formal and procedural structure of the generative system that creates the work: a set of entities, attributes, relationships, processes, rules, constraints, and visualisations. The problem, for the way generative art is both made and received, is that that system is often hard to get at – it’s an abstract thing, which the artist may or may not describe or publish. A lot of work in the digital generative scene operates in an image culture where “look” is valued over process or concept. So although it’s sometimes hard to access, I would argue that there is often a narrative inside even the most “retinal” generative art – it’s the narrative of the system. Sometimes it’s fairly clear – for example Brandon Morse’s wonderful procedural animations of collapsing structures (also another dystopian work!). For me Morse’s work is wonderfully poignant because it works by resemblance – it reminds us of real things collapsing – but it also works by metonymy, referring to the idealised world of computer graphics and simulation; so it seems like the simulation itself is collapsing (image: Achilles (2009) – photo by Paul Prudence).
PP: Each year we see different algorithms come into fashion as tools for the generative artist. Perlin Noise, Circle Packing, Voronoi, Reaction-Diffusion and Sub-division algorithms are good examples. How important is it for an artwork to hide traces of the software and algorithms that were used to generate it? Can you predict what the next big algorithm might be? Or do you see any new potential in an old or overlooked algorithm?
If you need to hide the traces of your algorithm, change your algorithm. I too am fascinated by the algo-memetic fashion parade that moves through digital design and generative art. This relates to the question of look vs system; these systems seem to reproduce using their appearance as a sort of lure – it’s a bit like sexual selection in a memetic ecology, survival of the prettiest. As a result people seem to apply them without any understanding of, or interest in, the system or process. I wrote last year about the Voronoi algorithm along these lines. So algo-fashions will come and go, but for me the most rewarding work is always a result of deep engagement with the generative system – taking a system and hacking it into something else entirely, or deriving new systems. Erwin Driessens and Maria Verstappen for example have a long track record of inventing algorithms that you can’t just grab off the shelf – their Breed and Ima Traveller works are sort of mutant cellular automata – but really they don’t fit any clear template. Nervous System also implement new systems: they go to the scientific literature in biology, or even run their own physical trials, and implement models from scratch. There aren’t many designers currently with the ability to do that. Jonathan McCabe is another good example of this; his multi-scale Turing patterns are a genius hack of a very old algorithm. Jonathan’s Origami Butterfly process is completely new (and equally distinctive).
Breed – Erwin Driessens & Maria Verstappen
So there isn’t a Platonic shelf somewhere stocked with generative algorithms for designers to select from. The space of potential generative systems is unimaginably massive. Make one up, or at least hack an existing one into something else. Even very simple changes to existing systems can be very productive. For years I have been playing with systems based on Murray Eden’s growth model – perhaps the simplest (and first) ever model of biological growth. There’s much more to explore.
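Eden’s growth model, mentioned above, is simple enough to sketch in a few lines. The following is a minimal Python illustration, assuming a square lattice and a single seed site – an illustrative reconstruction of the model, not Whitelaw’s own code:

```python
import random

def neighbours(cell):
    """Four orthogonal neighbours of a lattice site."""
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def eden_growth(steps, seed=(0, 0)):
    """Grow an Eden cluster: repeatedly occupy a random empty
    site adjacent to the existing cluster."""
    cluster = {seed}
    frontier = set(neighbours(seed))  # empty sites bordering the cluster
    for _ in range(steps):
        site = random.choice(sorted(frontier))
        cluster.add(site)
        frontier.discard(site)
        for n in neighbours(site):
            if n not in cluster:
                frontier.add(n)
    return cluster

cluster = eden_growth(200)
```

Even this bare version produces the roughly circular, fuzzy-edged blobs characteristic of Eden clusters; small rule changes (weighting the frontier choice, changing the neighbourhood) turn it into a family of related growth systems.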
PP: What is the role of serendipity and non-determinism in the formulation of a successful generative artwork?
When teaching generative art my colleague Tim Brook initially bans his students from using randomness. I don’t do the same, but I can see the logic of it: randomness adds meaningless variation. Used directly, it’s just that – meaningless variation that can give a false impression of richness. But it can be very handy – for example when exploring the range of outcomes of a complex system, randomising its parameters can throw up useful samples of the generative space of that system. Again it’s about understanding the system. Serendipity is another thing; I think most generative artists work hard to cultivate serendipity, to entice systems into a state where pleasant surprises emerge. Many artists hand-pick “candidates” from large populations of generated works – seeking out those serendipitous moments. Although variation is fundamental to generative work, it’s interesting to observe reactions to Written Images, where each volume is a unique variant of the collected works, with no opportunity for artists to pick favourites. Not having final control over each artefact is still a bit scary (for me at least).
Watching The Sky – Mitchell Whitelaw
PP: In your Watching The Sky piece there is almost a tendency to study the image in a forensic manner, to try and decode the work, and to find environmental patterns in relation to patterns in the work. This method of analysis is in almost direct contrast to the usual manner in which a data visualisation might be constructed, where an artist decides on a specific representational system beforehand to create clarity and make a point. Perhaps you could comment a bit more on how data visualisation might move forward in this respect.
I am drawing on other work here – especially the early work of Lisa Jevbratt, like her classic 1:1. Jevbratt outlines a sort of data-mysticism, a view of data as a reservoir of unknown potential, and shows fine-grained patterns without concern for “readability”. In Watching the Sky (and related work) I just use images as a data source; this is a simple ploy to introduce richness by working with rich, unstructured data – and data with a complex (but legible) relationship to the world. That work has certainly shaped my thinking on visualisation. Maintaining the “unstructured” complexity of the image as a data source – rather than reducing it to statistical features – is a great way to provide contextual cues. The commonsExplorer project I did with Sam Hinton – a visual explorer for Flickr Commons streams – uses tiny cropped “core samples” that offer telltale clues about the source images.
The other idea at work here (and in Jevbratt’s work) is a sense of data as (a) material; as something with texture or grain that can be felt as much as analysed. I have experimented with making these ideas literal in data-form projects like Weather Bracelet and Measuring Cup.
PP: In one of your papers you discuss synaesthesia and cross-modality in contemporary audio visuals. It seems that an important criterion for a successful synaesthetic artwork is a meaningful, metaphorical or conceptual cross-wiring of sound and video – and not just a mechanical translation between the two. What other criteria are important in a successful cross-modal artwork?
Cross-modal or “coupled” audiovisuals exemplify one of the key questions of digital media – we could call it the mapping problem. If the basic materials of the work are digital – that is, abstract patterns that can travel through any number of different substrates – then how do we make them perceivable? Or, how do we choose a mapping, a way of making data available to perception? Manovich calls this the “built-in existential angst” of digital media. So of course there are an infinity of possible ways to connect sound and image – either mapping one into the other, or generating both from some common data source. I actually like mechanical or automatic mappings. Because they are stable and consistent they let us soak in the relationship, the map itself; and these automatic maps are often quite subtle and fine-grained, compared to more composed or intentional relationships. In Robin Fox’s work for example a simple (polar) oscilloscope display creates images from audio signals – but Fox explores the mapping in depth, working out how to “play” it, reverse-engineering the audio signal to create images and revealing surprising correspondences. Of course automatic mappings can be incredibly boring – how many modified graphic equaliser visualisations do we need to see – but I think this is often because the mapping is filtered through too many abstractions and interventions; it becomes a set of parameters.
Casey Reas – Process Compendium [A]
PP: There has been a huge influence of generative art in recent years on traditional media such as painting and sculpture. In the reverse direction, in what ways, if any, can generative artists learn from the traditional plastic arts?
The link there for me is a sense of “procedurality” or “processuality”. In Casey Reas’ work we can see a strong relationship between computational and non-computational procedures such as those of Sol LeWitt. In teaching programming to designers, I have students write and execute a LeWitt-style procedure with pencil and paper. Digital generative systems are just formal procedures, executed by machines. Treating processes as human-executable helps unpack the black boxes of generative systems mentioned earlier, and hopefully reveals them as contingent and hackable. Otherwise: the joy of materiality. Generative art and design covets the lush tangibility of traditional media; and with the wave of interest in fabrication we are seeing ever more generative work realised in “off-screen” forms. The challenge then, for pasty code-artist types, is to match the craft skills of hands-on makers in realising the work.
PP: What early interests did you have that might have led you to your current path as an artist and academic in this field?
Music – which I don’t do much of any more, but it was a big part of my world for a long time. Music (or Western music anyway) is systematised and symbolic, but also immediate and affective. That combination has always interested me. Reading Gödel, Escher, Bach – as well as lots of popular science stuff on complex systems – was influential. I was playing around with computers from around the time of the Apple II; later I convinced my father to buy an Amiga 1000, ostensibly to be used in his architecture business. It didn’t ever do much architecture but I used it to make lots of bad graphics and music. Also I grew up in an outer suburb, surrounded by wild bushland; I’m a romantic nature boy at heart.
PP: Can you tell me a bit about how the dual role of essayist/writer and artist works in your situation? The dialectical relationship must create a certain amount of self-reflexivity on both sides.
Writing is fundamentally another kind of making – when it works, text and ideas are a pretty heady medium. So to some extent it’s all practice, or at least speculation, experimentation, thinking of various sorts. When it works best, the practical work can trial or extend the writing, and the writing can contextualise, interpret and unpack the art work. “Practice led research” works for me as an approach – especially if you don’t split art-making and writing along neat practice / theory lines.
PP: Can you tell me about any projects you have planned for the future, any new books in the pipeline or art projects in progress?
Since 2008 I’ve been researching and developing interactive visualisations of cultural collections datasets, working with partners including the National Archives of Australia and most recently the National Gallery of Australia. The work is challenging and rewarding; I enjoy the way data vis can span the poetic and the prosaic, and the immersive richness of large data sets. That line of work has been pulling me away from “art”, which is fine with me – I generally find the edges and interfaces around creative digital culture and practice more interesting than the portion of it inside gallery walls. But the writing is also ticking over, mostly on digital materiality and the aesthetics of computational art and design. There’s a new book in there somewhere, I hope.