Sonic Acts 2017: The Noise of Becoming: On Monsters, Men, and Every Thing in Between
https://machinemachine.net/portfolio/sonic-acts-2017-the-noise-of-becoming-on-monsters-men-and-every-thing-in-between/

UPDATE: My talk is now also available in The Noise of Being publication, published by Sonic Acts in September 2017.

A talk I delivered at Sonic Acts Festival 2017: The Noise of Being, in which I refigure the sci-fi horror monster The Thing from John Carpenter’s 1982 film of the same name:

The Thing is a creature of endless mimetic transformations, capable of becoming the grizzled men who fail to defeat it. The most enduring quality of The Thing is its ability to perform self-effacement and subsequent renewal at every moment, a quality we must embrace and mimic ourselves if we are to outmanoeuvre the monsters that harangue us.

This talk was part of a panel featuring Laurie Penny and Ytasha Womack, entitled Speculative Fiction: Radical Figuration For Social Change. You can see their wonderful talks here:

Laurie Penny: Feminism Against Fascism
Ytasha Womack: Afrofuturism: Imagination and Humanity

Full text follows (+ references & slides).

An Ontology of Every Thing on the Face of the Earth

John Carpenter’s 1982 film, The Thing, is a claustrophobic science fiction thriller exhibiting many hallmarks of the horror genre. The film depicts a sinister turn for matter, where the chaos of the replicating, cancerous cell is expanded to the human scale and beyond. We watch as an alien force terrorises an isolated Antarctic outpost. The creature exhibits an awesome ability to imitate, devouring any form of life it comes across whilst simultaneously giving birth to an exact copy in a burst of bile and protoplasm. The Thing copies cell by cell, in a process so perfect that the resultant simulacrum speaks, acts, and even thinks like the original. The Thing is so relentless, and its copies so perfect, that the outpost’s doctor, Blair, is driven mad at the implications:

If a cell gets out it could imitate everything on the face of the Earth… and it’s not gonna stop! [1]

Based on John W. Campbell’s 1938 novella, Who Goes There?, Carpenter’s film revisits a gothic trope numerous in its incarnations. In Campbell’s novella, The Thing is condensed as much from the minds of the men as from its own horrific, defrosting bulk. A slowly surfacing nightmare that transforms alien matter into earthly biology also has the effect of transferring the inner, mental lives of the men into the resultant condensation. John W. Campbell knew that The Thing could become viscous human flesh, but that in order to truly imitate its prey the creature must infect inner life separately, pulling kicking and screaming ghosts out of their biological – Cartesian – machines. As a gothic figure, Campbell’s Thing disrupts the stable and integral vision of human being: self-same bodies housing ‘unitary and securely bounded’ [2] subjectivities, identical and extensive through time.
His characters confront their anguish at being embodied: their nightmares are literally made flesh. To emphasise the otherness of each human’s flesh, Campbell’s story is inhabited exclusively by male characters. The absence of women makes the conflict between the men feel more rudimentary, but it also centres the novella’s horror on the growing realisation that to be human is also to be alien to oneself. Differences between sexes within the single species Homo sapiens are bypassed, allowing the alien entity to exhibit the features of human female ‘otherness’ alongside a gamut of horrific bodily permutations. Perhaps, as Barbara Creed, [3] Rosi Braidotti, [4] and others [5] have argued, The Thing signifies the intrinsic absence of the mother figure: the female body’s capacity to be differentiated from itself in the form of pregnancy; to open up and usher forth into the world a creature other to itself. This Thingly quality is given credence by Julia Kristeva, in a passage that could refer equally to The Thing as to the development of a fetus during pregnancy:

Cells fuse, split, and proliferate; volumes grow, tissues stretch, and the body fluids change rhythm, speeding up or slowing down. With the body, growing as a graft, indomitable, there is another. And no one is present, within that simultaneously dual and alien space, to signify what is going on. [6]

The Thing does exhibit demeanours of copulation and fertility, but also of disease, fragmentation, dismemberment, and asexual fission. In the novella, during a drug-induced nightmare Dr. Copper sits bolt upright and blurts out ‘Garry – listen. Selfish – from hell they came, and hellish shellfish – I mean self – Do I? What do I mean?’ McReady [7] turns to the other men in the cabin: ‘Selfish, and as Dr. Copper said – every part is a whole. Every piece is self-sufficient, and animal in itself.’ [8] The Thing is aberrant at a level more fundamental than allusions to pregnancy can convey. Dr.
Copper’s inability to articulate what The Thing is indicates a categorical nightmare he and the men are suffering. As in the work of Mary Douglas, [9] The Thing’s nightmarish transformation denies the very concept of physical and categorical purity. The Thing’s distributed biology calls to mind Hardt and Negri’s vision of the early Internet (ARPANET), designed, according to them:

…to withstand military attack. Since it has no center and almost any portion can operate as an autonomous whole, the network can continue to function even when part of it has been destroyed. The same design element that ensures survival, the decentralisation, is also what makes control of the network so difficult. [10]

The image of mankind’s outright destruction via totalising narratives such as nuclear war, viral pandemic, or meteor strike is undermined by the paradigm of a Thingly technological infrastructure designed to avoid ‘absolute’ assault. Decentralisation is a categorical horror in its capacity to highlight our self-same, constantly threatened and weak, embodied selves. But shift the lens away from the self-same human subject, and the image of a distributed, amorphous network of autonomous cells immediately becomes a very good description of how biological life has always been constituted. The metaphysical dualism of the sexes, as Kelly Hurley concludes, is an inadequate paradigm of such horrific embodiment; rather, any and all ‘ontological security’ [11] is challenged through a ‘collapsing of multiple and incompatible morphic possibilities into one amorphous embodiment.’ [12] The Thing is neither male nor female, two nor one, inside nor outside, living nor dead. If it does settle into a form that can be exclaimed, screamed, or defined in mutually incompatible words, it does so only for a moment, and only in the mind of its onlooker as they scrabble to deduce its next amorphous conflation.
The Thing is a figure performing ontogenesis (something coming to be) rather than ontology (something that already is). [13] ‘The very definition of the real,’ as Jean Baudrillard affirmed, has become ‘that of which it is possible to give an equivalent reproduction.’ [14] Does The Thing ‘produce’ something other than human life, or ‘reproduce’ human life in its entirety, and what, if anything, would be the difference? In a text on bio- and necropolitics, Eugene Thacker undertakes an examination of the ‘difference between “Life” as an ontological foundation, and “the living,” or the various specific instantiations of Life.’ [15] Thacker highlights a passage in the Poetics where Aristotle speaks of mimesis giving rise to the art of poetry in human beings:

We take delight in viewing the most accurate possible images of objects which in themselves cause distress when we see them (e.g. the shapes of the lowest species of animal, and corpses).

Recognition of mimetic forms can instill a certain degree of displeasure if that form depicts a carcass or something considered equally abhorrent. But this is often tinged with what Aristotle calls the ‘extremely pleasurable’ dual capacities of recognising an imitation as such, whilst at the same time recognising what it is the form is imitative of. The horror of The Thing is bound to this endless ontogenetic re-forming: its limitless capacity to imitate and become without necessarily settling into a final, stable, and agreeable categorical – that is, ontological – form. The men of the Antarctic encampment grasp in their minds at the forms ushering from The Thing, but can never keep up with its propensity toward the next shapeless-shape, bodiless-limb, or ontogenetic-extrudence.
The Thing is a phenomenon, to use Eugene Thacker’s words once more, that is ‘at once “above” and “below” the scale of the human being,’ [16] throwing, as Rosi Braidotti puts it, ‘a terminal challenge towards a human identity that is commonly predicated on the One.’ [17] The ‘other’ of The Thing never settles down, always falling outside the dialectical circle. As Hélène Cixous remarks in The Newly Born Woman, with the ‘truly “other” there is nothing to say; it cannot be theorized. The “other” escapes me.’ [18] The figure of The Thing bursts into popular culture at the meeting point between dream and flesh, and has been pursued ever since by men whose individuality is considered inseparable from their self-same embodiment. By modifying the rules through which dominant norms such as gender binaries operate, The Thing can be conceived as an incarnation of détournement: an intervention that hijacks and continually modifies the rules of engagement. ‘The radical implication [being] that [all] meaning is connected to a relationship with power.’ [19] Considered through Michel Foucault’s definition of bio-power, or the bio-political, The Thing is the process of sex and sexuality severed from the humans who are forced to proliferate ‘through’ it. Above all, the men set against this propagation – this mobilisation of images of ‘other’ – scramble to protect the normative image of the human they hold most dear: the mirage of ‘man’.

Becoming World

The filmic Thing is a fictional device enabled by animatronic augmentations coated with fleshy stand-ins, KY Jelly, and occasionally, real animal offal.
As John Carpenter described his rendition of the creature in a 2014 interview, ‘It’s just a bunch of rubber on the floor.’ [20] Bringing The Thing ‘to life’ is an activity that performs the collapse ‘between “Life” as an ontological foundation, and “the living,” or the various specific instantiations of Life.’ [21] The animatronic Thing exists in the space between stable forms; it is vibrant, expressive technology realised by dead matter, and human ingenuity made discernible by uncanny machinic novelty. Ontological uncertainty finds fluidity in language on a page, in the ability to poetically gesture towards interstitiality. But on-screen animatronics, rubber, and KY Jelly are less fluid, more mimetically rooted by the expectations of the audience revelling in, and reviled by, their recognition of The Thing’s many forms. Upon its release, critical reactions to John Carpenter’s The Thing were at best muted and at worst downright vitriolic. The special effects used to depict the creature were the focus of an attack by Steve Jenkins:

Jenkins attacks the film essentially for its surrealist nature… he writes that ‘with regard to the effects, they completely fail to “clarify the weirdness” of the Thing’, and that ‘because one is never sure exactly how it [the alien] functions, its eruptions from the shells of its victims seem as arbitrary as they are spectacular’. [22]

In short, the reviews lingered on two opposing readings of The Thing’s shock/gore evocations: that they go too far, and thus tend towards sensational fetishism; or that they can’t go far enough, depicting kitsch sensibilities rather than alien otherness. Jenkins’ concern that the special effects do not ‘clarify’ The Thing’s ‘weirdness’ is contradictory, if not oxymoronic.
The implication is that Things could never be so weird as to defy logical function, and that all expressions should, and eventually do, lend themselves to being read through some parochial mechanism or other, however surreal they may at first seem. That The Thing’s nature could actually defy comprehensibility is not considered, nor how impossible the cinematic depiction of that defiance might be. Rather, the critical view seems to be that every grisly eruption, bifurcation, and horrific permutation on screen must necessarily express an inner order temporarily hidden from, but not inaccessible to, its human onlookers. This critical desire for a ‘norm’ defies the same critical desire for ‘true’ horror. Our will to master matter and technology through imitative forms is the same will that balks at the idea that imitative forms could have ontologies incommensurable with our own. The Thing is ‘weird’: a term increasingly applied to those things defying categorisation. A conviction, so wrote the late Mark Fisher, ‘that this does not belong, is often a sign that we are in the presence of the new… that the concepts and frameworks which we have previously employed are now obsolete.’ [23] In reflecting on the origins of this slippery anti-category, Eugene Thacker reminds us that within horror, ‘The threat is not the monster, or that which threatens existing categories of knowledge. Rather, it is the “nameless thing,” or that which presents itself as a horizon for thought… the weird is the discovery of an unhuman limit to thought, that is nevertheless foundational for thought.’ [24] In The Thing the world rises up to meet its male inhabitants in a weird form and, by becoming them, throws into question the categorical foundations of the born and the made, of subject and object, natural and synthetic, whole and part, human and world, original and imitation. 
What remains is an ongoing process of animation rendered horrific by a bifurcation of ontologies: on one side, the supposed human foundation of distinction, uniqueness, and autonomy; on the other, a Thingly (alien and weird) propensity that dissolves differentiation, that coalesces and revels in an endless process of becoming. As in Mikhail Bakhtin’s study of the grotesque, the ‘human horizon’ in question is that of the ‘canon,’ [25] a norm to which all aberrations are to be compared:

The grotesque body… is a body in the act of becoming. It is never finished, never completed; it is continually built, created, and builds and creates another body. Moreover, the body swallows the world and is itself swallowed by the world. [26]

The Thingly is neither self-same nor enclosed unto itself. It is a plethora of openings, conjoinings, and eruptions that declare ‘the world as eternally unfinished: a world dying and being born at the same time.’ [27] The bodily horror performed by The Thing is an allegory of this greater interstitial violation: the conceptual boundary between the world-for-us and the world-without-us is breached not as destruction, or even invasion, but ultimately through our inability to separate ourselves from a world that is already inherently alien and weird. [28] ‘A monstrosity’, to hijack the words of Claire Colebrook, ‘that we do not feel, live, or determine, but rather witness partially and ex post facto.’ [29] How these processes are comprehended, or more precisely, how the perception of these processes is interpreted, is more important than the so-called ‘difference’ between the world which existed before and the world which remains after. Eugene Thacker clarifies this point in his analysis of the etymology of the word ‘monster’:

A monster is never just a monster, never just a physical or biological anomaly.
It is always accompanied by an interpretive framework within which the monster is able to be monstrum, literally “to show” or “to warn.” Monsters are always a matter of interpretation. [30]

Becoming Weird

In a 1982 New York Times movie section, critic Vincent Canby poured yet more scorn on John Carpenter’s ‘Thing’ remake:

The Thing is a foolish, depressing, overproduced movie that mixes horror with science fiction to make something that is fun as neither one thing or the other… There may be a metaphor in all this, but I doubt it… The Thing… is too phony looking to be disgusting. It qualifies only as instant junk. [31]

Chiming with his critic peers, Canby expresses his desire that the monster show its nature – be monstrum – only in respect of some ‘norm’; [32] some ‘interpretive framework’ [33] that the narrative will eventually uncover. By setting up ‘junk’ as a kitschy opposite to this supposedly palatable logic, Canby unwittingly generates a point from which to disrupt the very notion of the interpretive framework itself. The Thing is more than a metaphor. Canby’s appeal to ‘instant junk’ can be read as the monstrum: the revealing of that which constitutes the norm. The monster stands in for difference, for other, and in so doing normalises the subject position from which the difference is opposed: the canon. In the case of The Thing, that canon is first and foremost the human male, standing astride the idea of a world-for-us. The ‘us’ is itself monopolised, as if all non-male ontogenetic permutations were cast out into the abject abyss of alien weirdness. In reclaiming ‘junk’ as a ‘register of the unrepresentable’, [34] a Thingly discourse may share many of the tenets of queer theory. As Rosi Braidotti makes clear, referring to the work of Camilla Griggers:

‘Queer’ is no longer the noun that marks an identity they taught us to despise, but it has become a verb that destabilizes any claim to identity, even and especially to a sex-specific identity.
[35]

The queer, the weird, the kitsch are among the most powerful of orders because they are inherently un-representable and in flux. The rigid delineations of language and cultural heteronormativity are further joined in the figure of The Thing by a non-anthropic imaginary that exposes a whole range of human norms and sets into play a seemingly infinite variety of non-human modes of being and embodiment. Rosi Braidotti refers to the work of Georges Canguilhem in her further turn outwards towards the weird – ‘normality is, after all, the zero-degree of monstrosity’ [36] – signalling a post-human discourse as one which, by definition, must continually question, perhaps even threaten, the male, self-same, canonised subject position:

We need to learn to think of the anomalous, the monstrously different not as a sign of pejoration but as the unfolding of virtual possibilities that point to positive alternatives for us all… the human is now displaced in the direction of a glittering range of post-human variables. [37]

In her book Death of the PostHuman (2014), Claire Colebrook looks to the otherwise, the un-representable, to destabilise the proposition of a world being for anyone. She begins by considering the proposed naming of the current geological era ‘the Anthropocene’: [38] a term that designates a theoretical as well as scientific impasse for human beings and civilisation, in which human activity and technological development have begun to become indistinguishable from, and/or to exceed, processes implicit within what is considered to be the ‘natural’ world. As if registering the inevitable extinction of humans weren’t enough, the Anthropocene, by being named in honour of humans, makes monsters of those times – past and present – which do not contain humans.
Its naming therefore becomes a mechanism allowing the imagination of ‘a viewing or reading in the absence of viewers or readers, and we do this through images in the present that extinguish the dominance of the present.’ [39] The world ‘without bodies’ that is imaged in this move, Colebrook argues, is written upon by the current state of impending extinction. Humans are then able to look upon the future world-without-us in a state of nostalgia coloured by their inevitable absence. Here the tenets of the horror genre indicated by Eugene Thacker are realised as a feature of a present condition. The world-in-itself has already been subsumed by the Thingly horror that is the human species. For even the coming world-without-us, a planet made barren and utterly replaced by the Thingly junk of human civilisation, will have written within its geological record a mark of human activity that goes back well before the human species had considered itself as a Thing ‘in’ any world at all. In an analysis of the etymology of the Anthropocene, McKenzie Wark also turns to theory as a necessary condition of the age of extinction:

All of the interesting and useful movements in the humanities since the late twentieth century have critiqued and dissented from the theologies of the human. The Anthropocene, by contrast, calls for thinking something that is not even defeat. [40]

The Anthropocene, like ‘queer’ or ‘weird’, should be made into a verb, and relinquished as a noun. Once weirded in this way it becomes a productive proposition. Wark goes on, quoting Donna Haraway: ‘another figure, a thousand names of something else.’ [41] In the 2014 lecture quoted by Wark, Haraway called for other such worldings: through the horrific figure of capitalism, through arachnids spinning their silk from the waste matter of the underworld, or from the terrible nightmares evoked in the fiction of the misogynist, racist, mid-20th-century author H.P. Lovecraft:

The activation of the chthonic powers that is within our grasp to collect up the trash of the anthropocene, and the exterminism of the capitalocene, to something that might possibly have a chance of ongoing. [42]

That weird, ongoing epoch is the Chthulucene, a monstrum ‘defined by the frightening weirdness of being impossibly bound up with other organisms,’ [43] of what Haraway calls ‘multi-species muddles.’ [44] The horror of ‘the nameless thing’ is here finally brought to bear in Haraway’s Capitalocene and Chthulucene epochs. Haraway’s call for ‘a thousand names of something else’ is Thingly in its push towards endlessly bifurcated naming and theoretical subsuming; anthro-normalisation casts out infinitely more possibilities than it brings into play. Although Donna Haraway makes it clear that her Chthulucene is not directly derivative of H.P. Lovecraft’s Cthulhu mythos, her intentional mis-naming and slippery non-identification exemplify the kind of amorphous thinking and practice she is arguing for. Haraway’s Chthulucene counters Lovecraft’s Cthulhu with an array of chthonic, non-male, tentacular, rhizomatic, and web-spinning figures that attest to the monstrum still exposed by Lovecraft’s work, now three quarters of a century old. The continued – renewed – fascination with Lovecraft’s weird ‘others’ thus has the capacity to expose a dread of these times. As the writer Alan Moore has attested:

[I]t is possible to perceive Howard Lovecraft as an almost unbearably sensitive barometer of American dread.
Far from outlandish eccentricities, the fears that generated Lovecraft’s stories and opinions were precisely those of the white, middle-class, heterosexual, Protestant-descended males who were most threatened by the shifting power relationships and values of the modern world… Coded in an alphabet of monsters, Lovecraft’s writings offer a potential key to understanding our current dilemma, although crucial to this is that they are understood in the full context of the place and times from which they blossomed. [45]

The dominant humanistic imagination may no longer posit white cis-males as the figure that ‘must’ endure, but other uncontested figures remain in the space apparently excavated of Lovecraft’s affinities. To abandon what Claire Colebrook calls ‘the fantasy of one’s endurance’ may be to concede that the post-human is founded on ‘the contingent, fragile, insecure, and ephemeral.’ [46] But, as Drucilla Cornell and Stephen D. Seely suggest, it is dangerous to consider this a ‘new’ refined status for the beings that remain, since ‘this sounds not like the imagination of living beyond Man, but rather like a meticulous description of the lives of the majority of the world under the condition of advanced capitalism right now.’ [47] As Claire Colebrook warns, post-humanism often relinquishes its excluded others – women, the colonised, nonhuman animals, or ‘life itself’ [48] – by merely subtracting the previously dominant paradigm of white heteropatriarchy, whilst failing to confront the monster that that particular figure was indicative of:

Humanism posits an elevated or exceptional ‘man’ to grant sense to existence, then when ‘man’ is negated or removed what is left is the human all too human tendency to see the world as one giant anthropomorphic self-organizing living body… When man is destroyed to yield a posthuman world it is the same world minus humans, a world of meaning, sociality and readability yet without any sense of the disjunction, gap or limits of the human. [49]

As in Haraway and Wark’s call for not just ‘naming, but of doing, of making new kinds of labor for a new kind of nature,’ [50] contemporary criticism and theory must be allowed to take on the form of the monsters they pursue, moulding and transforming critical inquiries into composite, hybrid figures that never settle in one form, lest they become stable, rigid, and normalised. In fact, this metaphor itself is conditioned too readily by the notion of a mastery ‘Man’ can wield. Rather, our inquiries must be encouraged ‘to monster’ separately, to blur and mutate beyond the human capacity to comprehend them, like the infinite variety of organisms Haraway insists the future opens into. The very image of a post-humanism must avoid normalising the monster, rendering it through analysis an expression of the world-for-us. For Eugene Thacker this is the power of the sci-fi-horror genre: to take ‘aim at the presuppositions of philosophical inquiry – that the world is always the world-for-us – and [make] of those blind spots its central concern, expressing them not in abstract concepts but in a whole bestiary of impossible life forms – mists, ooze, blobs, slime, clouds, and muck.’ [51] Reflecting on the work of Noël Carroll, [52] Rosi Braidotti argues that if science fiction horror ‘is based on the disturbance of cultural norms, it is then ideally placed to represent states of crisis and change and to express the widespread anxiety of our times. As such this genre is as unstoppable as the transformations it mirrors.’ [53]

References

[1] John Carpenter, The Thing, film, sci-fi horror (Universal Pictures, 1982).
[2] Kelly Hurley, The Gothic Body: Sexuality, Materialism, and Degeneration at the Fin de Siècle (Cambridge University Press, 2004), 3.
[3] B. Creed, ‘Horror and the Monstrous-Feminine: An Imaginary Abjection,’ Screen 27, no. 1 (1 January 1986): 44–71.
[4] Rosi Braidotti, Metamorphoses: Towards a Materialist Theory of Becoming (Wiley, 2002), 192–94.
[5] Ian Conrich and David Woods, eds., The Cinema of John Carpenter: The Technique of Terror (Wallflower Press, 2004), 81.
[6] Julia Kristeva, quoted in Jackie Stacey, Teratologies: A Cultural Study of Cancer (Routledge, 2013), 89.
[7] The character McReady becomes MacReady in Carpenter’s 1982 retelling of the story.
[8] Campbell, Who Goes There?, 107.
[9] Noël Carroll, The Philosophy of Horror, Or, Paradoxes of the Heart (New York: Routledge, 1990).
[10] Michael Hardt and Antonio Negri, Empire, new ed. (Harvard University Press, 2001), 299.
[11] Braidotti, Metamorphoses, 195.
[12] Kelly Hurley, ‘Reading like an Alien: Posthuman Identity in Ridley Scott’s Alien and David Cronenberg’s Rabid,’ in Posthuman Bodies, ed. Judith M. Halberstam and Ira Livingston (Bloomington: John Wiley & Sons, 1996), 219.
[13] This distinction was plucked, out of context, from Adrian MacKenzie, Transductions: Bodies and Machines at Speed (A&C Black, 2006), 17. MacKenzie is not talking about The Thing, but the distinction is nonetheless very useful in bridging the divide between stable being and endless becoming.
[14] Jean Baudrillard, Simulations, trans. Paul Foss, Paul Patton, and Philip Beitchman (New York: Semiotext(e), 1983), 146.
[15] Eugene Thacker, ‘Nekros; Or, The Poetics of Biopolitics,’ Incognitum Hactenus 3, Living On: Zombies (2012): 35.
[16] Ibid., 29.
[17] Braidotti, Metamorphoses, 195.
[18] Hélène Cixous, The Newly Born Woman (University of Minnesota Press, 1986), 71.
[19] Nato Thompson et al., eds., The Interventionists: Users’ Manual for the Creative Disruption of Everyday Life (North Adams, Mass.: MASS MoCA; distributed by the MIT Press, 2004), 151.
[20] John Carpenter, ‘Bringing The Thing to Life,’ BBC web exclusive, Tomorrow’s Worlds: The Unearthly History of Science Fiction, 14 November 2014.
[21] Thacker, ‘Nekros; Or, The Poetics of Biopolitics,’ 35.
[22] Conrich and Woods, The Cinema of John Carpenter, 96.
[23] Mark Fisher, The Weird and the Eerie (2016), 13.
[24] Eugene Thacker, After Life (University of Chicago Press, 2010), 23.
[25] Mikhail Bakhtin, Rabelais and His World (Indiana University Press, 1984), 321.
[26] Ibid., 317.
[27] Ibid., 166.
[28] This sentence is a paraphrased, altered version of a similar line from Eugene Thacker, ‘Nine Disputations on Theology and Horror,’ Collapse: Philosophical Research and Development IV: 38.
[29] Claire Colebrook, Sex After Life: Essays on Extinction, Vol. 2 (Open Humanities Press, 2014), 14.
[30] Eugene Thacker, ‘The Sight of a Mangled Corpse—An Interview with’, Scapegoat, no. 05: Excess (2013): 380.
[31] Vincent Canby, ‘“The Thing” Is Phony and No Fun,’ The New York Times, 25 June 1982, sec. Movies.
[32] Derrida, ‘Passages: From Traumatism to Promise,’ 385–86.
[33] Thacker, ‘The Sight of a Mangled Corpse—An Interview with,’ 380.
[34] Braidotti, Metamorphoses, 180.
[35] Ibid.
[36] Ibid., 174.
[37] Rosi Braidotti, ‘Teratologies,’ in Deleuze and Feminist Theory, ed. Claire Colebrook and Ian Buchanan (Edinburgh: Edinburgh University Press, 2000), 172.
[38] A term coined in the 1980s by ecologist Eugene F. Stoermer and widely popularised in the 2000s by atmospheric chemist Paul Crutzen. The Anthropocene is, according to Jan Zalasiewicz et al., ‘a distinctive phase of Earth’s evolution that satisfies geologist’s criteria for its recognition as a distinctive stratigraphic unit.’ – Jan Zalasiewicz et al., ‘Are We Now Living in the Anthropocene,’ GSA Today 18, no. 2 (2008): 6.
[39] Claire Colebrook, Death of the PostHuman: Essays on Extinction, Vol. 1 (Open Humanities Press, 2014), 28.
[40] McKenzie Wark, ‘Anthropocene Futures,’ Versobooks.com, 23 February 2015.
[41] Ibid.
[42] Donna Haraway, ‘Capitalocene, Chthulucene: Staying with the Trouble’ (lecture, University of California at Santa Cruz, 5 September 2014).
[43] Leif Haven, ‘We’ve All Always Been Lichens: Donna Haraway, the Cthulhucene, and the Capitalocene,’ Entropy, 22 September 2014.
[44] Donna Haraway, ‘SF: Sympoiesis, String Figures, Multispecies Muddles’ (lecture, University of Alberta, Edmonton, Canada, 24 March 2014).
[45] H.P. Lovecraft, The New Annotated H.P. Lovecraft, ed. Leslie S. Klinger (Liveright, 2014), xiii.
[46] Colebrook, Sex After Life, 22.
[47] Drucilla Cornell and Stephen D. Seely, The Spirit of Revolution: Beyond the Dead Ends of Man (Polity Press, 2016), 5.
[48] Ibid., 3–4.
[49] Colebrook, Death of the PostHuman, 163–64.
[50] Wark, ‘Anthropocene Futures.’
[51] Thacker, In the Dust of This Planet, 9.
[52] Carroll, The Philosophy of Horror, Or, Paradoxes of the Heart.
[53] Braidotti, Metamorphoses, 185 (my emphasis).

]]>
Sun, 26 Feb 2017 04:43:01 -0800 https://machinemachine.net/portfolio/sonic-acts-2017-the-noise-of-becoming-on-monsters-men-and-every-thing-in-between/
<![CDATA[Four Notes Towards Post-Digital Propaganda | post-digital-research]]> http://post-digital.projects.cavi.dk/?p=475

“Propaganda is called upon to solve problems created by technology, to play on maladjustments and to integrate the individual into a technological world” (Ellul xvii).

How might future research into digital culture approach a purported “post-digital” age? How might this be understood?

1.

A problem comes from the discourse of ‘the digital’ itself: a moniker which points at once towards arbitrary configurations of base-2 units, impersonal architectures of code, massive extensions of modern communication, and ruptures in post-modern identity. Terms are messy, and it has never been easy to establish a ‘post’ to something while pre-existing definitions of that something continue to hang in the air. As Florian Cramer has articulated so well, ‘post-digital’ is something of a loose, ‘hedge your bets’ term, denoting a general tendency to criticise the digital revolution as a modern innovation (Cramer).

Perhaps it might be aligned with what some have dubbed “solutionism” (Morozov) or “computationalism” (Berry 129; Golumbia 8): the former critiques a Silicon Valley-led ideology oriented towards solving liberalised problems through efficient computerised means; the latter establishes the notion (and the critique thereof) that the mind, and everything associated with it, is inherently computable. In both cases, digital technology is no longer just a business that privatises information, but the business of extending efficient, innovative logic to all corners of society and human knowledge, condemning everything else through a cultural logic of efficiency.

In fact, there is a good reason why ‘digital’ might as well be a synonym for ‘efficiency’. Before any consideration is assigned to digital media objects (i.e. platforms, operating systems, networks), consider the inception of ‘the digital’ as such: that is, information theory. Where information had been a loose, shabby, inefficient sort of vagueness, specific to the various mediums of communication that carried it, Claude Shannon compressed all forms of communication into a universal system with absolute mathematical precision (Shannon). Once information became digital, the conceptual leap of determined symbolic logic was set into motion, and with it the ‘digital’ became synonymous with an ideology of effectivity. No longer would communication be subject to human finitude, or to matters of distance and time, but only to the limits of entropy and the means of automating messages through the support of alternating ‘true’ or ‘false’ relay systems.
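Shannon’s precision is easy to make concrete. His entropy formula gives the hard lower bound on how many binary digits a source of messages needs on average; the following minimal sketch (the symbol probabilities are illustrative) computes it:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: the minimum number of binary digits,
    on average, needed to encode a source with these probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit per toss...
assert entropy([0.5, 0.5]) == 1.0
# ...while a heavily biased source can, in principle, be compressed
# well below one bit per symbol.
assert entropy([0.9, 0.1]) < 0.5
```

A fair coin needs exactly one bit per toss; a biased source can be compressed below that. This is the precise mathematical sense in which the digital is ‘efficient’.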

However, it would be quite difficult to envisage any ‘post-computational’ break from such discourses – and with good reason: Shannon’s breakthrough was only systematically effective through the logic of computation. So the old missed encounter goes: Shannon presupposed Alan Turing’s mathematical idea of computation to transmit digital information, and Turing presupposed Shannon’s information theory to understand what his Universal Turing Machines were actually transmitting. The basic theories of both have not changed, but the materials affording greater processing power, extensive server infrastructure and larger storage space have simply increased the means for these ideas to proliferate, irrespective of what Turing and Shannon actually thought of them (some historians even speculate that Turing may have made the link between information and entropy two years before Bell Labs did) (Good).

Thus a ‘post-digital’ reference point might encompass the historical acknowledgment of Shannon’s digital efficiency and Turing’s logic, but, by the same measure, open up a space for critical reflection on how such efficiencies have transformed not only work, life and culture but also artistic praxis and aesthetics. This is not to say that digital culture is reducibly predicated on efforts made in computer science, but instead to fully acknowledge these structures and account for how ideologies propagate reactionary attitudes and beliefs within them, whilst restricting other alternatives which do not fit their ‘vision’. Hence, the post-digital ‘task’ set for us nowadays might consist in critiquing digital efficiency and how it has come to work against commonality, despite transforming the majority of Western infrastructure in its wake.

The purpose of these notes is to outline how computation has imparted an unwarranted effect of totalised efficiency, and to give this effect the description it deserves: propaganda. The fact that Shannon and Turing had multiple lunches together at Bell Labs in 1943, held conversations and exchanged ideas, but did not share detailed methods of cryptanalysis (Price & Shannon), provides a nice contextual allegory for how digital informatics strategies fail to be transparent.

But in saying this, I do not mean that companies only use digital networks for propagative means (although that happens), but that the very means of computing a real concrete function is constitutively propagative. In this sense, propaganda resembles a post-digital understanding of what it means to be integrated into an ecology of efficiency, and of how technical artefacts are literally enacted as propagative decisions. Digital information often deceives us into accepting its transparency, and into holding it to that account: yet in reality it does the complete opposite, offering no range of judgements by which to tell manipulation from education, or persuasion from smear. It is the procedural act of interacting with someone else’s automated conceptual principles: embedded, pre-determined decisions which not only generate but pre-determine one’s ability to make choices about such decisions, like propaganda.

This might consist in moving from ideological definitions of false consciousness, understood as an epistemological limit to knowing alternatives within thought, to engaging with real programmable systems which embed such limits concretely, withholding the means to transform them. In other words, propaganda incorporates the way ‘decisional structures’ structure other decisions, either conceptually or systematically.

2.

Two years before Shannon’s famous Master’s thesis, Turing published what would become the theoretical basis for computation in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” The focus of the paper was to establish the idea of computation within a formal system of logic, which when automated would solve particular mathematical problems put into function (Turing, An Application). What is not often taken into account is the mathematical context of that idea: the foundations of mathematics were already precarious well before Turing outlined anything in 1936. Contra the efficiency of the digital, this is a precariousness built into computation from its very inception: the precariousness of solving all problems in mathematics.

The key word of that paper, and its key focus, was the Entscheidungsproblem, or decision problem. Originating from David Hilbert’s mathematical school of formalism, ‘decision’ means something more rigorous than the sorts of decisions in daily life. It really means a ‘proof theory’, or how analytic problems in number theory and geometry could be formalised, and thus efficiently solved (Hilbert 3). Solving a theorem is simply finding a provable ‘winning position’ in a game. Similar to Shannon, ‘decision’ is what happens when an automated system of function is constructed in a sufficiently complex way that an algorithm can always ‘decide’ a binary, yes-or-no answer to a mathematical problem, when given an arbitrary input, in a sufficient amount of time. It does not require ingenuity, intuition or heuristic gambles, just a combination of simple consistent formal rules and a careful avoidance of contradiction.

The two key words there are ‘always’ and ‘decide’: the progressive end-game of twentieth-century mathematicians who, like Hilbert, sought a simple totalising conceptual system that could decide every mathematical problem and so work towards absolute knowledge. All Turing had to do was make explicit Hilbert’s implicit computational treatment of formal rules: manipulate symbol strings and automate them using an ‘effective’ or “systematic method” (Turing, Solvable and Unsolvable Problems 584) encoded into a machine. This is what Turing’s thesis meant (discovered independently of Alonzo Church’s equivalent thesis (Church)): anything solvable by a systematic mathematical procedure can be computed by a Turing machine (Turing, An Application), or in Robin Gandy’s words, “[e]very effectively calculable function is a computable function” (Gandy).

Thus effective procedures decide problems, and they resolve puzzles, providing winning positions (like theorems) in the game of functional rules and formal symbols. In Turing’s words, “a systematic procedure is just a puzzle in which there is never more than one possible move in any of the positions which arise and in which some significance is attached to the final result” (Turing, Solvable and Unsolvable Problems 590). The significance, or the winning position, becomes the crux of the decision: what puzzles or problems are to be decided? This is what formalism attempted to do: encode everything through the regime of formalised efficiency, so that all of mathematics’ unruly problems would be, in principle, ready to be solved. Programs are simply proofs: if it could be demonstrated mathematically, it could be automated.
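For domains where Hilbert’s dream does hold, a decision procedure can be sketched directly. Propositional logic is decidable by truth-table enumeration: the function below always halts with a yes-or-no answer, by brute consistency rather than ingenuity (encoding formulas as Python functions is an illustrative convenience, not part of the original formalism):

```python
from itertools import product

def is_tautology(formula, variables):
    """A decision procedure in Hilbert's sense: exhaustively check every
    truth assignment, and always terminate with a yes-or-no verdict."""
    return all(formula(*values)
               for values in product([False, True], repeat=len(variables)))

# Law of the excluded middle: always decided 'yes'.
assert is_tautology(lambda p: p or not p, ['p'])
# A contingent formula: decided 'no', since some assignment falsifies it.
assert not is_tautology(lambda p, q: p or q, ['p', 'q'])
```

The procedure is mechanical and total: every formula yields a winning position or a refutation. The Entscheidungsproblem asked whether such a procedure could exist for all of mathematics.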

In 1936, Turing had shown that certain complex effective procedures (such as the Universal Turing Machine) could simulate the functional decisions of all the other effective procedures. Ten years later, Turing and John von Neumann would independently show how physical general-purpose computers offered the same thing, and from that moment on efficient digital decisions manifested themselves in the cultural application of physical materials. Before Shannon’s information theory offered the precision of transmitting information, Hilbert and Turing developed the structure of its transmission in the underlying regime of formal decision.
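The machine itself can be sketched in a few lines. The following toy simulator is my own minimal encoding (a transition table mapping (state, symbol) to (write, move, next state), with 'H' as the halting state), not Turing’s original notation:

```python
def run_tm(transitions, tape, state='q0', budget=10_000):
    """A minimal Turing machine: a dict-backed, implicitly blank ('_')
    two-way infinite tape, driven by a finite transition table."""
    tape, head = dict(enumerate(tape)), 0
    for _ in range(budget):
        if state == 'H':               # halting state reached
            break
        write, move, state = transitions[(state, tape.get(head, '_'))]
        tape[head] = write
        head += {'R': 1, 'L': -1, 'S': 0}[move]
    return ''.join(tape[i] for i in sorted(tape)).strip('_')

# A machine that inverts a binary string, then halts at the first blank:
flip = {('q0', '0'): ('1', 'R', 'q0'),
        ('q0', '1'): ('0', 'R', 'q0'),
        ('q0', '_'): ('_', 'S', 'H')}
assert run_tm(flip, '1011') == '0100'
```

A universal machine is then ‘just’ such a table whose tape holds another machine’s table: one effective procedure simulating all the others.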

Yet there was also a non-computational importance here, for Turing was equally fascinated by what decisions couldn’t compute. His thesis was quite precise on this point: if a mathematical problem could not be proved, a computer was of no use. In fact, the entire focus of his 1936 paper, often neglected by Silicon Valley cohorts, was to show that Hilbert’s particular decision problem could not be solved. Unlike Hilbert, Turing was not interested in using computation to solve every problem, but pursued it as a curious endeavour for surprising, intuitive behaviour. Most important of all, Turing’s halting, or printing, problem was influential precisely because it was undecidable: a decision problem which couldn’t be decided.

We can all picture the halting problem, even obliquely. Picture the frustrated programmer or mathematician staring at their screen, waiting to know whether an algorithm will halt and spit out a result, or provide no answer. The computer itself has already determined the answer for us; the programmer just has to know when to give up. But this is a myth, inherited with a bias towards human knowledge, and a demented understanding of machines as infinite calculating engines rather than concrete entities of decision. For reasons of space, suffice it to say that Turing didn’t understand the halting problem in this way: instead he understood it as a contradictory example of computational decisions failing to decide on each other, on the grounds that there could never be one totalising decision or effective procedure. There is no guaranteed effective procedure to decide on all the others, and any attempt to build one (or to invest in a view which might help build one) either places too much investment in absolute formal reason, or ends up with ineffective procedures.
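The programmer’s predicament can be put in code: halting can only be semi-decided, confirmed when it happens but never ruled out in general, so any real checker must carry an arbitrary budget for giving up. A minimal sketch (the step-function encoding and both examples are my own illustrations):

```python
def run_with_budget(step_fn, state, budget):
    """Semi-decide halting: repeatedly apply step_fn (which returns the
    next state, or None to signal halting) for at most `budget` steps.
    'halted' is definitive; 'unknown' never rules out halting later."""
    for n in range(budget):
        state = step_fn(state)
        if state is None:
            return ('halted', n + 1)
    return ('unknown', budget)

# A Collatz step: starting from 27 the trajectory wanders but does halt.
collatz = lambda n: None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)
assert run_with_budget(collatz, 27, 1000)[0] == 'halted'
# An obvious loop: the budget runs out and we simply have to give up.
assert run_with_budget(lambda n: n, 5, 1000) == ('unknown', 1000)
```

No choice of budget turns ‘unknown’ into a verdict; that asymmetry is the undecidability Turing proved, made tactile.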

Undecidable computation might be viewed as a dystopian counterpart to the efficiency of Shannon’s ‘digital information’ theory. A base-2 binary system represents information as one of two possible states, whereby a system can communicate with one digit only in virtue of the fact that there is one other digit alternative to it. Yet the perfect transmission of that information depends on a system which can ‘decide’ on the digits in question and establish a proof to calculate a success rate. If there is no mathematical proof to decide a problem, then transmitting information becomes problematic for establishing a solution.

3.

What has become clear is that our world is no longer accountable to human decision alone. ‘Culture’ is no longer simply guided by a collective whole of social human decisions, nor is it reducible to one harmonious ‘natural’ collective decision which prompts and pre-empts everything else. Instead we seem to exist in an ecology of decisions: or better yet, decisional ecologies. Before there was ever the networked protocol (Galloway), there was the computational decision. Decision ecologies are already set up before we enter the world, implicitly coterminous with our lives: explicitly determining a quantified or bureaucratic landscape upon which an individual has limited manoeuvrability.

Decisions are not just digital; they can be as continuous as computers can be, yet they are at their most efficient when digitally transferred. Decisions are everywhere and in everything. Look around. We are constantly told by governments and states that they are making tough decisions in the face of austerity. CEOs and directors make tough decisions for the future of their companies, and ‘great’ leaders are revered for being ‘great decisive leaders’: not just making decisions quickly and effectively, but also settling issues and producing definite results.

Even the word ‘decide’ comes from the Latin ‘decidere’, meaning to determine something and ‘to cut off’. Algorithms in financial trading know not of value, but of decision: whether something is marked by profit or loss. Drones know not of human ambiguity, but can only decide between kill and ignore, cutting off anything in-between. Constructing a system which decides between one of two digital values, even repeatedly, means cutting off and excluding all other possible variables, leaving a final result at the end of the encoded message. Making a decision, or building a system to decide a particular ideal or judgement, must force other alternatives outside of it. Decisions are always-already embedded into the framework of digital action, always-already deciding what is to be done, how it can be done, or what is threatening to be done. It would make little sense to suggest that these entities ‘make decisions’ or ‘have decisions’; it would be better to say that they are decisions, and that ecologies are constitutively constructed by them.

The importance of neo-liberal digital transmissions is not that they are innovative, or worthy of a zeitgeist break, but that they demonstrably decide problems whose predominant significance is beneficial for self-individual efficiency and the accumulation of capital. Digital efficiency is simply about the expansion of automated decisions, and about what sorts of formalised significances must be propagated to solve social and economic problems, which creates new problems in a vicious circle.

The question can no longer simply be ‘who decides?’, but now, ‘what decides?’ Is it the café menu board, the dinner-party etiquette, the NASDAQ share price, Google PageRank, railway network delays, unmanned combat drones, the newspaper crossword, the JavaScript regular expression or the differential calculus? It’s not quite right to say that algorithms rule the world, whether in algo-trading or in data capture; the uncomfortable realisation is that real entities are built to determine provable outcomes time and time again: most notably ones for accumulating profit and extracting revenue from multiple resources.

One pertinent example: consider George Dantzig’s simplex algorithm. This effective procedure (whose origins lie in multidimensional geometry) can always decide solutions for the large-scale optimisation problems which continually affect multi-national corporations. The simplex algorithm’s proliferation and effectiveness have been critical since its first commercial application in 1952, when Abraham Charnes and William Cooper used it to decide how best to optimally blend four different petroleum products at the Gulf Oil Company (Elwes 35; Gass & Assad 79). Since then the simplex algorithm has seen years of successful commercial use, deciding almost everything from bus timetables and work-shift patterns to trade shares and Amazon warehouse configurations. According to the optimisation specialist Jacek Gondzio, the simplex algorithm runs at “tens, probably hundreds of thousands of calls every minute” (35), always deciding the most efficient method of extracting optimisation.
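The flavour of problem the simplex algorithm decides can be sketched with a toy blending example. The numbers below are invented, not the Gulf Oil data, and the brute-force vertex enumeration merely stands in for the simplex method, which walks the same vertices far more cleverly; in practice one would call a solver such as scipy.optimize.linprog:

```python
from itertools import combinations

# Toy blending problem: maximise profit 3x + 2y subject to
#   x + y <= 10  (blending capacity),  x <= 6,  y <= 8  (component stock),
#   x, y >= 0.  Constraints encoded as (a, b, c) meaning a*x + b*y <= c.
constraints = [(1, 1, 10), (1, 0, 6), (0, 1, 8), (-1, 0, 0), (0, -1, 0)]

def vertices(cons):
    """Intersect constraint boundaries pairwise (Cramer's rule) and keep
    the feasible points: the optimum of a linear programme sits at one."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:          # parallel boundaries never intersect
            continue
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

best = max(vertices(constraints), key=lambda p: 3 * p[0] + 2 * p[1])
# The procedure decides the blend x=6, y=4, for a profit of 26.
```

The decision is total and repeatable: for any such problem the procedure always returns the single most profitable blend, cutting off every alternative.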

In contemporary times, nearly all decision ecologies work in this way, accompanying and facilitating neo-liberal methods of self-regulation and processing all resources through a standardised efficiency: from bureaucratic methods of formal standardisation and banal forms ready to be analysed by one central system, to big-data initiatives and simple procedural methods of measurement and calculation. The technique of decision is a propagative method of embedding knowledge, optimisation and standardisation techniques in order to solve problems, together with an urge to solve the most unsolvable ones, including us.

Google do not build into their services an option to pay for the privilege of protecting privacy: the entire point of providing a free service which purports to improve daily life is that it primarily benefits the interests of shareholders and extends commercial agendas. James Grimmelmann has given a heavily detailed exposition of Google’s own ‘net neutrality’ algorithms and how biased they happen to be. In short, PageRank does not simply decide relevant results; it decides visitor numbers. He concluded on this note:

With disturbing frequency, though, websites are not users’ friends. Sometimes they are, but often, the websites want visitors, and will be willing to do what it takes to grab them (Grimmelmann 458).
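The decision at PageRank’s core can be sketched as a power iteration over a toy link graph. The four-page web and the damping factor of 0.85 below are illustrative assumptions; the production system is vastly more elaborate:

```python
# A minimal PageRank sketch: each page repeatedly shares its rank among
# the pages it links to, damped towards a uniform baseline.
links = {'A': ['B', 'C'], 'B': ['C'], 'C': ['A'], 'D': ['C']}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}
d = 0.85  # damping factor

for _ in range(50):                      # iterate to a fixed point
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += d * rank[p] / len(outs)
    rank = new

# 'C' is decided to be most relevant: every path ultimately points to it.
```

Whatever a page ‘wants’, the procedure simply decides: the rank vector converges to a fixed point, and with it the ordering of visitors.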

If the post-digital stands for the self-criticality of digitalisation already underpinning contemporary regimes of digital consumption and production, then its saliency lies in understanding the logic of decision inherent to such regimes. The reality of the post-digital shows that machines remain curiously efficient whether we relish our cynicism or not. Such regimes of standardisation and determined results were already ‘mistakenly built in’ to the theories which developed digital methods and means, irrespective of what computers can or cannot compute.

4.

Why then should such post-digital actors be understood as instantiations of propaganda? The familiarity of propaganda is manifestly evident in religious and political acts of ideological persuasion: brainwashing, war activity, political spin, mind-control techniques, subliminal messages, political campaigns, cartoons, belief indoctrination, media bias, advertising and news reports. A definition of propaganda might follow from all of these examples: namely, the systematic social indoctrination of biased information that persuades the masses to take action on something which is neither beneficial to them nor in their best interests. As Peter Kenez writes, propaganda is “the attempt to transmit social and political values in the hope of affecting people’s thinking, emotions, and thereby behaviour” (Kenez 4). Following Stanley B. Cunningham’s watered-down definition, propaganda might also denote a helpful and pragmatic “shorthand statement about the quality of information transmitted and received in the twentieth century” (Cunningham 3).

But propaganda isn’t as clear-cut as this general definition makes out: in fact, what makes propaganda studies such a provoking topic is that nearly every scholar agrees no stable definition exists. Propaganda moves beyond simple ‘manipulation’ and ‘lies’, or the derogatory, jingoistic representation of an unsubtle mood – propaganda is as much about the paradox of constructing truth and the irrational spread of emotional pleas as it is about endorsing rational reason. As the master propagandist William J. Daugherty wrote:

It is a complete delusion to think of the brilliant propagandist as being a professional liar. The brilliant propagandist […] tells the truth, or that selection of the truth which is requisite for his purpose, and tells it in such a way that the recipient does not think that he is receiving any propaganda…. (Daugherty 39).

Propaganda, like ideology, works by being inherently implicit and social. In the same way that post-ideology apologists ignore their symptom, propaganda too is ignored. It isn’t to be taken as a shadowy fringe activity, blown apart by the democratising fairy-dust of ‘the Internet’. As many others have noted, the purported ‘decentralising’ power of online networks offers new methods for propagative techniques, or ‘spinternet’ strategies, evident in China (Brady). Iran’s recent investment in video-game technology makes sense only when you discover that 70% of Iran’s population is under 30 years of age, underscoring a suitably contemporary method of dissemination. Similarly, in 2011 the New York City video-game developer Kuma Games was mired in controversy when it was discovered that an alleged CIA agent, Amir Mirza Hekmati, had been recruited to make an episodic video-game series intending to “change the public opinion’s mindset in the Middle East” (Tehran Times). The game in question, Kuma\War (2006 – 2011), was a free-to-play first-person-shooter series, delivered in episodic chunks, whose format attempted to simulate biased re-enactments of real-life conflicts shortly after they reached public consciousness.

Despite his unremarkable leanings towards Christian realism, Jacques Ellul famously updated propaganda’s definition as the end product of what he had previously lamented as ‘technique’. Instead of viewing propaganda as a highly organised systematic strategy for extending the ideologies of peaceful warfare, he understood it as a general social phenomenon in contemporary society.

Ellul outlined two types: political and sociological propaganda. Political propaganda involves governmental, administrative techniques which intend to directly change the political beliefs of an intended audience. By contrast, sociological propaganda is the implicit unification of involuntary public behaviour which creates images, aesthetics, problems and stereotypes, the purposes of which are neither explicitly direct nor overtly militaristic. Ellul argues that sociological propaganda exists “in advertising, in the movies (commercial and non-political films), in technology in general, in education, in the Reader’s Digest; and in social service, case work, and settlement houses” (Ellul 64). It is linked to what Ellul called “pre” or “sub-propaganda”: that is, an imperceptible persuasion, silently operating within one’s “style of life” or permissible attitude (63). Faintly anticipating Louis Althusser’s Ideological State Apparatuses (Althusser 182) by nearly ten years, Ellul defines it as “the penetration of an ideology by means of its sociological context” (63). Sociological propaganda is inadequate for decisive action, paving the way for political propaganda – its strengthened, explicit cousin – once the former’s implicitness needs to be transformed into the latter’s explicitness.

In a post-digital world, such implicitness no longer gathers wartime spirits, but instead propagates a neo-liberal way of life that is individualistic, wealth-driven and opinionated. Ellul’s most powerful assertion is that ‘facts’ and ‘education’ are part and parcel of the sociological propagative effect: nearly everyone feels a compelling need to be opinionated, as if we were all capable of judging for ourselves what decisions should be made, without first considering the implicit landscape from which those judgements take place. One can only think of the implicit digital landscape of Twitter: the archetype for self-promotion and snippets of opinions and arguments – all taking place within Ellul’s sub-propaganda of data collection and concealment. Such methods, he warns, will have “solved the problem of man” (xviii).

But information is of relevance here, and propaganda is only effective within a social community when it offers the means to solve problems using the communicative purview of information:

Thus, information not only provides the basis for propaganda but gives propaganda the means to operate; for information actually generates the problems that propaganda exploits and for which it pretends to offer solutions. In fact, no propaganda can work until the moment when a set of facts has become a problem in the eyes of those who constitute public opinion (114).

]]>
Wed, 11 Dec 2013 15:42:45 -0800 http://post-digital.projects.cavi.dk/?p=475
<![CDATA[Digital Metaphors: Editor’s Introduction | Alluvium]]> http://www.alluvium-journal.org/2013/12/04/digital-metaphors-editors-introduction/

Metaphor wants to be…

‘[...] metaphors work to change people’s minds. Orators have known this since Demosthenes. [...] But there’s precious little evidence that they tell you what people think. [...] And in any case, words aren’t meanings. As any really good spy knows, a word is a code that stands for something else. If you take the code at face value then you’ve fallen for the trick.’ (Daniel Soar, “The Bourne Analogy”).

Tao Lin’s recent novel Taipei (2013) is a fictional document of life in our current digital culture. The protagonist, Paul — who is loosely based on the author — is numb from his always-turned-on, digitally mediated life, and throughout the novel increases his recreational drug taking as a kind of compensation: the chemical highs and trips are the experiential counterpoint to the mundanity of what once seemed otherworldly — his online encounters. In the novel online interactions are not distinguished from real-life ones; they are all real, and so Paul’s digital malaise is also his embodied depressive mindset. The apotheosis of both these highs and lows is experienced by Paul, and his then girlfriend Erin, on a trip to visit Paul’s parents in Taipei. There the hyper-digital displays of the city — ‘lighted signs [...] animated and repeating like GIF files, attached to every building’ (166) — launch some of the more explicit meditations on digital culture in the novel:

Paul asked [Erin] if she could think of a newer word for “computer” than “computer,” which seemed outdated and, in still being used, suspicious in some way, like maybe the word itself was intelligent and had manipulated culture in its favor, perpetuating its usage (167).

Here Paul intimates a sense that language is elusive, that it is sentient, and that, in the words of Daniel Soar quoted above as an epigraph, it tricks us. It seems to matter that in this extract from Taipei the word ‘computer’ is conflated with a sense of the object ‘computer’. The word, in being ‘intelligent’, has somehow taken on the quality of the thing it denotes — a potentially malevolent agency. The history of computing is one of people and things: computers were first the women who calculated ballistics trajectories during the Second World War, whose actions became the template for modern automated programming. The computer, as an object, is also-always a metaphor of a human-machine relation.
The name for the machine asserts a likeness between the automated mechanisms of computing and the physical and mental labour of the first human ‘computers’. Thinking of computing as a substantiated metaphor for a human-machine interaction pervades the way we talk about digital culture, most particularly in the way we think of computers as sentient — however casually. We often speak of computers as acting independently of our commands, and frequently we think of them ‘wanting’ things, or ‘manipulating’ culture, or ourselves.

Pre-electronic binary code: the history of computing offers us metaphors for human-machine interaction which pervade the way we talk about digital culture today [Image by Erik Wilde under a CC BY-SA license]

Julie E. Cohen, in her 2012 book Configuring the Networked Self, describes the way the misplaced metaphor of human-computer and machine-computer has permeated utopian views of digitally mediated life:

Advocates of information-as-freedom initially envisioned the Internet as a seamless and fundamentally democratic web of information [...]. That vision is encapsulated in Stewart Brand’s memorable aphorism “Information wants to be free.” [...] Information “wants” to be free in the same sense that objects with mass present in the earth’s gravitational field “want” to fall to the ground. (8)

Cohen’s sharp undercutting of Brand’s aphorism points us toward the way the metaphor of computing is also an anthropomorphisation. The metaphor implicates a human desire in machine action. This linguistic slipperiness filters through discussion of computing at all levels. In particular the field of software studies — concerned with theorising code and programming as praxis and thing — contains at its core a debate on the complexity of considering code in a language which will always metaphorise, or allegorise. Responding to an article of Alexander R. Galloway’s titled “Language Wants to Be Overlooked: On Software and Ideology”, Wendy Hui Kyong Chun argues that Galloway’s stance against a kind of ‘anthropomorphization’ of code studies (his assertion that as an executable language code is ‘against interpretation’) is impossible within a discourse of critical theory. Chun argues, ‘to what extent, however, can source code be understood outside of anthropomorphization? [...] (The inevitability of this anthropomorphization is arguably evident in the title of Galloway’s article: “Language Wants to Be Overlooked” [emphasis added].)’ (Chun 305). In her critique of Galloway’s approach Wendy Chun asserts that it is not possible to extract the metaphor from the material, that they are importantly and intrinsically linked.[1] For Julie E.
Cohen, the relationship between metaphor and digital culture-as-it-is-lived is a problematic tie that potentially damages legal and constitutional understanding of user rights. Cohen convincingly argues that a term such as ‘cyberspace’, which remains inextricable from its fictional and virtual connotations, does not transition into legal language successfully; in part because the word itself is a metaphor, premised on an imagined reality rather than ‘the situated, embodied beings who inhabit it’ (Cohen 3). And yet Cohen’s writing itself demonstrates the tenacious substance of metaphoric language, using extended exposition of metaphors as a means to think more materially about the effects of legal and digital protocol and action. In the following extract from Configuring the Networked Self, Cohen is winding down a discussion of the difficulty of forming actual policy out of freedom-versus-control debates surrounding digital culture. Throughout the discussion Cohen has emphasised the way that both sides of the debate are unable to substantiate their rhetoric with embodied user practice; instead Cohen identifies a language that defers specific policy aims.[2] Cohen’s own use of metaphor in this section — ‘objections to control fuel calls [...]’, ‘darknets’ (the latter in inverted commas) — is made to mean something grounded, through a kind of allegorical framework. I am not suggesting that allegory materialises metaphor — allegory functioning in part as itself an extended metaphor — but it does contextualise metaphor.

Circuit Board 2 How tenacious is metaphoric language? The persistence of computational metaphors in understanding digital culture could harm legal and constitutional understandings of user rights [Image by Christian under a CC BY-NC-ND license]

This is exemplified in Cohen’s description of the ways US policy discussions regarding code, rights and privacy of the subject are bound to a kind of imaginary, and demonstrate great difficulty in becoming concrete: Policy debates have a circular, self-referential quality. Allegations of lawlessness bolster the perceived need for control, and objections to control fuel calls for increased openness. That is no accident; rigidity and license historically have maintained a curious symbiosis. In the 1920s, Prohibition fueled the rise of Al Capone; today, privately deputized copyright cops and draconian technical protection systems spur the emergence of uncontrolled “darknets.” In science fiction, technocratic, rule-bound civilizations spawn “edge cities” marked by their comparative heterogeneity and near imperviousness to externally imposed authority. These cities are patterned on the favelas and shantytowns that both sap and sustain the world’s emerging megacities. The pattern suggests an implicit acknowledgment that each half of the freedom/control binary contains and requires the other (9-10).

I quote this passage at length in order to get at the way in which the ‘self-referential quality’ of policy discussion is here explained through a conceptual, and specifically literary, framing. Technology is always both imagined and built: this seems obvious, but it justifies reiteration because the material operations of technology are always metaphorically considered just as they are concretely manifest. The perilous circumstance this creates is played on in Cohen’s writing as she critiques constitutional policy that repeatedly cannot get at the embodied subject that uses digital technology, thwarted by the writing and rewriting of debate. In Cohen’s words, this real situation is like the science fiction that is always-already seemingly like the real technology.
Whether in William Gibson’s ‘cyberspace’, a programmer’s speculative coding, or a lawyer’s articulation of copyright, there is no easy way to break apart the relationship between the imaginary and the actual of technoculture. Perhaps then what is called for is an explosion of the metaphors that pervade contemporary digital culture. To, so to speak, push metaphors until they give way; to generate critical discourse that tests the limits of metaphors, in an effort to see what pretext they may yield for our daily digital interactions.

The articles in this issue all engage with exactly this kind of discourse. In Sophie Jones’ “The Electronic Heart”, the history of computing as one of women’s labour is used to reconfigure the metaphor of a computer as an ‘electronic brain’; instead asking whether cultural anxieties about computer-simulated emotion are linked to the naturalization of women’s affective labour. In “An Ontology of Everything on the Face of the Earth”, Daniel Rourke also considers computers as a sentient metaphor: uncovering an uncanny symbiosis between what a computer wants and what a human can effect with computing, through a critical dissection of the biocybernetic leeching of John Carpenter’s 1982 film The Thing. Finally, in “The Metaphorics of Virtual Depth”, Rob Gallagher uses Marcel Proust’s treatment of novelistic spacetime to generate a critical discourse on spatial and perspectival metaphor in virtual game environments. All these articles put into play an academic approach to metaphors of computing that digs up and pulls out the stuff in between language and machine. In his introduction to Understanding Digital Humanities, David M. Berry has argued for such an approach: [what is needed is a] ‘critical understanding of the literature of the digital, and through that [to] develop a shared culture through a form of Bildung’ (8).

Elysium A wheel in the sky: Neill Blomkamp's futuristic L.A. plays on the territorial paranoia of the U.S. over alien invasion and dystopian metaphors of digitally-mediated environments [Image used under fair dealings provisions]

I am writing this article a day after seeing Neill Blomkamp’s film Elysium (2013). Reading Cohen’s assertion regarding the cyclical nature of US digital rights policy debates on control and freedom, her allegory with science fiction seems entirely pertinent. Elysium is set in 2154; the earth is overpopulated, under-resourced, and a global elite have escaped to a man-made (and machine-made) world on a spaceship, ‘Elysium’. Manufacturing for Elysium continues on earth where the population, ravaged by illness, dreams of escaping to Elysium to be cured in “Med-Pods”. The movie focuses on the slums of near-future L.A. and — perhaps unsurprisingly given Blomkamp’s last film District 9 (2009) — plays on the real territorial paranoia of the U.S. over alien invasion: that the favelas of Central and South America, and the political structures they embody, are always threatening ascension. In Elysium the “edge city” is the whole world, and the technocratic power base is a spaceship garden, circling in earth’s orbit. ‘Elysium’ is a green and white paradise; a techno-civic environment in which humans and nature are equally managed, and manicured. ‘Elysium’, visually, looks a lot like Disney’s Epcot theme park — which brings me back to where I started. In Tao Lin’s Taipei, Paul’s disillusionment with technology is in part with its failure to be as he imagined, and his imagination was informed by the Disney-fied future of Epcot. In Taipei: Paul stared at the lighted signs, some of which were animated and repeated like GIF files, attached to almost every building to face oncoming traffic [...] and sleepily thought how technology was no longer the source of wonderment and possibility it had been when, for example, he learned as a child at Epcot Center [...] that families of three, with one or two robot dogs and one maid, would live in self-sustaining, underwater, glass spheres by something like 2004 or 2008 (166).
Thinking through the metaphor of Elysium has me thinking toward the fiction of Epcot (via Tao Lin’s book). The metaphor-cum-allegories at work here are at a remove from my digitally mediated, embodied reality, but they seep through nonetheless. Rather than only look for the concrete reality that drives the metaphor, why not also engage with the messiness of the metaphor; its potential disjunction with technology as it is lived, and its persistent presence regardless.

CITATION: Zara Dinnen, "Digital Metaphors: Editor's Introduction," Alluvium, Vol. 2, No. 6 (2013): n. pag. Web. 4 December 2013, http://dx.doi.org/10.7766/alluvium.v2.6.04

]]>
Wed, 11 Dec 2013 15:42:41 -0800 http://www.alluvium-journal.org/2013/12/04/digital-metaphors-editors-introduction/
<![CDATA[Noise; Mutation; Autonomy: A Mark on Crusoe’s Island]]> http://machinemachine.net/text/research/a-mark-on-crusoes-island

This mini-paper was given at the Escapologies symposium, at Goldsmiths University, on the 5th of December. Daniel Defoe’s 1719 novel Robinson Crusoe centres on the shipwreck and isolation of its protagonist. The life Crusoe knew beyond this shore was fashioned by Ships sent to conquer New Worlds and political wills built on slavery and imperial demands. In writing about his experiences, Crusoe orders his journal, not by the passing of time, but by the objects produced in his labour. A microcosm of the market hierarchies his seclusion removes him from: a tame herd of goats, a musket and gunpowder, sheaves of wheat he fashions into bread, and a shelter carved from rock with all the trappings of a King’s castle. Crusoe structures the tedium of the island by gathering and designing these items that exist solely for their use-value: “In a Word, The Nature and Experience of Things dictated to me upon just Reflection, That all the good Things of this World, are no farther good to us, than they are for our Use…” [1] Although Crusoe’s Kingdom mirrors the imperial British order, its mirroring is more structural than anything else. The objects and social contrivances Crusoe creates have no outside with which to be exchanged. Without an ‘other’ to share your labour there can be no mutual assurance, no exchanges leading to financial agreements, no business partners, no friendships. But most importantly to the mirroring of any Kingdom, without an ‘other’ there can be no disagreements, no coveting of a neighbour’s ox, no domination, no war: in short, an Empire without an outside might be complete, total, final, but an Empire without an outside has also reached a state of complete inertia. Crusoe’s Empire of one subject is what I understand as “a closed system”… The 2nd law of thermodynamics maintains that without an external source of energy, all closed systems will tend towards a condition of inactivity.
Eventually, the bacteria in the petri dish will multiply, eating up all the nutrients until a final state of equilibrium is reached, at which point the system will collapse in on itself: entropy cannot be avoided indefinitely. The term ‘negative entropy’ is often applied to living organisms because they seem to be able to ‘beat’ the process of entropy, but this is as much an illusion as the illusion of Crusoe’s Kingdom: negative entropy occurs at small scales, over small periods of time. Entropy is highly probable: the order of living beings is not. Umberto Eco: “Consider, for example, the chaotic effect… of a strong wind on the innumerable grains of sand that compose a beach: amid this confusion, the action of a human foot on the surface of the beach constitutes a complex interaction of events that leads to the statistically very improbable configuration of a footprint.” [2] The footprint in Eco’s example is a negative entropy event: the system of shifting sands is lent a temporary order by the cohesive action of the human foot. In physical terms, the footprint stands as a memory of the foot’s impression. The 2nd law of thermodynamics establishes a relationship between entropy and information: memory remains as long as its mark. Given time, the noisy wind and chaotic waves will cause even the strongest footprint to fade. A footprint is a highly improbable event. Before you read on, watch this scene from Luis Buñuel’s Robinson Crusoe (1954):

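Eco's beach can be glossed in Shannon's terms, where the entropy of a distribution measures its disorder: a minimal Python sketch, with invented toy distributions standing in for the smooth beach and the footprint.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A wind-churned beach: all eight sand configurations equally likely.
uniform = [1 / 8] * 8
# A footprint: one ordered configuration dominates the distribution.
footprint = [0.93] + [0.01] * 7

print(shannon_entropy(uniform))    # 3.0 bits: maximum disorder
print(shannon_entropy(footprint))  # well under 1 bit: improbable order
```

The ordered, improbable configuration carries low entropy; given time, noise drives the distribution back toward the uniform maximum, just as wind and waves erase the print.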
The footprint, when it first appears on the island, terrifies Crusoe as a mark of the outsider, but soon, realising what this outsider might mean for the totality of his Kingdom, Robinson begins the process of pulling the mark inside his conceptions: “Sometimes I fancied it must be the Devil; and reason joined in with me upon this supposition. For how should any other thing in human shape come into the place? Where was the vessel that brought them? What marks were there of any other footsteps? And how was it possible a man should come there?” [3] In the novel, it is only on the third day that Crusoe re-visits the site to compare his own foot with the print. The footprint is still there on the beach after all this time, a footprint Crusoe now admits is definitely not his own. This chain of events affords us several allegorical tools: firstly, that of the Devil, which Crusoe believes to be the only rational explanation for the print. This land, which has been Crusoe’s own for almost two decades, is solid, unchanging and eternal. Nothing comes in nor goes beyond its shores, yet its abundance of riches has served Crusoe perfectly well: seemingly infinite riches for a Kingdom’s only inhabitant. Even the footprint, left for several days, remains upon Crusoe’s return. As with the novel of which it is a part, the reader of the mark may revisit the site of this unlikely incident again and again, each time drawing more meanings from its appearance. Before Crusoe entertains that the footprint might be that of “savages of the mainland” he eagerly believes it to be Satan’s, placed there deliberately to fool him. Crusoe revisits the footprint, in person and then, as it fades, in his own memory. He ‘reads’ the island, attributing meanings to marks he discovers that go far beyond what is apparent.
As Susan Stewart has noted: “In allegory the vision of the reader is larger than the vision of the text; the reader dreams to an excess, to an overabundance.” [4] Simon O’Sullivan, following from Deleuze, takes this further, arguing that in his isolation, a world free from ‘others’, Crusoe has merged with, become the island. The footprint is a mark that must be recuperated if Crusoe’s identity, his “power of will”, is to be maintained. An outsider must have caused the footprint, but Crusoe is only capable of reading in the mark something about himself. The evocation of a Demon, then, is Crusoe’s way of re-totalising his Empire, of removing the ‘other’ from his self-subjective identification with the island. So, how does this relate to thermodynamics? To answer that I will need to tell the tale of a second Demon, more playful even than Crusoe’s. In his 1871 book, Theory of Heat, James Clerk Maxwell designed a thought experiment to test the 2nd law of thermodynamics. Maxwell imagines a microscopic being able to sort atoms bouncing around a closed system into two categories: fast and slow. If such a creature did exist, it was argued, no work would be required to decrease the entropy of a closed system. By sorting unlikely footprints from the chaotic arrangement of sand particles Maxwell’s Demon, as it would later become known, appeared to contradict the law Maxwell himself had helped to develop. One method of solving the apparent paradox was devised by Charles H. Bennett, who recognised that the Demon would have to remember where he placed the fast and slow particles. Here, once again, the balance between the order and disorder of a system comes down to the balance between memory and information. As the demon decreases the entropy of its environment, so it must increase the entropy of its memory. The information required by the Demon acts like a noise in the system.
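Bennett's resolution can be caricatured in a few lines of Python. This is a toy sorting loop rather than a physical simulation: every particle the demon sorts deposits one bit in its memory, so the order it imposes on the gas reappears as a record it must eventually erase.

```python
import random

rng = random.Random(42)
speeds = [rng.random() for _ in range(1000)]  # particle speeds in the box

fast, slow, memory = [], [], []
for v in speeds:
    decision = v >= 0.5         # the demon measures each particle...
    memory.append(decision)     # ...and must record the outcome: one bit
    (fast if decision else slow).append(v)

# The gas is now sorted (its entropy has fallen), but the demon's memory
# has grown by exactly one bit per particle sorted.
print(len(memory))  # 1000 bits of bookkeeping
```

The entropy the demon removes from its environment is not destroyed; it migrates into the growing memory, which is Bennett's point.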
The laws of physics had stood up under scrutiny, resulting in a new branch of science we now know as ‘Information Theory’. Maxwell’s Demon comes from an old view of the universe, “fashioned by divine intervention, created for man and responsive to his will” [5]. Information Theory represents a threshold, a revelation that the “inhuman force of increasing entropy, [is] indifferent to man and uncontrollable by human will.” [6] Maxwell’s Demon shows that the law of entropy has only a statistical certainty, that nature orders only on small scales and, that despite any will to control, inertia will eventually be reached. Developed at the peak of the British Empire, thermodynamics was sometimes called “the science of imperialism”, as Katherine Hayles has noted: “…to thermodynamicists, entropy represented the tendency of the universe to run down, despite the best efforts of British rectitude to prevent it from doing so… The rhetoric of imperialism confronts the inevitability of failure. In this context, entropy represents an apparently inescapable limit on the human will to control.” [7] Like Maxwell, Crusoe posits a Demon, with faculties similar in kind to his own, to help him quash his “terror of mind”. Crusoe’s fear is not really about outsiders coming in, the terror he feels comes from the realisation that the outsiders may have been here all along, that in all the 20 years of his isolation those “savages of the mainland” may have visited his island time and again. It is not an outside ‘other’ that disturbs and reorganises Crusoe’s Kingdom. A more perverse logic is at work here, and once again Crusoe will have to restructure his imperial order from the inside out. Before you read on, watch another scene from Luis Buñuel’s Robinson Crusoe (1954):

Jacques Rancière prepares for us a parable. A student who is illiterate, after living a fulfilled life without text, one day decides to teach herself to read. Luckily she knows a single poem by heart and procures a copy of that poem, presumably from a trusted source, by which to work. By comparing her memory of the poem, sign by sign, word by word, with the text of the poem she can, Rancière believes, finally piece together a foundational understanding of her written language: “From this ignoramus, spelling out signs, to the scientist who constructs hypotheses, the same intelligence is always at work – an intelligence that translates signs into other signs and proceeds by comparisons and illustrations in order to communicate its intellectual adventures and understand what another intelligence is endeavouring to communicate to it… This poetic labour of translation is at the heart of all learning.” [8] What interests me in Rancière’s example is not so much the act of translation as the possibility of mis-translation. Taken in light of The Ignorant Schoolmaster we can assume that Rancière is aware of the wide gap that exists between knowing something and knowing enough about something for it to be valuable. How does one calculate the value of a mistake? The ignoramus has an autonomy, but she is effectively blind to the quality and make-up of the information she parses. If she makes a mistake in her translation of the poem, this mistake can be one of two things: it can be a blind error, or, it can be a mutation. In information theory, change within a closed system is understood as the product of ‘noise’. The amount of change contributed by noise is called ‘equivocation’. If noise contributes to the reorganisation of a system in a beneficial way, for instance if a genetic mutation in an organism results in the emergence of an adaptive trait, then the equivocation is said to be ‘autonomy-producing’.
Too much noise is equivalent to too much information, a ‘destructive’ equivocation, leading to chaos. This balance is how evolution functions. An ‘autonomy-producing’ mutation will be blindly passed on to an organism’s offspring, catalysing the self-organisation of the larger system (in this case, the species). All complex, so-called ‘autopoietic’, systems inhabit this fine divide between noise and inertia. Given just the right balance of noise recuperated by the system, and noise filtered out by the system, a state of productive change can be maintained, and a state of inertia can be avoided, at least, for a limited time. According to Umberto Eco, in ‘The Open Work’: “To be sure, this word information in communication theory relates not so much to what you do say, as to what you could say… In the end… there is no real difference between noise and signal, except in intent.” [9] This rigid delineator of intent is the driving force of our contemporary communication paradigm. Information networks underpin our economic, political and social interactions: the failure to communicate is to be avoided at all costs. All noise is therefore seen as a problem. These processes, according to W. Daniel Hillis, define “the essence of digital technology, which restores signal to near perfection at every stage.” [10] To go back to Umberto Eco then, we appear to be living in a world of “do say” rather than “could say”. Maintenance of the network and the routines of error management are our primary economic and political concern: control the networks and the immaterial products will manage themselves. The modern network paradigm acts like a Maxwell Demon, categorising information as either pure signal or pure noise.
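Hillis's point about restoring signal "at every stage" can be sketched as a toy relay chain in Python (the noise levels and hop counts here are invented for illustration): an analogue relay lets per-hop noise accumulate, while a digital repeater snaps every sample back to an ideal level at each stage, erasing the noise and, with it, anything the channel "could say".

```python
import random

def noisy_hop(signal, rng, noise=0.2):
    """Pass a signal through one noisy stage of a relay chain."""
    return [s + rng.uniform(-noise, noise) for s in signal]

def restore(signal):
    """Digital repeater: snap every sample back to an ideal level (0 or 1)."""
    return [1.0 if s >= 0.5 else 0.0 for s in signal]

rng = random.Random(1)
message = [1.0, 0.0, 1.0, 1.0, 0.0]

analog = message
digital = message
for _ in range(10):
    analog = noisy_hop(analog, rng)             # noise accumulates hop by hop
    digital = restore(noisy_hop(digital, rng))  # noise is erased at every stage

print(digital == message)  # True: the repeater restores the signal perfectly
```

Because each hop's noise stays below the decision threshold, the digital chain is lossless no matter how many stages it passes through; the analogue copy drifts further from the original with every hop.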
As Mark Nunes has noted, following the work of Deleuze and Guattari: “This forced binary imposes a kind of violence, one that demands a rationalisation of all singularities of expressions within a totalising system… The violence of information is, then, the violence of silencing or making to speak that which cannot communicate.” [11] To understand the violence of this binary logic, we need go no further than Robinson Crusoe. Friday’s questions are plain-spoken, but do not adhere to the “do say” logic of Crusoe’s conception. In the novel, Crusoe’s approach to Friday becomes increasingly one-sided, until Friday utters little more than ‘yes’ and ‘no’ answers, “reducing his language to a pure function of immediate context and perpetuating a much larger imperialist tradition of levelling the vox populi.”[12] Any chance of what Friday “could say” has been violently obliterated. The logic of Rancière’s Ignoramus and the logic of Crusoe’s levelling of Friday’s speech are logics of imperialism: reducing the possibility of noise and information to an either/or, inside/outside, relationship. Mark Nunes again: “This balance between total flow and total control parallels Deleuze and Guattari’s discussion of a regime of signs in which anything that resists systematic incorporation is cast out as an asignifying scapegoat “condemned as that which exceeds the signifying regime’s power of deterritorialisation.” [13] In the system of communication these “asignifying” events are not errors, in the common sense of the word. Mutation names a randomness that redraws the territory of complex systems. The footprint is the mark that reorganised the Empire. In Rancière’s parable, rather than note her intent to decode the poem, we should hail the moment when the Ignoramus fails as her autonomous moment.
In a world where actants “translate signs into other signs and proceed by comparison and illustration” [14] the figures of information and communication are made distinct not by the caprice of those who control the networks, nor the desires of those who send and receive the messages, but by mutation itself. Michel Foucault, remarking on the work of Georges Canguilhem, drew the conclusion that the very possibility of mutation, rather than existing in opposition to our will, was what human autonomy was predicated upon: “In this sense, life – and this is its radical feature – is that which is capable of error… Further, it must be questioned in regard to that singular but hereditary error which explains the fact that, with man, life has led to a living being that is never completely in the right place, that is destined to ‘err’ and to be ‘wrong’.” [15] In his writings on the history of heredity, The Logic of Life, François Jacob lingers on another Demon in the details, fashioned by René Descartes in his infamous meditation on human knowledge. Jacob positions Descartes’ meditation in a period of explosive critical thought focussed on the very ontology of ‘nature’: “For with the arrival of the 17th Century, the very nature of knowledge was transformed. Until then, knowledge had been grafted on God, the soul and the cosmos… What counted [now] was not so much the code used by God for creating nature as that sought by man for understanding it.” [16] The infinite power of God’s will was no longer able to bend nature to any whim. If man were to decipher nature, to reveal its order, Descartes surmised, it was with the assurance that “the grid will not change in the course of the operation”[17]. For Descartes, the evil Demon is a metaphor for deception premised on the understanding that, underlying that deception, nature had a certainty. God may well have given the world its original impetus, have designed its original make-up, but that make-up could not be changed.
The network economy has today become the grid of operations onto which we map the world. Its binary restrictions predicate a logic of minimal error and maximum performance: a regime of control that drives our economic, political and social interdependencies. Trapped within his imperial logic, Robinson Crusoe’s levelling of inside and outside, his ruthless tidying of Friday’s noisy speech into a binary dialectic, disguises a higher order of reorganisation. As readers navigating the narrative we are keen to recognise the social changes Defoe’s novel embodies in its short-sighted central character. Perhaps, though, the most productive way to read this fiction is to allegorise it as an outside perspective on our own time? Gathering together the fruits of research, I am often struck by the serendipitous quality of so many discoveries. In writing this mini-paper I have found it useful to engage with these marks, which become like demonic footprints, mutations in my thinking. Comparing each side by side, I hope to find, in the words of Michel Foucault: “…a way from the visible mark to that which is being said by it and which, without that mark, would lie like unspoken speech, dormant within things.” [18]

References & Bibliography [1] Daniel Defoe, Robinson Crusoe, Penguin classics (London: Penguin Books, 2001).

[2] Umberto Eco, The open work (Cambridge: Harvard University Press, 1989).

[3] Defoe, Robinson Crusoe.

[4] Susan Stewart, On longing: narratives of the miniature, the gigantic, the souvenir, the collection (Duke University Press, 1993).

[5] N. Katherine Hayles, “Maxwell’s Demon and Shannon’s Choice,” in Chaos bound: orderly disorder in contemporary literature and science (Cornell University Press, 1990).

[6] Ibid.

[7] Ibid.

[8] Jacques Rancière, The emancipated spectator (London: Verso, 2009).

[9] Umberto Eco, The open work (Cambridge: Harvard University Press, 1989). (My emphasis)

[10] W. Daniel Hillis, The pattern on the stone: the simple ideas that make computers work, 1st ed. (New York: Basic Books, 1999).

[11] Mark Nunes, Error: glitch, noise, and jam in new media cultures (Continuum International Publishing Group, 2010).

[12] Susan Stewart, On longing: narratives of the miniature, the gigantic, the souvenir, the collection (Duke University Press, 1993).

[13] Nunes, Error.

[14] Rancière, The emancipated spectator.

[15] Michel Foucault, “Life: Experience and Science,” in Aesthetics, method, and epistemology (The New Press, 1999).

[16] François Jacob, The logic of life: a history of heredity; The possible and the actual (Penguin, 1989).

[17] Ibid.

[18] Michel Foucault, The order of things: an archaeology of the human sciences, 2003.

]]>
Wed, 07 Dec 2011 08:50:14 -0800 http://machinemachine.net/text/research/a-mark-on-crusoes-island
<![CDATA[A Labyrinth (No Minotaur)]]> http://www.geiab.org/GEIAB_DEUX/index.php?lang=eng&revue=showit&rn=4&article_id=91#begin

My sprawling review of the Goldsmiths Art MFA Degree Show, 2011. Originally published by Groupe d’Etudes Interdisciplinaires en Arts Britanniques.

The labyrinth. Turning; coiling. An allegory of improbable human journeys. Physical; mental; spiritual. Beyond; behind; within. But underneath the mythos and symbolism labyrinths are simple structures. The maze is corners, mere corners. Unfurl them all and the labyrinth becomes a cul de sac; a doorless hallway; a vanishing point leading nowhere. Browsing an MFA final show can feel like an endless hall. No matter how many artworks you peruse, how many studio spaces you violate, how many £3 lukewarm beers on which you ruminate, there’s always another curtain asking you to draw it back. I don’t mean to begin this review on a downer; indeed, given a few more paragraphs I hope to have you cursing yourself for missing this year’s Goldsmiths Postgraduate Degree Show. What I do want to do is move you away from the grand figure, the thread of Ariadne convincing you with its singular lineage that degree shows tell you something about the institutions that house them. Goldsmiths’ reputation, were I to spend 1,000 words bullying and poking at it, might tell us more in fact about the figure of the labyrinth than it does about the artists who have scrawled its name all over their curriculum vitae.

Consonants and vowels featured highly in this year’s degree show; ‘Nada’ carved in giant, pink wooden lettering marked a studio of ‘Nonsensical objects I made with my neighbours’, with no indication as to the identity of the artist (or the neighbours). The admission “I was going to install a video piece here but I fucked up” is scrawled in black ink on the cupboard of an electrical circuit breaker. Located on its own floor this year, the Art Writing MFA showcased words and sounds in ways the Fine Art show could not manage alone. Behind one particularly black curtain the text “This image has nothing to do with the video that shall begin imminently” overlays a freeze-frame of old-age pensioners in a work by Liam Rogers. As the image finally ebbs away, droll, haunting bass tones punctuate a narrative milieu: two black cats lounging in digital shadow; an extreme close-up of a flea, trapped between strands of human hair; a strutting chicken and the voice of Ayn Rand: “I will not die, it’s the world that will die.” In another recess of the Laurie Grove Bath studios Noam Edry’s politically anarchic sketches and suggestive graffiti were being photographed, constantly and throughout the opening event, by two neutral-looking observers. Upon entering the room my bag was searched by a mock custodian. To one side, beside a massage therapist actively working on the spine of a fellow ‘member of the public’, an arrow on the wall labelled “Groovy Little War Mix” pointed to a monitor propped up on chunks of rubble. On its screen the letters G-O-L-D-S and M exploded in successive puffs of computer-enhanced tomfoolery. Clutching university-issue headphones to my ears I watched a performer dressed as a giant date taunt one of the MFA’s directors into dancing with her. Before I could move on to the next room (an imaginary ICA show on comedian Andy Kaufman, compiled by the Curating MA) a team of volunteers enthused me into having a Turkish coffee.
Titles and scrawlings; etchings and subtitles continued to surround me. “Remember Taj Mahal, India” Johann Arens’ video work implored: “Close your eyes.” Caught between two HD flat-screen televisions (two eyes? two halves of the brain?) Arens’ work ‘Effect Rating’ engineers a confusion between the object and its representation. In this case, the object was the human brain, slowly conveyed into the centre of a doughnut-shaped MRI machine. The film blurs ‘actual’ footage and foam mock-ups of an MRI scan into a meditation on neuroscience and the art-object. Like the corpus callosum separating my cerebral hemispheres, I longed to be scalpelled in two, each half of me finally free to rove the rest of the show unhindered. In the basement, hidden by shadow, I followed my ears to another series of video works, this time by Jill Vanepps. Horrific flesh-puppet-orifices attempted to penetrate one another with elongated, furry tendrils. Two Davids (Cronenberg and Lynch) seemed to fight for recognition in these dark works meditating on the (dis)order of female puberty. A projector restricted with layers of tape and Vaseline punched me with its flickering half-light: “Witchlike” a woman’s voice said, “of low intelligence.” I listened, “Style…” alone, “comes out of conviction…” until other bodies came to linger with me in the dirge. This was an experience I wasn’t willing to share. Before I moved on to the more official-looking Ben Pimlott building, I paused to consider the physics of Hirofumi Isoya’s sculptural works. Like computer-generated frames, suspended in real space, Isoya’s works ‘After brick slips’ and ‘Test on a mimic facade of an experimental house’ monumentalise the equal-and-opposite-reaction. Made up of a bed of smashed tiles with a wire mesh extended in a peak above it, each work isolates the physics of destruction in single, free-standing, art objects.
Being a child of the freeze-frame, of time-lapse photography and ultra-high-speed video, I had little trouble figuring the events that created these fragmented craters of tile and cement. Had I not the technical grammar I might well have seen in these works the splash of a hailstorm on the surface of a lake, or the arching curvature of a daffodil: each inverted wire trumpet spoke of wrecking-balls and flower petals just the same. Making sure the Goldsmiths brand still adorned its roof (they were CGI explosions, weren’t they?) I entered the Ben Pimlott building. Winding its concrete staircase to the 3rd, 4th, 5th and 6th floors the second labyrinth of the evening seemed to offer its secrets more readily than the first. Spaces felt more open, corners more isolated, free-standing structures more free to stand. My beer was icy cold. Jie Hye Yeom’s works were the first to grab my attention. Video pieces projected on or nearby a series of awkward objects: a red ball with a 5-foot circumference; a grey plastic sheet quivering in the projector fan; a giant brain made out of builder’s insulation foam. From inside a long metal cylinder ‘The ffond’ coughed and spluttered from its projector, heating up the surrounding air that was then blasted into my face. A synthetic voice with a strong American accent narrates the artist’s journey through the ffond, an imaginary engineering marvel connecting two distinct points on the Earth’s surface. The words “Where is here?” flash up, written in both Korean and English. A stooping old woman guides her through foreign wreckage, “Can you help me get to Korea?” In another work, ‘Solmier’, partially blinded by headgear made of baguettes, Jie Hye Yeom is guided through an African village by a giggling group of children. At the edge of the forest the artist stops, her mission accomplished. With glee the children gather around to eat her mask.
On the floor above, a cartoon tapestry welcomes me into a two-tiered space shared by Soheila Sokhanvari and Hans Diernberger. Parodying the work of Jeff Koons, a taxidermied pony rests, snug, in a sculptural figure of a beanbag, or perhaps a balloon. As I nervously turn on my heels to leave, a well-dressed woman urges her children, in hushed tones, to leave the thing’s backside alone. In the centre of Diernberger’s space a rectangular recess sweeps the floor. Within it, prefigured on a video loop, we see the head of a trampolinist directly from above. Bouncing carefully (presumably so as not to knock the camera mounted above her), she taunts us with a warm-up, the final elastic bound never arriving.

On the top floor of the Ben Pimlott building the tone of the show takes a swerve as I reach the Art Writing MFA Postgraduate Show. A text by Tone Gellein asks me to unfold it in four dimensions. Sealed in a pretty glass cabinet is a series of etchings, like some blueprint for machines from other, equally improbable worlds. ‘Catalogue for Detecting Mystery Riders’ the wall exclaims: a work by Emily Whitebread. In another darkened video room (perhaps the 20th of the day) I wait for the loop of Jennifer Jarmen’s work to repeat. A dual-screen conversation ensues between Jennifer and a voiceless friend; between a ventriloquist and his dummy. The unmistakable voice of scientist V.S. Ramachandran ponders the role of the mirror in phantom limb therapy. As one video interrupts the other I feel the severed halves of my cerebrum stitch back into place once again.

As the crowd began to trickle from the studios, the night came closing in. On Tuesday morning the deconstruction will begin. Temporary walls will be torn down. A hundred projectors will be taken back to their dusty cupboards to lie forgotten for another season. Fragile sculptures will be dismantled and lugged home, piece by piece, on the number 21 bus.
Perhaps amongst everything I’ve seen, every studio I’ve poked my head into or artist-contact-card I’ve stuffed into my wallet, a few works will make it into private galleries, or be mentioned in articles and essays like this one. In the pub someone asks me which works I think they’ll be. I shrug nonchalantly, “That’s up to the market, not you or me.” As I finish speaking a laugh erupts behind me. From my pocket, and trailing along the pub floor, comes a long reel of string. “Silly me,” I say to no-one in particular, as I begin to follow it back out of the pub, back through the grey South London streets, back to the labyrinth of the Goldsmiths’ Postgraduate Degree Show.

]]>
Fri, 26 Aug 2011 12:03:00 -0700 http://www.geiab.org/GEIAB_DEUX/index.php?lang=eng&revue=showit&rn=4&article_id=91#begin
<![CDATA[A Diatribe from the Remains of Dr. Fred McCabe]]> http://www.3quarksdaily.com/3quarksdaily/2010/07/a-diatribe-from-the-remains-of-dr-fred-mccabe.html

About a month ago, in handling the remains of one Dr. Fred McCabe, I found rich notes of contemplation on the subject of information theory. It appears that Fred could have written an entire book on the intricacies of hidden data, encoded messages and deceptive methods of transmission. Instead his notes exist in the form of a cryptic assemblage of definitions and examples, arranged into what Dr. McCabe himself labelled a series of ‘moments’.

I offer these moments alongside some of the ten thousand images Dr. McCabe amassed in a separate, but intimately linked, archive. The preface to this abridged compendium does little to prepare one for the disarray of the material, but by introducing this text with Fred's own words it is my hope that a sense of the larger project will take root in the reader’s fertile imagination.

The Moment of the Message: A Diatribe

by Dr. Fred McCabe

More than ten thousand books on mathematics and a thousand books on philosophy exist for every one upon information. This is surprising. It must mean something.

I want to give you a message. But first. I have to decide how to deliver the message.

This is that moment.

I can write it down, or perhaps memorise it – reciting it in my head like a mantra, a prayer chanted in the Palace gardens. And later, speaking in your ear, I will repeat it to you. That is, if you want to hear it.

I could send it to you, by post, or telegram. After writing it down I will transmit it to you. Broadcasting on your frequency in the hope that you will be tuned in at the right moment. Speaking your language. Encoded and encrypted, only you will understand it.

I have a message for you and I want you to receive it. But first. I have to decide what the message is.

This is that moment:

This is the moment of the message

From the earliest days of information theory it has been appreciated that information per se is not a good measure of message value. The value of a message appears to reside not in its information (its absolutely unpredictable parts) but rather in what might be called its redundancy—parts predictable only with difficulty, things the receiver could in principle have figured out without being told, but only at considerable cost in money, time, or computation. In other words, the value of a message is the amount of work plausibly done by its originator, which its receiver is saved from having to repeat.

This is the moment my water arrived at room temperature

The term enthalpy comes from the Classical Greek prefix en-, meaning "to put into", and the verb thalpein, meaning "to heat".

For a simple system, with a constant number of particles, the difference in enthalpy is the maximum amount of thermal energy derivable from a thermodynamic process in which the pressure is held constant.

This is the moment the wafer became the body of Christ

The Roman Catholic Church got itself into a bit of a mess. Positing God as the victim of the sacrifice introduced a threshold of undecidability between the human and the divine. The simultaneous presence of two natures, which also occurs in transubstantiation, when the bread and wine become the body and blood of Christ, threatens to collapse the divine into the human; the sacred into the profane. The question of whether Christ really is man and God, of whether the wafer really is bread and body, falters between metaphysics and human politics. The Pope, for all his failings, has to decide the undecidable.

This is the moment black lost the game

A ko fight is a tactical and strategic phase that can arise in the game of go.

Some kos offer very little gain for either player. Others control the fate of large portions of the board, sometimes even the whole board, and the outcome of those kos can determine the winner of the game. For this reason, finding and using ko threats well is a very important skill.

This is the moment Robinson Crusoe becomes the first English-language novel

According to Abel Chevalley, a novel is "a fiction in prose of a certain extent", defining that "extent" as over 50,000 words. Some critics distinguish the novel from the romance (which has fantastic elements), the allegory (in which characters and events have political, religious or other meanings), and the picaresque (which has a loosely connected sequence of episodes). Some critics have argued that Robinson Crusoe contains elements of all three of these other forms.

This is the moment Sarah Connor takes control

A paper clip is usually a thin wire in a looped shape that takes advantage of the elasticity and strength of the materials of its construction to compress and therefore hold together two or more pieces of paper by means of torsion and friction. Some other kinds of paper clip use a two-piece clamping system.

In fiction, a paper clip often takes the place of a key as a means of breaking and entering, or, in Sarah Connor’s case, as a means of escape.

This is the moment they found the missing piece of DNA

In northern Spain 49,000 years ago, 11 Neanderthals were murdered. Their tooth enamel shows that each of them had gone through several periods of severe starvation, a condition their assailants probably shared. Cut marks on the bones indicate the people were butchered with stone tools. About 700 feet inside the cave, a research team excavated 1,700 bones from that cannibalistic feast. Much of what is known about Neanderthal genetics comes from those 11 individuals.

This is the moment Bill Clinton lied (to himself)

A microexpression is a brief, involuntary facial expression that betrays an emotion being experienced. Microexpressions usually occur in high-stakes situations, where people have something to lose or gain. Unlike regular facial expressions, they are difficult to fake. Microexpressions express the seven universal emotions: disgust, anger, fear, sadness, happiness, surprise, and contempt. They can occur as fast as 1/25 of a second.

This is the moment I strained my iris

Idiopathic is an adjective used primarily in medicine meaning arising spontaneously or from an obscure or unknown cause. From Greek idios (one's own) and pathos (suffering), it means approximately "a disease of its own kind."

This is the moment everything changed

In ordinary conversation, everything usually refers only to the totality of things relevant to the subject matter. When there is no expressed limitation, everything may refer to the universe or the world.

Every object and entity is a part of everything, including all physical bodies and in some cases all abstract objects. Everything is generally defined as the opposite of nothing, although an alternative view considers "nothing" a part of everything.

This is the moment of another message

In information theory the value of a message is calculated by the cost it would take to repeat or replace the work the message has done.

One might argue that a message’s usefulness is a better measure of value than its replacement cost, but usefulness is an anthropocentric concept that information theorists find difficult to formalise.

]]>
Sun, 11 Jul 2010 21:25:00 -0700 http://www.3quarksdaily.com/3quarksdaily/2010/07/a-diatribe-from-the-remains-of-dr-fred-mccabe.html