MachineMachine /stream - search for consciousness
<![CDATA[Consciousness Began When the Gods Stopped Speaking: Julian Jaynes’ Famous 1970s Theory]]>

Julian Jaynes was living out of a couple of suitcases in a Princeton dorm in the early 1970s. He must have been an odd sight there among the undergraduates, some of whom knew him as a lecturer who taught psychology, holding forth in a deep baritone voice.

Sun, 26 Nov 2017 10:30:42 -0800
<![CDATA[Transmediale 2017 (events)]]>

I just came back from two jam-packed weeks at Transmediale festival, 2017. Morehshin Allahyari and I were involved in a wealth of events, mostly in relation to our #Additivism project, including: On the Far Side of the Marchlands: an exhibition at Schering Stiftung gallery, featuring work by Catherine Disney, Keeley Haftner, Brittany Ransom, Morehshin and myself.

Photos from the event are gathered here.

The 3D Additivist Cookbook european launch: held at Transmediale on Saturday 4th Feb.

Audio of the event is available here.

Singularities: a panel and discussion conceived and introduced by Morehshin and myself. Featuring Luiza Prado & Pedro Oliveira (A parede), Rasheedah Phillips, and Dorothy R. Santos.

Audio of the entire panel is available here. The introduction to the panel – written by Morehshin and myself – can be found below. Photos from the panel are here.

Alien Matter exhibition: curated by Inke Arns as part of Transmediale 2017. Featuring The 3D Additivist Cookbook and works by Joey Holder, Dov Ganchrow, and Kuang-Yi Ku.

Photos from the exhibition can be found here.


Singularities Panel, delivered at Transmediale, Sunday 5th February 2017. Introduction by Morehshin Allahyari and Daniel Rourke.

Morehshin: In 1979, the Iranian Islamic revolution resulted in the overthrow of the Pahlavi dynasty and led to the establishment of an Islamic republic. Many different organizations, parties and guerrilla groups were involved in the Iranian Revolution. Some groups were created after the fall of Pahlavi and still survive in Iran; others helped overthrow the Shah but no longer exist. Much of Iranian society was hopeful about the coming revolution. Secular and leftist politicians participated in the movement to gain power in the aftermath, believing that Khomeini would support their voice and allow multiple positions and parties to be active and involved in shaping post-revolution Iran. As my mother – a Marxist at the time – would always say: the Iranian revolution brought sudden change, death and violence in unforeseen ways. It was a point, a very fast point, of collapse and rise. The revolution spun out of control and the country was taken over by Islamists so fast that people weren’t able to react to it, to slow it, or even to understand it. The future was now in the hands of a single party with a single vision that would change the lives of generations of Iranians, including myself, in the years that followed. We were forced and expected to live in one singular reality: a mono-authoritarian singularity.

In physics, a singularity is a point in space and time of such incredible density that the very nature of reality is brought into question. Associated with elusive black holes and the alien particles that bubble out of the quantum foam at their event horizon, the term ‘singularity’ has also been co-opted by cultural theorists and techno-utopianists to describe moments of profound social, political, ontological or material transformation: the coming-into-being of new worlds that redefine their own origins.
For mathematicians and physicists, singularities are often considered ‘bad behaviour’ in the numbers and calculations. Infinite points may signal weird behaviours existing ‘in’ the physical world: things outside or beyond our ability to comprehend. Or perhaps, more interestingly, a singularity may expose the need for an entirely new physics. Some anomalies can only be made sense of by drafting a radically new model of the physical world to include them. For this panel we consider ‘bad behaviours’ in social, technological and ontological singularities: moments of profound change triggered by a combination of technological shifts, cultural mutations, or unforeseen political dramas and events. Like the physicists who comprehend singularities in the physical world, we do not know whether the singularities our panelists highlight today tell us something profound about the world itself, or force us to question the model we have of the world, or worlds.

Daniel: As well as technological or socio-political singularities, this panel will question the ever narcissistic singularities of ‘I’, ‘here’ and ‘now’ – confounding the principles of human universality upon which these suppositions are based. We propose ‘singularities’ as eccentric and elusive figures in need of collective attention. It is no coincidence that ‘singularity’ is often used as a term to indicate human finitude: self-same subjects existing at particular points in time, embedded within particular contexts, told through a singular history or single potential future. The metaphor of the transformative Singularity signals not one reality ‘to come’, nor even two realities – one moved from and one towards – but many, all dependent on who the subject of the singularity is and how much autonomy they are ascribed.
The ‘Technological’ Singularity is a myth of the ‘transhumanists’, a group of mainly Western, commonly white, male enthusiasts who subscribe to the collective belief that technology will help them become ‘more than human’… ‘possessed of drastically augmented intellects, memories, and physical powers.’ As technological change accelerates, according to prominent Transhumanist Ray Kurzweil, so it pulls us upwards in its wake. Kurzweil argues that as the curve of change reaches an infinite gradient, reality itself will be brought into question: like a Black Hole in space-time, subjects travelling toward this spike will find it impossible to turn around, to escape its pull. A transformed post-human reality awaits us on the other side of the Technological Singularity: a reality Kurzweil and his ilk believe ‘we’ will inevitably pass into in the coming decades.

In a 2007 paper entitled ‘Droppin’ Science Fiction’, Darryl A. Smith explores the metaphor of the singularity through Afro-American and Afrofuturist science fiction. He notes that the metaphor of runaway change positions those subject to it in the place of Sisyphus, the figure of Greek myth condemned to push a stone up a hill forever. For Sisyphus to progress he has to fight gravity as it conspires with the stone to pull him back to the bottom of the slope. The singularity in much science fiction by Black and Afro-American authors focusses on this potential fall, rather than the ascent:

“Here, in the geometrics of spacetime, the Spike lies not at the highest point on an infinite curve but at the lowest… Far from being the shift into a posthumanity, the Negative Spike is understood… as an infinite collapsing and, thus, negation of reality. Escape from such a region thus requires an opposing infinite movement.”

The image of a collective ‘push’ of the stone of progress up the slope necessarily posits a universal human subject, resisting the pull of gravity back down the slope: a universal human subject who passes victorious to the other side of the event horizon. But as history has shown us, technological, social and political singularities – arriving with little warning – often split the world into those inside and those outside their event horizons. Singularities like the 1979 Iranian revolution left many more on the outside of the Negative Spike than the inside. Singularities like the Industrial Revolution, retrospectively told in the West as a tale of imperial and technological triumph rather than as a story of those who were violently abducted from their homelands and made to toil and die in fields of cotton and sugarcane. The acceleration toward and away from that singularity brought about a Negative Spike so dense that many millions of people alive today still find their identities subject to its social and ontological mass.

In their recent definition of the Anthropocene, the International Commission on Stratigraphy placed the Golden Spike after World War II as the official signal of the human-centric geological epoch: a series of converging events marked in the geological record around the same time, including the detonation of the first nuclear warhead; the proliferation of synthetic plastic from crude oil constituents; and the introduction of large-scale, industrialised farming practices, noted by the appearance of trillions of discarded chicken bones in the geological record. Will the early 21st century be remembered for the 9/11 terrorist attacks? The introduction of the iPhone, and Twitter? Or for the presidency of Donald J Trump? Or will each of these extraordinary events be considered as part of a single, larger shift in global power and techno-mediated autonomy?
If ‘we’ are to rebuild ourselves through stronger unities and collective actions in the wake of recent political upheavals, will ‘we’ also forego the need to recognise the different subjectivities and distinct realities that bubble out of each singularity’s wake? As the iPhone event sent shockwaves through the socio-technical cultures of the West, so the rare earth minerals required to power those iPhones were pushed skywards in value, forcing more bodies into pits in the ground to mine them. As we gather at Transmediale to consider AI, infrastructural, data, robotic, or cyborgian revolutions, what truly remains ‘elusive’ is a definition of ‘the human’ that does justice to the complex array of subjectivities destined to be impacted – and even crafted anew – by each of these advances. In his recent text on the 2011 Fukushima Daiichi nuclear disaster, Jean-Luc Nancy proposes instilling “the condition of an ever-renewed present” into the urgent design and creation of new, mobile futures. In this proposition Nancy recognises that each singularity is equal to all others in its finitude; an equivalence he defines as “the essence of community.” To contend with the idea of singularities – plural – of ruptures as such, we must share together that which will forever remain unimaginable alone.

Morehshin: This appeal to a plurality of singularities is easily mistaken for the kinds of large-scale collective action we have seen in recent years around the world: from the Arab Spring and the Occupy movement through to the recent Women’s March, which took place not 24 hours after the inauguration of Donald Trump. These events in particular spoke of a universal drive, a collective of peoples united against a single cause. Much has been written about the ‘human microphone’ technique utilized by Occupy protesters to amplify the voice of a speaker when megaphones and loudspeakers were banned or unavailable.
We wonder whether, rather than speak as a single voice, we should seek to emphasise the different singularities enabled by different voices, different minds; distinct votes and protestations. We wonder whether black and brown protestors gathered in similar numbers, with similar appeals to their collective unity and identity, would have been portrayed very differently by the media. Whether the radical white women, and the wider population, that united for the march would also show up to the next Black Lives Matter or Muslim-ban protests. These are not just academic questions but a personal concern… what is collectivism, and for whom does the collective function? When we talk about futures and worlds and singularities, whose realities are we talking about? Who is going to go to Mars with Elon Musk? And who will be left? As we put this panel together in the last weeks, our Manifesto’s apocalyptic vision of a world accelerated to breaking point by technological progress began to seem strangely comforting compared to the delirious political landscape we saw emerging before us. Whether you believe political malaise, media delirium, or the inevitable implosion of the neo-liberal project is to blame for the rise of figures like Farage, Trump or – in the Philippines – the outspoken President Rodrigo Duterte, the promises these figures make of an absolute shift in the conditions of power appear grand precisely because they choose to demonize the discrete differences of minority groups, or attempt to overturn truths that might fragment and disturb their all-encompassing narratives.

Daniel: The appeal to inclusivity – in virtue of a shared political identity – often instates those of ‘normal’ body, race, sex, or genome as exclusive harbingers of the-change-which-should – or so we are told, will – come. A process that theorist Rosi Braidotti refers to as a ‘dialectics of otherness’, which subtly disguises difference in celebration of a collective voice of will or governance.
Morehshin: Last week, on January 27, as part of a plan to keep “Islamic terrorists” out of the United States, Trump signed an order that suspended entry for citizens of seven countries for 90 days. This includes Iran, the country I am a citizen of. I have lived in the U.S. for 9 years and hold a green card, which was included in Trump’s ban and is now being reviewed case by case for each person who enters the U.S. When the news came out, I was already in Berlin for Transmediale and wasn’t sure whether I had a home to go back to. Although the chaos of Trump’s announcement has now settled, and my own status as a resident of America appears a bit more clear for now, the ripples of emotion and uncertainty from last week have coloured my experience at this festival. As I have sat through panels and talks in the last 3 days, and as I stand here introducing this panel about elusive events, potential futures and the in-betweenness of all profound technological singularities… the realities that feel most significant to me are yet to take place in the lives of so many Middle Easterners and Muslims affected by Trump’s ban. How does one imagine/re-imagine/figure/re-figure the future when there are still so many ‘presents’ existing in conflict? I grew up in Iran for 23 years, where science fiction didn’t really exist as a genre in popular culture. I always think we were discouraged to imagine the future other than how it was ‘imagined’ for us. Science fiction as a genre flourishes in the West… But I still struggle with the kinds of futures we seem most comfortable imagining.

We now want to hand over to our fantastic panelists, to highlight their voices, and build harmonies and dissonances with our own. We are extremely honoured to introduce them: Dorothy Santos is a Filipina-American writer, editor, curator, and educator. She has written and spoken on a wide variety of subjects, including art, activism, artificial intelligence, and biotechnology.
She is managing editor of Hyphen Magazine, and a Yerba Buena Center for the Arts fellow, where she is researching the concept of citizenship. Her talk today is entitled Machines and Materiality: Speculations of Future Biology and the Human Body. Luiza Prado and Pedro Oliveira are Brazilian design researchers who very recently wrapped up their PhDs at the University of the Arts Berlin. Under the ‘A Parede’ alias, the duo researches new design methodologies, processes, and pedagogies for an onto-epistemological decolonization of the field. In their joint talk and performance, Luiza and Pedro will explore the tensions around hyperdense gravitational pulls and acts of resistance, with particular focus on the so-called “non-lethal” bombs – teargas and stun grenades – manufactured in Brazil, and exported and deployed all around the world. Rasheedah Phillips is creative director of the Afrofuturist Affair: a community formed to celebrate, strengthen, and promote Afrofuturistic and sci-fi concepts and culture. In her work with ‘Black Quantum Futurism’, Rasheedah derives facets, tenets, and qualities from quantum physics, futurist traditions, and Black/African cultural traditions to celebrate the ability of African-descended people to see “into,” choose, or create the impending future. In her talk today, Rasheedah will explore the history of linear time constructs, notions of the future, and alternative theories of temporal-spatial consciousness.

Thu, 09 Feb 2017 08:50:26 -0800
<![CDATA[Towards a statistical mechanics of consciousness: maximization of number of connections is associated with conscious awareness]]>

Authors: R. Guevara Erra, D. M. Mateos, R. Wennberg, J. L. Perez Velazquez

Abstract: It has been said that complexity lies between order and disorder. In the case of brain activity, and physiology in general, complexity issues are being considered with increased emphasis.

Sun, 23 Oct 2016 04:56:19 -0700
<![CDATA[Even Transhumanist Elites Are Worried Only the Rich Will Be Able to Hack Death | Motherboard]]>

The story of Z was supposed to be about how biohacking had allowed her to become immortal. She lived in the year 2040, and by most measures her life was happy. Her mother’s body had died five years prior, but her consciousness was uploaded to the global grid and they still spoke frequently.

Sun, 31 Jan 2016 09:14:44 -0800
<![CDATA[The Moral Imperative of Human Spaceflight | Grand Strategy: The View from Oregon]]>

The text below is a paper I wrote to correspond to my presentation at the first 100YSS Symposium in 2011.

0. Preamble
1. The case against civilization
2. The evils of civilization
3. The future of civilization
4. The future of morality
5. The place of consciousness in axiology
6.

Fri, 21 Aug 2015 12:36:00 -0700
<![CDATA[Consciousness Began When the Gods Stopped Speaking - Issue 24: Error - Nautilus]]>

Julian Jaynes was living out of a couple of suitcases in a Princeton dorm in the early 1970s. He must have been an odd sight there among the undergraduates, some of whom knew him as a lecturer who taught psychology, holding forth in a deep baritone voice.

Sun, 31 May 2015 05:38:43 -0700
<![CDATA[Ritual and the Consciousness Monoculture]]>

Sarah Perry is a guest blogger who blogs at Carcinisation and is the author of Every Cradle is a Grave: Rethinking the Ethics of Birth and Suicide.

Sat, 24 Jan 2015 06:50:45 -0800
<![CDATA[p-zombies are inconceivable. With notes on the idea of metaphysical possibility. « Scientia Salon]]>

Philosophy of mind and the nature of consciousness are fascinating topics, which recur both here at Scientia Salon [1] and at my former writing outlet, Rationally Speaking [2].

Sun, 10 Aug 2014 04:15:46 -0700
<![CDATA[The first conscious machines will probably be on Wall Street | The Mitrailleuse]]>

We must consider the possibility that intelligence, creativity and even consciousness are purely functions of the material world, with human beings as a peculiar kind of computer.

Mon, 14 Jul 2014 01:10:26 -0700
<![CDATA[Mathematical Model Suggests That Human Consciousness Is Noncomputable - Slashdot]]>

KentuckyFC (1144503) writes "One of the most profound advances in science in recent years is the way researchers from a variety of fields are beginning to formulate the problem of consciousness in mathematical terms, in particular using information theory.

Wed, 21 May 2014 13:29:39 -0700
<![CDATA[Entering Posthumanism: Ihab Hassan and Neil Badmington | Simulation Space]]>

In “Prometheus as Performer: Toward a Posthumanist Culture?” Ihab Hassan uses the metaphor of the mythological Prometheus to frame his discussion on posthumanism and positions him as a trickster with a double nature that he wishes to reconcile. Hassan’s overarching argument about posthumanism is that it must be viewed as the representation of the convergence of two opposing aspects of our reality. These opposing aspects are not singularly defined, but have to do with the mind’s struggle to grasp the overlap of imagination and science, or myth and technology. Both Hassan and Neil Badmington (“Introduction: Approaching Posthumanism”) talk about how posthumanism is viewed as a “dubious neologism” that implies a sense of Man’s self-hate. Yet, both also insist that humanism is coming to its inevitable end, and that we must accept the transformation for what it is – the beginning of Man’s end, and transformation into the posthuman subject.

As one of the first theorists to discuss the emergence of posthumanism, Hassan begins by letting his readers know that he will not be focusing on postmodernism, but rather on the necessity of accepting that the human form is changing and in need of re-examination. He insists that there is nothing mystical or supernatural in the process leading us to a posthumanist culture, but that it is a “sudden mutation of the times” (Hassan, 834) where the conjunction of imagination and science, as well as myth and technology, has already begun. This process is able to move forward only once the human mind can begin to understand and accept the dematerialization of life and existence.

Here, he is not speaking of the literal end of Mankind, even though he evokes the writings of Levi-Strauss in A World on the Wane, who stated: “The world began without the human race and it will end without it.” He also cites Foucault, who in The Order of Things wrote: “Man is neither the oldest nor the most constant problem that has been posed for human knowledge [...] man is an invention of recent date. And one perhaps nearing its end” (Hassan, 843). Again, Hassan is convinced that this does not mean the literal end of man but the end of an image of man shaped by Descartes, Thomas More and Erasmus. He is talking about contemporary structuralist thought and how it emphasizes the dissolution of the “subject” and the destruction of the Cartesian ego, which has turned the world into an “object” that Man has mastered. On the contrary, the self, for structuralists and post-structuralists, is an empty place.

This is a predecessor of sorts for Badmington’s argument that over the course of the centuries, Man’s self-love has suffered, according to Freud, “two major blows at the hands of science. The worst was when they learnt that our earth was not the centre of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness” (Badmington, 6). Here, Badmington insists that “to read Freud is to witness the waning of humanism,” because “Man loses his place at the center of things” (Badmington, 5). Lacan, who for Badmington is the central anti-humanist, found himself, along with Althusser and Foucault, issuing “a warrant for the death of Man” (Badmington, 6).

Returning to Hassan, he argues that the death of Man is both the death of Humanism as well as the rise of the machine. To comment on the former, he insists that, shaped by contemporary Western thought, Humanists have always insisted on dividing the mind into reason and feelings. Using examples such as experimental science and the incorporation of technology into the arts, Hassan argues for an undeniable convergence that has already begun, and the “unified consciousness” that Man must strive towards if he wants to evolve into the transformative homo sapiens. Hassan cites Elizabeth Mann Borghese who argues: “Human nature is still evolving. The postmodern man may not be the same homo sapien. Posthuman philosophy must now address artificial intelligence, which is no mere figment of science fiction – it is alive in our midst” (Hassan, 846). The human brain, faced with its own “chilling obsolescence,” does not know when or how it will become obsolete, but it must revise its self-conception.

Citing Arthur Koestler, Hassan discusses the possibility of the human brain as a mistake in evolution, asking: “Will AI supersede the brain, rectify it, or extend it?” While he does not provide an answer, he does say that AI will help to transform the image of man as well as his conception, as an “agent of the new posthumanism.” Hassan reminds us that visions of AI are not science fiction meant to shock us, as they are immediate and relevant thoughts. Technology is apparently no longer empowered by human reality (Heidegger, 1966), and no longer responds to the human measure. Hassan wonders whether Man is too daring in his pursuit of technological extension, and whether “transhumanization” will lead to the literal end of Man.

Badmington also talks about the crisis that Man has put himself in through his involvement with technology, citing several Hollywood science fiction films that popularize the rise of machines as well as the transformation into the cyborg. Badmington insists that this idea addresses the crisis of Humanism by presenting us with the end of Man as we know him. He repeatedly cites the work of Derrida in the hopes of reiterating the necessity of rethinking the anti-humanist position. This article concludes with the insistence that Humanism never manages to constitute itself; it forever rewrites itself as posthumanism. This movement is always happening, and humanism cannot escape its inevitable transition.

Wed, 11 Dec 2013 15:42:51 -0800
<![CDATA[Four Notes Towards Post-Digital Propaganda | post-digital-research]]>

“Propaganda is called upon to solve problems created by technology, to play on maladjustments and to integrate the individual into a technological world” (Ellul xvii).

How might future research into digital culture approach a purported “post-digital” age? How might this be understood?


A problem comes from the discourse of ‘the digital’ itself: a moniker which points towards units of Base-2 arbitrary configuration, impersonal architectures of code, massive extensions of modern communication and ruptures in post-modern identity. Terms are messy, and it has never been easy to establish a ‘post’ from something, when pre-discourse definitions continue to hang in the air. As Florian Cramer has articulated so well, ‘post-digital’ is something of a loose, ‘hedge your bets’ term, denoting a general tendency to criticise the digital revolution as a modern innovation (Cramer).

Perhaps it might be aligned with what some have dubbed “solutionism” (Morozov) or “computationalism” (Berry 129; Golumbia 8): the former critiquing a Silicon Valley-led ideology oriented towards solving liberalised problems through efficient computerised means; the latter establishing the notion (and critique thereof) that the mind, and everything associated with it, is inherently computable. In both cases, digital technology is no longer just a business that privatises information, but the business of extending efficient, innovative logic to all corners of society and human knowledge, condemning everything else through a cultural logic of efficiency.

In fact, there is a good reason why ‘digital’ might as well be a synonym for ‘efficiency’. Before any consideration is assigned to digital media objects (i.e. platforms, operating systems, networks), consider the inception of ‘the digital’ as such: that is, information theory. Where information had been a loose, shabby, inefficient method of vagueness specific to various mediums of communication, Claude Shannon compressed all forms of communication into a universal system with absolute mathematical precision (Shannon). Once information became digital, the conceptual leap of determined symbolic logic was set into motion, and with it, the ‘digital’ became synonymous with an ideology of effectivity. No longer would miscommunication be subject to human finitude, nor to matters of distance and time, but only to the limits of entropy and the matter of automating messages through the support of alternating ‘true’ or ‘false’ relay systems.
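Shannon’s universal system can be made concrete with his entropy formula, H = −Σ pᵢ log₂ pᵢ, which assigns any source of messages a precise information content in bits, regardless of medium. A minimal sketch (my own illustration of the standard formula, not from the text):

```python
from math import log2

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)).
    Zero-probability symbols contribute nothing and are skipped."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair coin carries exactly one bit per toss; a biased source
# carries less, which is precisely why it can be compressed.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([0.9, 0.1]))  # ~0.469
```

The same number applies whether the symbols travel by telegraph, radio or relay circuit: this medium-independence is the compression of “all forms of communication into a universal system” described above.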

However, it would be quite difficult to envisage any ‘post-computational’ break from such discourses – and with good reason: Shannon’s breakthrough was only systematically effective through the logic of computation. So the old missed encounter goes: Shannon presupposed Alan Turing’s mathematical idea of computation to transmit digital information, and Turing presupposed Shannon’s information theory to understand what his Universal Turing Machines were actually transmitting. The basic theories of both have not changed, but the materials affording greater processing power, extensive server infrastructure and larger storage space have simply increased the means for these ideas to proliferate, irrespective of what Turing and Shannon actually thought of them (some historians even speculate that Turing may have made the link between information and entropy two years before Bell Labs did) (Good).

Thus a ‘post-digital’ reference point might encompass the historical acknowledgment of Shannon’s digital efficiency and Turing’s logic, but by the same measure open up a space for critical reflection on how such efficiencies have transformed not only work, life and culture but also artistic praxis and aesthetics. This is not to say that digital culture is reducibly predicated on efforts made in computer science, but instead fully acknowledges these structures and accounts for how ideologies propagate reactionary attitudes and beliefs within them, whilst restricting other alternatives which do not fit their ‘vision’. Hence, the post-digital ‘task’ set for us nowadays might consist in critiquing digital efficiency and how it has come to work against commonality, despite transforming the majority of Western infrastructure in its wake.

The purpose of these notes is to outline how computation has imparted an unwarranted effect of totalised efficiency, and to label this effect the type of description it deserves: propaganda. The fact that Shannon and Turing had multiple lunches together at Bell Labs in 1943, held conversations and exchanged ideas, but not detailed methods of cryptanalysis (Price & Shannon), provides a nice contextual allegory for how digital informatics strategies fail to be transparent.

But in saying this, I do not mean that companies only use digital networks for propagative means (although that happens), but that the very means of computing a real concrete function is constitutively propagative. In this sense, propaganda resembles a post-digital understanding of what it means to be integrated into an ecology of efficiency, and how technical artefacts are literally enacted as propagative decisions. Digital information often deceives us into accepting its transparency, and of holding it to that account: yet in reality it does the complete opposite, with no given range of judgements available to detect manipulation from education, or persuasion from smear. It is the procedural act of interacting with someone else’s automated conceptual principles, embedding pre-determined decisions which not only generate but pre-determine one’s ability to make choices about such decisions, like propaganda.

This might consist in moving from ideological definitions of false consciousness as an epistemological limit to knowing alternatives within thought, to engaging with real programmable systems which embed such limits concretely, withholding the means to transform them. In other words, propaganda incorporates how ‘decisional structures’ structure other decisions, either conceptually or systematically.


Two years before Shannon’s famous Masters thesis, Turing published what would be a theoretical basis for computation in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” The focus of the paper was to establish the idea of computation within a formal system of logic which, when automated, would solve particular mathematical problems put into function (Turing, An Application). What is not necessarily taken into account is the mathematical context of that idea: the foundations of mathematics were already precarious well before Turing outlined anything in 1936. Contra the efficiency of the digital, this is a precariousness built into computation from its very inception: the precariousness of solving all problems in mathematics.

The key word of that paper, its key focus, was the Entscheidungsproblem, or decision problem. Originating from David Hilbert's mathematical school of formalism, 'decision' means something more rigorous than the sorts of decisions made in daily life. It really means a 'proof theory', or how analytic problems in number theory and geometry could be formalised, and thus efficiently solved (Hilbert 3). Solving a theorem is simply finding a provable 'winning position' in a game. As with Shannon, 'decision' is what happens when an automated system of function is constructed in such a sufficiently complex way that an algorithm can always 'decide' a binary, yes or no answer to a mathematical problem, given an arbitrary input, in a finite amount of time. It does not require ingenuity, intuition or heuristic gambles, just a combination of simple consistent formal rules and a careful avoidance of contradiction.
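Hilbert's sense of 'decision' can be pictured with a toy example (an illustration only, nothing taken from Hilbert's programme): an effective procedure that, for any arbitrary string of brackets, always halts with a binary yes/no verdict on whether the string is well-formed, using nothing but simple consistent formal rules.

```python
def decides_balanced(symbols):
    """Decide, for any input string, whether its brackets are balanced.

    A decision procedure in the formalist sense: it always halts,
    and always returns a binary yes/no verdict.
    """
    depth = 0
    for ch in symbols:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:      # a closing bracket with nothing to close
                return False
    return depth == 0          # balanced only if every bracket was closed
```

Whatever string is fed in, the verdict arrives in finitely many steps: exactly the 'always decide' guarantee that Hilbert hoped would extend to the whole of mathematics.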

The two key words there are 'always' and 'decide'. This was the progressive end-game of twentieth-century mathematicians who, like Hilbert, sought a simple totalising conceptual system to decide every mathematical problem and work towards absolute knowledge. All Turing had to do was make explicit Hilbert's implicit computational treatment of formal rules, manipulate symbol strings and automate them using an 'effective' or "systematic method" (Turing, Solvable and Unsolvable Problems 584) encoded into a machine. This is what Turing's thesis meant (discovered independently of Alonzo Church's equivalent thesis (Church)): any systematic algorithm given by a mathematical theorem can be computed by a Turing machine (Turing, An Application), or in Robin Gandy's words, "[e]very effectively calculable function is a computable function" (Gandy).

Thus effective procedures decide problems, and they resolve puzzles, providing winning positions (like theorems) in the game of functional rules and formal symbols. In Turing's words, "a systematic procedure is just a puzzle in which there is never more than one possible move in any of the positions which arise and in which some significance is attached to the final result" (Turing, Solvable and Unsolvable Problems 590). The significance, or the winning position, becomes the crux of the matter for the decision: what puzzles or problems are to be decided? This is what formalism attempted to do: encode everything through the regime of formalised efficiency, so that all mathematically inefficient problems are, in principle, ready to be solved. Programs are simply proofs: if it could be demonstrated mathematically, it could be automated.

In 1936, Turing had shown that certain complex effective procedures (such as the Universal Turing Machine) could simulate the functional decisions of all other effective procedures. Ten years later, Turing and John von Neumann would independently show how physical general-purpose computers offered the same thing, and from that moment on efficient digital decisions manifested themselves in the cultural application of physical materials. Before Shannon's information theory offered the precision of transmitting information, Hilbert and Turing developed the structure of its transmission in the underlying regime of formal decision.
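The machine model itself can be sketched in a few lines (a toy sketch under obvious simplifications, not the Universal Turing Machine): a transition table 'decides' every step, and a result only appears once the machine reaches its halting state.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine until it enters the 'halt' state.

    rules maps (state, symbol) -> (next_state, symbol_to_write, head_move),
    with head_move being -1 (left), +1 (right) or 0 (stay).
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells)).strip(blank)
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    raise RuntimeError("step budget exhausted without halting")

# A trivial machine: flip every bit, halting at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
```

Here `run_tm(flip, "0110")` returns `"1001"`; a universal machine is the same construction with the rules themselves written onto the tape as data.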

Yet there was also a non-computational importance here, for Turing was equally fascinated by what decisions couldn't compute. His thesis was precise enough to elucidate that if a mathematical problem could not be proved, a computer was not of any use. In fact, the entire focus of his 1936 paper, often neglected by Silicon Valley cohorts, was to show that Hilbert's particular decision problem could not be solved. Unlike Hilbert, Turing was not interested in using computation to solve every problem, but in it as a curious endeavour with surprising intuitive behaviour. Most important of all, Turing's halting, or printing, problem was influential precisely because it was undecidable: a decision problem which couldn't be decided.

We can all picture the halting problem, even obliquely. Picture the frustrated programmer or mathematician staring at their screen, waiting to know whether an algorithm will halt and spit out a result, or provide no answer. The computer itself has already determined the answer for us; the programmer just has to know when to give up. But this is a myth, inherited with a bias towards human knowledge, and a demented understanding of machines as infinite calculating engines, rather than concrete entities of decision. For reasons that escape word space, Turing didn't understand the halting problem in this way: instead he understood it as a contradictory example of computational decisions failing to decide on each other, on the account that there could never be one totalising decision or effective procedure. There is no guaranteed effective procedure to decide on all the others, and any attempt to build one (or invest in a view which might help build one) either has too much investment in absolute formal reason, or ends up with ineffective procedures.
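The myth can be made concrete in code (illustrative names, with 'programs' modelled as Python generators). A step budget only yields a semi-decision: it can report that a program halts, but 'unknown' is the best it can ever say about one that doesn't, and no budget converts 'unknown' into 'never halts'. A guaranteed procedure that would make that conversion is exactly what Turing proved cannot exist.

```python
def run_bounded(make_program, max_steps):
    """Run a program for at most max_steps steps.

    Returns "halts" if the program finished, "unknown" otherwise:
    a semi-decision, not a decision, because "unknown" can never
    be upgraded to "never halts".
    """
    program = make_program()
    for _ in range(max_steps):
        try:
            next(program)
        except StopIteration:  # the program finished of its own accord
            return "halts"
    return "unknown"

def terminating():
    for _ in range(10):  # finishes after ten steps
        yield

def looping():
    while True:          # never finishes
        yield
```

However large the budget, `run_bounded(looping, n)` only ever answers "unknown": giving up is the programmer's decision, not the machine's.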

Undecidable computation might be looked at as a dystopian counterpart to the efficiency of Shannon's 'digital information' theory: a base-2 binary system of information resembling one of two possible states, whereby a system can communicate with one digit only in virtue of the fact that there is one other digit alternative to it. Yet the perfect transmission of that information is only available to a system which can 'decide' on the digits in question, and establish a proof to calculate a success rate. If there is no mathematical proof to decide a problem, then transmitting information becomes problematic for establishing a solution.
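Shannon's measure makes the 'one digit in virtue of another' point precise: a binary digit carries at most one bit, and exactly one bit only when the two states are equally likely. A minimal sketch of the binary entropy function:

```python
import math

def binary_entropy(p):
    """Bits per symbol of a binary source emitting one state with probability p."""
    if p in (0.0, 1.0):        # a certain outcome carries no information
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
```

`binary_entropy(0.5)` gives the full 1.0 bit; a biased source carries less. It is this calculability, resting on decidable proofs, that makes success rates of transmission computable at all.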


What has become clear is that our world is no longer accountable to human decision alone. Decisions are no longer limited to the borders of the human, and 'culture' is no longer simply guided by a collective whole of social human decisions. Nor is it reducible to one harmonious 'natural' collective decision which prompts and pre-empts everything else. Instead we seem to exist in an ecology of decisions: or better yet, decisional ecologies. Before there was ever the networked protocol (Galloway), there was the computational decision. Decision ecologies are already set up before we enter the world, implicitly coterminous with our lives: explicitly determining a quantified or bureaucratic landscape upon which an individual has limited manoeuvrability.

Decisions are not just digital, they are continuous, as computers can be: yet decisions are at their most efficient when digitally transferred. Decisions are everywhere and in everything. Look around. We are constantly told by governments and states that they are making tough decisions in the face of austerity. CEOs and Directors make tough decisions for the future of their companies, and 'great' leaders are revered for being 'great decisive leaders': not just making decisions quickly and effectively, but also settling issues and producing definite results.

Even the word 'decide' comes from the Latin 'decidere', which means to determine something and 'to cut off.' Algorithms in financial trading know not of value, but of decision: whether something is marked by profit or loss. Drones know not of human ambiguity, but can only decide between kill and ignore, cutting off anything in-between. Constructing a system which decides between one of two digital values, even repeatedly, means cutting off and excluding all other possible variables, leaving a final result at the end of the encoded message. Making a decision, or building a system to decide a particular ideal or judgement, must force other alternatives outside of it. Decisions are always-already embedded into the framework of digital action, always already deciding what is to be done, how it can be done or what is threatening to be done. It would make little sense to suggest that these entities 'make decisions' or 'have decisions'; it would be better to say that they are decisions, and that ecologies are constitutively constructed by them.

The importance of neo-liberal digital transmissions is not that they are innovative, or worthy of a zeitgeist break: but that they demonstrably decide problems whose predominant significance is beneficial for self-individual efficiency and the accumulation of capital. Digital efficiency is simply about the expansion of automated decision, and about what sort of formalised significances must be propagated to solve social and economic problems, which in turn creates new problems in a vicious circle.

The question can no longer simply be 'who decides', but now, 'what decides?' Is it the cafe menu board, the dinner party etiquette, the NASDAQ share price, Google PageRank, railway network delays, unmanned combat drones, the newspaper crossword, the JavaScript regular expression or the differential calculus? It's not quite right to say that algorithms rule the world, whether in algo-trading or in data capture; rather, there is the uncomfortable realisation that real entities are built to determine provable outcomes time and time again: most notably outcomes for accumulating profit and extracting revenue from multiple resources.

One pertinent example: George Dantzig's simplex algorithm. This effective procedure (whose origins lie in multidimensional geometry) can always decide solutions for the large-scale optimisation problems which continually affect multinational corporations. The simplex algorithm's proliferation and effectiveness have been critical since its first commercial application in 1952, when Abraham Charnes and William Cooper used it to decide how best to optimally blend four different petroleum products at the Gulf Oil Company (Elwes 35; Gass & Assad 79). Since then the simplex algorithm has had years of successful commercial use, deciding almost everything from bus timetables and work shift patterns to trade shares and Amazon warehouse configurations. According to the optimisation specialist Jacek Gondzio, the simplex algorithm runs at "tens, probably hundreds of thousands of calls every minute" (35), always deciding the most efficient method of extracting optimisation.
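The kind of problem simplex always decides can be illustrated without implementing the algorithm itself. The sketch below (hypothetical numbers, relying on the standard textbook fact that a linear programme's optimum sits at a vertex of its feasible region) brute-forces a tiny two-variable problem; simplex reaches the same vertex by walking from corner to corner instead of checking them all, which is why it scales to the industrial cases Gondzio describes.

```python
from itertools import combinations

# Maximise 3x + 2y subject to: x + y <= 4, x <= 2, x >= 0, y >= 0.
# Each constraint is written as a*x + b*y <= c.
constraints = [
    (1, 1, 4),    # x + y <= 4
    (1, 0, 2),    # x <= 2
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def intersect(c1, c2):
    """Corner point where the boundary lines of two constraints cross."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:               # parallel boundaries: no corner
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(point):
    x, y = point
    return all(a * x + b * y <= c + 1e-9 for a, b, c in constraints)

def objective(point):
    return 3 * point[0] + 2 * point[1]

# Enumerate every corner of the feasible region and keep the best.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=objective)    # (2.0, 2.0), objective value 10.0
```

Brute force is fine for four constraints; with the thousands of variables in a petroleum-blending or timetabling problem, the number of corners explodes, and an edge-walking procedure like simplex is what makes the decision tractable.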

In contemporary times, nearly all decision ecologies work in this way, accompanying and facilitating neo-liberal methods of self-regulation and processing all resources through a standardised efficiency: from bureaucratic methods of formal standardisation, banal forms ready to be analysed by one central system, to big-data initiatives and simple procedural methods of measurement and calculation. The technique of decision is a propagative method of embedding knowledge, optimisation and standardisation techniques in order to solve problems, together with an urge to solve the most unsolvable ones: including us.

Google do not build into their services an option to pay for the privilege of protecting privacy: the entire point of providing a free service which purports to improve daily life is that it primarily benefits the interests of shareholders and extends commercial agendas. James Grimmelmann has given a heavily detailed exposition of Google's own 'net neutrality' algorithms and how biased they happen to be. In short, PageRank does not simply decide relevant results, it decides visitor numbers. He concluded on this note:

With disturbing frequency, though, websites are not users’ friends. Sometimes they are, but often, the websites want visitors, and will be willing to do what it takes to grab them (Grimmelmann 458).

If the post-digital stands for the self-criticality of digitalisation already underpinning contemporary regimes of digital consumption and production, then its saliency lies in understanding the logic of decision inherent to such regimes. The reality of the post-digital shows that machines remain curiously efficient whether we relish in cynicism or not. Such regimes of standardisation and determined results were already 'mistakenly built in' to the theories which developed digital methods and means, irrespective of what computers can or cannot compute.


Why then should such post-digital actors be understood as instantiations of propaganda? The familiarity of propaganda is manifestly evident in religious and political acts of ideological persuasion: brainwashing, war activity, political spin, mind control techniques, subliminal messages, political campaigns, cartoons, belief indoctrination, media bias, advertising or news reports. A definition of propaganda might follow from all of these examples: namely, the systematic social indoctrination of biased information that persuades the masses to take action on something which is neither beneficial to them, nor in their best interests. Or, as Peter Kenez writes, propaganda is "the attempt to transmit social and political values in the hope of affecting people's thinking, emotions, and thereby behaviour" (Kenez 4). Following Stanley B. Cunningham's watered-down definition, propaganda might also denote a helpful and pragmatic "shorthand statement about the quality of information transmitted and received in the twentieth century" (Cunningham 3).

But propaganda isn’t as clear as this general definition makes out: in fact what makes propaganda studies such a provoking topic is that nearly every scholar agrees that no stable definition exists. Propaganda moves beyond simple ‘manipulation’ and ‘lies’ or derogatory, jingoistic representation of an unsubtle mood – propaganda is as much about the paradox of constructing truth, and the irrational spread of emotional pleas, as well as endorsing rational reason. As the master propagandist William J. Daugherty wrote;

It is a complete delusion to think of the brilliant propagandist as being a professional liar. The brilliant propagandist […] tells the truth, or that selection of the truth which is requisite for his purpose, and tells it in such a way that the recipient does not think that he is receiving any propaganda…. (Daugherty 39).

Propaganda, like ideology, works by being inherently implicit and social. In the same way that post-ideology apologists ignore their symptom, propaganda is also ignored. It isn't to be taken as a shadowy fringe activity, blown apart by the democratising fairy-dust of 'the Internet'. As many others have noted, the purported 'decentralising' power of online networks offers new methods for propagative techniques, or 'spinternet' strategies, evident in China (Brady). Iran's recent investment in video game technology only makes sense when you discover that 70% of Iran's population are under 30 years of age, underscoring a suitable contemporary method of dissemination. Similarly, in 2011 the New York City video game developer Kuma Games was mired in controversy when it was discovered that an alleged CIA agent, Amir Mirza Hekmati, had been recruited to make an episodic video game series intending to "change the public opinion's mindset in the Middle East" (Tehran Times). The game in question, Kuma\War (2006 – 2011), was a free-to-play First-Person Shooter series, delivered in episodic chunks, the format of which attempted to simulate biased re-enactments of real-life conflicts shortly after they reached public consciousness.

Despite his unremarkable leanings towards Christian realism, Jacques Ellul famously updated propaganda's definition as the end product of what he had previously lamented as 'technique'. Instead of viewing propaganda as a highly organised systematic strategy for extending the ideologies of peaceful warfare, he understood it as a general social phenomenon in contemporary society.

Ellul outlined two types, political and sociological propaganda. Political propaganda involves governmental and administrative techniques which intend to directly change the political beliefs of an intended audience. By contrast, sociological propaganda is the implicit unification of involuntary public behaviour which creates images, aesthetics, problems, stereotypes, the purposes of which aren't explicitly direct, nor overtly militaristic. Ellul argues that sociological propaganda exists "in advertising, in the movies (commercial and non-political films), in technology in general, in education, in the Reader's Digest; and in social service, case work, and settlement houses" (Ellul 64). It is linked to what Ellul called "pre" or "sub-propaganda": that is, an imperceptible persuasion, silently operating within one's "style of life" or permissible attitude (63). Faintly anticipating Louis Althusser's Ideological State Apparatuses (Althusser 182) by nearly ten years, Ellul defines it as "the penetration of an ideology by means of its sociological context" (63). Sociological propaganda is inadequate for decisive action, paving the way for political propaganda – its strengthened explicit cousin – once the former's implicitness needs to be transformed into the latter's explicitness.

In a post-digital world, such implicitness no longer gathers wartime spirits, but instead propagates a neo-liberal way of life that is individualistic, wealth-driven and opinionated. Ellul's most powerful assertion is that 'facts' and 'education' are part and parcel of the sociological propagative effect: nearly everyone faces a compelling need to be opinionated, and we are all deemed capable of judging for ourselves what decisions should be made, without first considering the implicit landscape from which these judgements take place. One can only think of the implicit digital landscape of Twitter: the archetype of self-promotion and snippets of opinion and argument – all taking place within Ellul's sub-propaganda of data collection and concealment. Such methods, he warns, will have "solved the problem of man" (xviii).

But information is of relevance here, and propaganda is only effective within a social community when it offers the means to solve problems using the communicative purview of information:

Thus, information not only provides the basis for propaganda but gives propaganda the means to operate; for information actually generates the problems that propaganda exploits and for which it pretends to offer solutions. In fact, no propaganda can work until the moment when a set of facts has become a problem in the eyes of those who constitute public opinion (114).

Wed, 11 Dec 2013 15:42:45 -0800
<![CDATA[Artist Profile: Erica Scourti]]>

The latest in a series of interviews with artists who have developed a significant body of work engaged (in its process, or in the issues it raises) with technology. See the full list of Artist Profiles here.   Daniel Rourke: Your recent work, You Could've Said, is described as "a Google keyword confessional for radio." I've often considered your work as having elements of the confession, partly because of the deeply personal stance you perform—addressing us, the viewer or listener, in a one-on-one confluence, but also through the way your work hijacks and exposes the unseen, often algorithmic, functions of social and network media. You allow Google keywords to parasitize your identity and in turn you apparently "confess" on Google's behalf. Are you in search of redemption for your social-media self? Or is it the soul of the algorithm you wish to save? Erica Scourti: Or maybe the algorithm and the social media soul are now so intertwined and interdependent that it makes little sense to even separate the two, in an unlikely fulfillment of Donna Haraway's cyborg? Instead of having machines built into/onto us (Google glasses notwithstanding), the algorithms which parse our email content, Facebook behaviours, Amazon spending habits, and so on, don't just read us, but shape us. I'm interested in where agency resides when our desires, intentions and behaviours are constantly being tracked and manipulated through the media and technology that we inhabit; how can we claim to have any "authentic" desires? Facebook's "About" section actually states, "You can't be on Facebook without being your authentic self," and yet this is a self that must fit into the predetermined format and is mostly defined by its commercial choices (clothing brands, movies, ice cream, whatever). And those choices are increasingly influenced by the algorithms through the ambient, personalized advertising that surrounds us.
So in You Could've Said, which is written entirely in an instrumentalised form of language, i.e. Google's AdWords tool, I'm relaying the impossibility of having an authentic feeling, or even a first-hand experience, despite the seemingly subjective, emotional content and tone. Google search stuff is often seen as reflective of a kind of cute "collective self" (hey, we all want to kill our boyfriends sometimes!) but perhaps it's producing as much as reflecting us. It's not just that everything's already been said, and can be commodified, but that the devices we share so much intimate time with are actively involved in shaping what we consider to be our "selves," our identities. And yet, despite being entirely mediated, my delivery is "sincere" and heartfelt; I'm really interested in the idea of sincere, but not authentic. I think it's the same reason spambots can have such unexpected pathos; they seem to "express" things in a sincere way, which suggests some kind of "soul" at work there, or some kind of agency, and yet they totally lack interiority, or authenticity. In this and other work of mine (especially Life in AdWords) dissonance is produced by my apparent misrecognition of the algorithmically produced language as my own: mistaking the machine lingo as a true expression of my own subjectivity. Which is not to say that there is some separate, unmediated self that we could access if only we would disconnect our damn gadgets for a second, but the opposite—that autobiography, which my work clearly references, can no longer be seen as a narrative produced by some sort of autonomous subject, inseparable from the technology it interacts with. Also, autobiography often involves a confessional, affective mode, and I'm interested in how this relates to the self-exposure which the attention economy seems to encourage—TMI can secure visibility when there's not enough attention to go round.
With the Google confessional, I'm enacting an exposure of my flaws and vulnerabilities and while it's potentially "bad" for me (i.e. my mediated self) since you might think I'm a loser, if you're watching, then it's worth it, since value is produced simply through attention-retention. Affective vitality doesn't so much resist commodification as actively participate within it…

DR: You mention agency. When it comes to the algorithms that drive the current attention economy I tend to think we have very little. Active participation is all well and good, but the opposite—an opting out, rather than a passivity—feels increasingly impossible. I am thinking about those reCaptcha questions we spend all our time filling in. If I want to access my account and check the recommendations it has this week, I'm required to take part in this omnipresent, undeniably clever, piece of crowd-sourcing. Alan Turing's predictions of a world filled with apparently intelligent machines have come true, except it's the machines now deciding whether we are human or not. ES: Except of course—stating the obvious here—it's just carrying out the orders another human instructed it to, a mediated form of gatekeeping that delegates responsibility to the machine, creating a distance from the entirely human, social, political etc structure that has deemed it necessary (a bit like drones then?). I'm very interested also in the notion of participation as compulsory—what Zizek calls the "You must, because you can" moral imperative of consumerism—especially online, not just at the banal level (missing out on events, job opportunities, interesting articles and so on if you're not on Facebook) but because your actions necessarily feed back into the algorithms tracking and parsing our behaviours. And even opting out becomes a choice that positions you within a particular demographic (more likely to be vegetarian, apparently). Also, this question of opting out seems to recur in conversations around art made online, in a way it doesn't for artists working with traditional media—like, if you're being critical of it, why not go make your own Facebook, why not opt out?
My reasoning is that I like to work with widely used technology, out of an idea that the proximity of these media to mainstream, domestic and wider social contexts makes the work more able to reflect on its sociopolitical implications, just as some video artists working in the 80s specifically engaged with TV as the main mediator of public consciousness. Of course some say this is interpassivity, just feebly participating in the platforms without making any real change, and I can understand that criticism. Now that coded spaces and ubiquitous computing are a reality of the world—and power structures—we inhabit, I do appreciate artists who can work with code and software (in a way that I can't) and use their deeper understanding of digital infrastructure to reflect critically on it. DR: You've been engaged in a commission for Colm Cille's Spiral, sending personal video postcards to anyone who makes a request. Your interpretation of the "confessional" mode seems in this piece to become very human-centric again, since the work is addressed specifically at one particular individual. How has this work been disseminated, and what does your approach have to do with "intimacy"? ES: I've always liked Walter Benjamin's take on the ability of mediating technologies to traverse spatial distances, bringing previously inaccessible events within touching distance. With this project, I wanted to heighten this disembodied intimacy by sending unedited videos shot on my iPhone, a device that's physically on me at all times, directly to the recipients' inbox. So it's not just "sharing" but actually "giving" them a unique video file gift, which only they see, positioning the recipient as a captive audience of one, unlike on social media where you have no idea who is watching or who cares.
But also, I asked them to "complete" the video by adding its metadata, which puts them on the spot—they have to respond, instead of having the option to ignore me—and also extracting some labor in return, which is exactly what social media does: extracting our affective and attentive labor, supposedly optionally, in exchange for the gift of the free service. The metadata—tags, title and optionally a caption—became the only viewable part of the exchange, since I used it to annotate a corresponding black, "empty" video on Instagram, also shared on Twitter and Facebook, so the original content remains private. These blank videos record the creative output of the recipient, while acting as proof of the transaction (i.e. that I sent them a video). They also act as performative objects which will continue to operate online due to their tagging, which connects them to other groups of media and renders them visible—i.e. searchable—online, since search bots cannot as yet "see" video content. I wanted to make a work which foregrounds its own connectedness, both to other images via the hashtags but also to the author-recipients through tagging them on social media. So the process of constantly producing and updating oneself within the restrictive and pre-determined formats of social media platforms, i.e. their desired user behaviours, becomes almost the content of the piece. I also like the idea that hashtag searches on all these platforms, for (let's say) Greece, will bring up these blank/ black videos (which by the way, involved a little hack, as Instagram will not allow you to upload pre-recorded content and it's impossible to record a black and silent video...). It's a tiny intervention into the regime of carefully filtered and cropped life-style depictions that Instagram is best known for. 
It's also a gesture of submitting oneself to the panoptical imperative to share one's experience no matter how private or banal, hence using Instagram for its associations with a certain solipsistic self-display; by willingly enacting the production of mediated self on social media I'm exploring a kind of masochistic humour which has some affinities with what Benjamin Noys identified as an accelerationist attitude of "the worse the better." And yet, by remaining hidden, and not publicly viewable, the public performance of a mediated self is denied.

DR: An accelerationist Social Media artwork would have to be loaded with sincerity, firstly, on the part of the human (artist/performer), but also, in an authentic attempt to utilise the network completely on its terms. Is there something, then, about abundance and saturation in your work? An attempt to overload the panopticon? ES: That's a very interesting way of putting it. I sometimes relate that oversaturation to the horror vacui of art that springs from a self-therapeutic need, which my work addresses, though it's less obsessive scribbles, more endless connection, output and flow and semi-ritualistic and repetitive working processes. And in terms of utilizing the network on its own terms, Geert Lovink's notion of the "natural language hack" (rather than the "deep level" hack) is one I've thought about—where your understanding of the social, rather than technical, operation of online platforms gets your work disseminated. For example my project Woman Nature Alone, where I re-enacted stock video which is freely available on my Youtube channel—some of those videos are high on the Google ranking page, so Google is effectively "marketing" my work without me doing anything. Whether it overloads the panopticon, or just contributes more to the babble, is a pertinent question (as Jodi Dean's work around communicative capitalism has shown), since if the work is disseminated on commercial platforms like YouTube or Facebook, it operates within a system of value generation which benefits the corporation, involving, as is by now well known, a Faustian pact of personal data in exchange for "free" service. And going back to agency—the mutability of the platforms means that if the work makes use of particular features (such as YouTube annotations) its existence is contingent on them being continued; since the content and the context are inextricable in situations like this, it would become impossible to display the original work exactly as it was first made and seen.
Even then, as with Olia Lialina and Dragan Espenschied's One Terabyte of Kilobyte Age, it would become an archive, which preserves documents from a specific point in the web's history but cannot replicate the original viewing conditions because all the infrastructure around it has changed completely. So if the platforms—the corporations—control the context and viewing conditions, then artists working within them are arguably at their mercy, and keeping the endless flow alive by adding to it. I'm more interested in working within the flows rather than, as some artists prefer, rejecting the dissemination of their work online. Particularly with moving image work, I'm torn between feeling that artists' insistence on certain very specific, usually high quality, viewing conditions for their work bolsters, as Sven Lütticken has argued, the notion of the rarefied auratic art object whose appreciation requires a kind of hushed awe and reverence, while being aware that the opposite—the image ripped from its original location and circulated in crap-res iPhone pics/ videos—is an example of what David Joselit would call image neoliberalism, which sees images as site-less and, like any other commodity, to be traded across borders and contexts with no respect for the artist's intentions. However, I also think that this circulation is becoming an inevitability and no matter how much you insist your video is viewed on a zillion-lumen projector (or whatever), it will most likely end up being seen by the majority of viewers on YouTube or on a phone screen; I'm interested in how artists (like Hito Steyerl) address, rather than avoid, the fact of this image velocity and spread. DR: Lastly, what have you been working on recently? What's next?
ES: I recently did a series of live, improvised performances called Other People's Problems, broadcast direct to people's desktops with Field Broadcast, where I read out streams of tags and captions off Tumblr, Instagram and Facebook, randomly jumping to other tags as I went. I'm fascinated by tags—they're often highly idiosyncratic and personal, as well as acting as connective tissue between dispersed users; but I also liked the improvisation, where something can go wrong, and the awkwardness it creates. (I love awkwardness!) Future projects are going to explore some of the ideas this work generated: how to improvise online (when things can always be deleted/rejigged afterwards), how to embrace the relinquishing of authorial control which I see as integral to the online (or at least social media) experience, and how to work with hashtags/metadata both as text in its own right and as a tool.

Age: 33
Location: London, Athens when I can manage it

How long have you been working creatively with technology? How did you start?

14, 15 maybe, when I started mucking around with Photoshop—I remember scanning a drawing I'd made of a skunk from a Disney tale and making it into a horrendous composition featuring a rasta flag background... I was young. And I've always been obsessed with documenting things; growing up I was usually the one in our gang who had the camera—showing my age here, imagine there being one person with a camera—which has given me plenty of blackmail leverage and a big box of tastefully weathered photos that, despite my general frustration with analogue nostalgia, I know I will be carrying around with me for life.

Where did you go to school? What did you study?

After doing Physics, Chemistry and Maths at school, I did one year of a Chemistry BA, until I realized I wasn't cut out for lab work (too much like cooking) or what seemed like the black-and-white nature of scientific enquiry.
I then did an art and design foundation at a fashion college, followed by one year of a Fine Art Textiles BA—a nonsensical course whose only redeeming feature was its grounding in feminist theory—before finally entering the second year of a Fine Art BA. For a while this patchy trajectory through art school made me paranoid, until I realised it probably made me sound more interesting than I am. And in my attempt to alleviate the suspicion that there was some vital piece of information I was missing, I also did loads of philosophy diploma courses, which actually did come in handy when back at uni last year: I recently finished a Master of Research in moving image art.

What do you do for a living or what occupations have you held previously? Do you think this work relates to your art practice in a significant way?

At the moment I'm just about surviving as an artist, and I've always been freelance apart from time done in bar, kitchen, shop (Londoners, remember Cyberdog?), cleaning and nightclub jobs, some of which the passage of time has rendered as amusingly risqué rather than borderline exploitative. After my BA, I set up in business with the Prince's Trust, running projects with what are euphemistically known as hard-to-reach young people, making videos, digital art pieces and music videos until government funding was pulled from the sector. I mostly loved this work and it definitely fed into and reflects my working with members of loose groups, like the meditation community around the Insight Timer app, or Freecycle, or Facebook friends. I've also been assisting artist and writer Caroline Bergvall on and off for a few years, which has been very helpful in terms of observing how an artist makes a life/living.

What does your desktop or workspace look like?

I'm just settling into a new space at the moment but invariably, a bit of a mess, a cup of tea, piles of books, and both desktop and workspace are covered in neon post-it notes.
Generally I am a paradigmatic post-Fordist flexi worker though: I can and do work pretty much anywhere—to the occasional frustration of friends and family. 

Tue, 08 Oct 2013 07:30:18 -0700
<![CDATA[The Phantom Zone]]>

The boundary between science fiction and social reality is an optical illusion.

Donna Haraway, A Cyborg Manifesto (1991) [1]

This is no fantasy... no careless product of wild imagination. No, my good friends.

The opening lines of Richard Donner's Superman (1978) [2]

In a 1950 film serial entitled Atom Man vs Superman, [3] television executive and evil genius Lex Luthor sends Superman into a ghostly limbo he calls "The Empty Doom." Trapped in this phantom void, Superman finds his infinite powers rendered useless, for although he can still see and hear the "real" world, his ability to interact with it has all but disappeared. Over the following decades this paraspace [4]—to use Samuel Delany's term for a fictional space, accessed via technology, that is neither within nor entirely separate from the 'real' world—would reappear in the Superman mythos in various forms, beginning in 1961. Eventually dubbed "The Phantom Zone," its back story was reworked substantially, until by the mid-60s it had become a parallel dimension discovered by Superman's father, Jor-El. Once used to incarcerate Krypton's most unsavory characters, The Phantom Zone had outlasted its doomed home world and eventually burst at the seams, sending legions of super-evil denizens raining down onto Earth. Beginning its life as an empty doom, The Phantom Zone was soon filled with terrors prolific enough to make even The Man of Steel fear its existence.

Overseen by story editor Mortimer Weisinger and the unfortunately named artist Wayne Boring, the late 50s and early 60s were a strange time in the Superman universe. The comics suddenly became filled with mutated variants of kryptonite that gave Superman the head of an ant or the ability to read thoughts; with miniature Supermen arriving seconds before their namesake to save the day and steal his thunder; with vast universes of time caught fast in single comic book panels. It was an era of narrative excess wrapped in a tighter, more meticulous and, many would say, repressed aesthetic:

Centuries of epic time could pass in a single caption. Dynasties fell between balloons, and the sun could grow old and die on the turn of a page. It was a toy world, too, observed through the wrong end of a telescope. Boring made eternity tiny, capable of being held in two small hands. He reduced the infinite to fit in a cameo... [5]

The Phantom Zone is one of the least bizarre narrative concepts from what is now known as the Silver Age of DC Comics (following on from the more widely celebrated Golden Age). It could be readily understood on a narrative level, and it had a metaphorical dimension as well, one that made conceivable the depths contained in Superman's vast, but ultimately manipulable, universe. The Phantom Zone was usually portrayed on a television screen kept safe in one of the many rooms of the Justice League headquarters. It could also be used as a weapon and fired from a portable projection device—the cold, harsh infinity of the Empty Doom blazing into Superman's world long enough to ensnare any character foolish enough to stand in its rays. Whether glimpsed on screen or via projection, then, The Phantom Zone could be interpreted as a metaphor for the moving image.

In comic books, as in the moving image, the frame is the constituent element of narrative. Each page of a comic book is a frame which itself frames a series of frames, so that by altering each panel's size, bleed or aesthetic variety, time and space can be made elastic. Weisinger and Boring's Phantom Zone took this mechanism further, behaving like a weaponized frame free to roam within the comic book world. Rather than manipulating three-dimensional space or the fourth dimension of time, as the comic book frame does, The Phantom Zone opened out onto the existence of other dimensions. It was a comic book device that bled beyond the edge of the page, out into a world in which comic book narratives were experienced not in isolation, but in parallel with the onscreen narratives of the cinema and the television. As such, the device heralded televisual modes of attention.

For his 1978 big-budget movie version of Superman, [6] director Richard Donner cunningly translated The Phantom Zone into something resembling the cinema screen itself. In the film's opening sequence, a crystal surface swoops down from the immense backdrop of space, rendering the despicable General Zod and his cronies two-dimensional as it imprisons them. In the documentary The Magic Behind the Cape, [7] bundled with the DVD release of Superman in 2001, we are given an insight into the technical prowess behind Donner's Phantom Zone. The actors are made to simulate existential terror against the black void of the studio, pressed up against translucent, flesh-like membranes and physically rotated out of sync with the gaze of the camera. Rendering the faux two-dimensional surface of Donner's Phantom Zone believable required all manner of human dimensions to be framed out of the final production. The actors react to causes generated beyond the studio space, the director's commands, or the camera's gaze. They twist and recoil from transformations still to occur in post-production. In a sense, the actors behave as bodies that are already images. With its reliance on post-produced visual effects, the Phantom Zone sequence represents an intermediary stage in the gradual removal of sets, locations, and any 'actual' spatial depths from the film production process. Today, actors must address their humanity to green voids post-produced with CGI, and the indexical relationship between the film image and the events unfolding in front of the lens has been almost entirely shattered. In this Phantom cinema produced after the event, ever-deeper layers of special effects seal actors into a cinematic paraspace. Just as The Phantom Zone of the comic book heralded televisual modes of attention, The Phantom Zone of the 1970s marked a perceptual regime in which the cinematic image was increasingly sealed off from reality by synthetic visual effects.

For Walter Benjamin, writing during cinema's first "Golden Era," the ability of the cinema screen to frame discontinuous times and spaces represented its most profound "truth." Delivered by cinema, Benjamin argued, mechanically disseminated images were actually fracturing the limits of our perceptions, training "human beings in the apperceptions and reactions needed to deal with a vast apparatus whose role in their lives is expanding almost daily." [8] To audiences confined to finite bodies that had never before experienced such juxtapositions, the cinema screen offered an apparently shared experience of illuminated consciousness. Far from inventing this new mode of perception through the "shock-character" of montage, Benjamin believed that cinema spoke of the 'truths' which awaited us beneath the mirage of proletarian experience. Truths which would guide us—with utopian fervor—towards an awareness, and eventual control, of what Benjamin called the "new nature":

Not just industrial technology, but the entire world of matter (including human beings) as it has been transformed by that technology. [9]

In short, cinema was less a technology than a new and evolving mode of machinic thought, both generated by and generating the post-industrial subject. Observing the relation between representation and visibility, Jens Andermann notes:

Truth, the truth of representation, crucially depends on the clear-cut separation between the visible and the invisible, the non-objectness of the latter. Truth is the effect of what we could call the catachretic nature of visuality, the way in which the world of visual objects can point to the invisible domain of pure being only by obsessively pointing to itself. [10]

As its Greek root aisthanesthai ("to perceive") suggests, the aesthetic conditions through which The Phantom Zone has been translated frame far more than a supposed fictional void. Called upon to indicate an absolute outside, the unfathomable infinity of another, ghostly, parallel universe, The Phantom Zone instead reiterates the medium of its delivery, whether comic book, television, or cinema, with mirror-like insistency. Such is the power of new technical modes of thought that they often cause us to rethink outmoded media we are so used to that we no longer notice them. The Phantom Zone hides the cinematographic image in plain view. Its reappearance and reimagining over the last 60-odd years, in ever newer forms and aesthetic modes, can be read paradigmatically, that is, as a figure that stands in place of, and points towards, shifts, mutations and absolute overturnings in our perceptual apparatus. Its most recent iteration is in the 2013 Superman reboot, Man of Steel, [11] and in particular in a 'viral' trailer distributed on YouTube a few weeks before the film was released. [12] Coming towards us soars a new mode of machinic thought; a Phantom Zone of unparalleled depth and aesthetic complexity that opens onto a new, digital, 'new nature'.

The General Zod trailer for Man of Steel begins with a static rift that breaks into a visual and audial disarrangement of the phrase, "You are not alone". General Zod's masked face materializes, blended with the digital miasma: a painterly 3D effect that highlights the inherent 'otherness' of where his message originates. The aesthetic is unsettling in as much as it is recognizable. We have no doubt as viewers of this 'viral' dispatch as to the narrative meaning of what we are witnessing, namely, a datastream compressed and distributed from a paraspace by an entity very much unlike us. The uncanny significance of the trailer stems more from how very normal the digital miasma feels; from how apprehensible this barrage of noise is to us. Indeed, it is 'other', but its otherness is also somehow routine, foreseeable. The pathogen here is not Zod's message, it is digital technology itself. The glitched aesthetic of the trailer has become so habitual as to herald the passing of digital materiality into the background of awareness. Its mode of dissemination, via the Trojan Horse of YouTube, is just as invisible to us during the regular shifts we make between online/offline modes of communication. The surface of this Phantom Zone very much interfaces with our material world, even if the message it impresses upon us aches to be composed of an alien substance.

Digital video does the work of representation via a series of very clever algorithms called codecs that compress the amount of information needed to produce a moving image. Rather than the individual frames of film, each as visually rich and total as the last, in a codec only the difference between frames need be encoded, making each frame "more like a set of movement instructions than an image." [13] The painterly technique used in the General Zod trailer derives from a collapse between key (image) and reference (difference) frames at the stage of encoding.
The process is called 'datamoshing', and has its origins in glitch art, a form of media manipulation predicated on those minute moments when the surface of an image or sound cracks open to reveal some aspect of the process that produced it. By a method of cutting, repeating or glitching key and reference frames, visual representations are made to blend into one another; space becomes difference and time becomes image. The General Zod trailer homages/copies/steals the datamoshing technique, marking digital video's final move from convenient means of dissemination to palpable aesthetic and cultural influence.

In the actual movie, Man of Steel (2013), Zod's video message is transposed in its entirety to the fictional Planet Earth. The viral component of its movement around the web is entirely absent: its apparent digitality, therefore, remains somewhat intact, but only as a mere surface appearance. This time around the message shattering through The Phantom Zone is completely devoid of affective power: it frames nothing but its existence as a narrative device. The filmmakers rely on a series of "taking over the world" tropes to set the stage for General Zod's Earth-shaking proclamation. TV sets in stereotypically exotic locales flicker into life, all broadcasting the same thing. Electronic billboards light up, loudspeakers blare, mobile phones rumble in pockets; indeed, all imaging technologies suddenly take on the role of prostheses for a single, datamoshed, stream. In one—particularly sincere—moment of the montage a faceless character clutches a Nokia brand smartphone in the centre of the shot and exclaims, "It's coming through the RSS feeds!" This surface, this Phantom Zone, frames an apparatus far vaster than a datamoshed image codec: an apparatus apparently impossible to represent through the medium of cinema.
The surface appearance of the original viral trailer is only a small component of what constitutes the image it conveys, and thus, of the image it frames of our time. Digital materiality shows itself via poorly compressed video clips arriving through streams of overburdened bandwidth. Our understanding of what constitutes a digital image must then, according to Mark Hansen, “be extended to encompass the entire process by which information is made perceivable." [14]
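The codec mechanism described above (keyframes as full images, reference frames as "movement instructions", and datamoshing as a collapse between the two) can be sketched as a toy model. This is an illustrative sketch only, not the behaviour of any real codec: the function names are invented, and "frames" here are flat lists of pixel values rather than compressed macroblocks. Stripping a keyframe forces the decoder to apply new motion to stale imagery, which is the smeared, painterly effect datamoshers exploit.

```python
# Toy model of key/reference-frame video encoding and "datamoshing".
# Frames are flat lists of pixel values; all names are illustrative,
# not any real codec's API.

def encode(frames, keyframe_interval=3):
    """Store a full image every `keyframe_interval` frames; otherwise
    store only the per-pixel difference from the previous frame."""
    stream, prev = [], None
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            stream.append(("key", list(frame)))  # full image
        else:
            # "movement instructions": differences, not pictures
            stream.append(("delta", [c - p for c, p in zip(frame, prev)]))
        prev = frame
    return stream

def decode(stream):
    """Rebuild frames by applying each delta to the current image."""
    out, current = [], None
    for kind, data in stream:
        if kind == "key":
            current = list(data)
        else:
            current = [p + d for p, d in zip(current, data)]
        out.append(list(current))
    return out

def datamosh(stream):
    """Strip every keyframe after the first, so later deltas are
    applied to stale imagery instead of the intended new scene."""
    return stream[:1] + [f for f in stream[1:] if f[0] != "key"]

frames = [[10, 10, 10], [12, 10, 9], [14, 10, 8],  # slow pan...
          [99, 50, 0], [99, 52, 1]]                # ...then a scene cut
clean = decode(encode(frames))       # faithful reconstruction
moshed = decode(datamosh(encode(frames)))
# moshed[-1] == [14, 12, 9]: the cut's keyframe never arrives, so the
# new scene's motion is smeared across the old imagery.
```

Real codecs are far more elaborate, using motion-compensated blocks rather than raw pixel differences, but the dependence of every difference frame on its last keyframe, and the smearing that results when that dependency is broken, is the principle datamoshing exploits.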

In its cinematic and comic book guises, The Phantom Zone was depicted as "a kind of membrane dividing yet connecting two worlds that are alien to and also dependent upon each other". [15] The success of the datamoshed trailer comes from the way it broke through that interface, its visual surface bubbling with a new kind of viral, digital, potential that encompasses and exposes the material engaged in its delivery. As cinematographic subjects we have an integral understanding of the materiality of film. Although we know that the frames of cinema are separate, we crave the illusion of movement, and the image of time, they create. The 'viral' datamoshed message corrupts this separation between image and movement, the viewer and the viewed. Not only does General Zod seem to push out from inside the numerical image, it is as if we, the viewing subject enraptured by the digital event, have been consumed by its flow. The datamoshed Phantom Zone trailer takes one last, brave step beyond the apparatus of image production. Not only are the studio, the actor, and even the slick appeal of CGI framed out of its mode of delivery; arriving through a network that holds us complicit, this Phantom Zone frames the 'real' world in its entirety, rendering even the fictional world it appeals to devoid of affective impact. To take liberty with the words of Jean Baudrillard:

[Jorge Luis] Borges wrote: they are slaves to resemblance and representation; a day will come when they will try to stop resembling. They will go to the other side of the mirror and destroy the empire. But here, you cannot come back from the other side. The empire is on both sides. [16]

Once again, The Phantom Zone highlights the material mode of its delivery with uncanny exactness. We are now surrounded by images that supersede mere visual appearance: they generate and are generated by everything the digital touches, including us, the most important component of General Zod's 'viral' diffusion. The digital Phantom Zone extends to both sides of the flickering screen.

References

[1] Donna Haraway, Simians, Cyborgs and Women : The Reinvention of Nature. (London: Free Association Books Ltd, 1991), 149–181.

[2] Richard Donner, Superman, Action, Adventure, Sci-Fi, 1978.

[3] Spencer Gordon Bennet, Atom Man Vs. Superman, Sci-Fi, 1950.

[4] Scott Bukatman, Terminal Identity: The Virtual Subject in Postmodern Science Fiction (Durham: Duke University Press, 1993), 164.

[5] Grant Morrison, Supergods: Our World in the Age of the Superhero (London: Vintage Books, 2012), 62.

[6] Donner, Superman.

[7] Michael Thau, The Magic Behind the Cape, Documentary, Short, 2001. See :

[8] Walter Benjamin, "The Work of Art in the Age of Its Technological Reproducibility," in The Work of Art in the Age of Its Technological Reproducibility, and Other Writings on Media (Cambridge, Mass.: Belknap Press of Harvard University Press, 2008), 26.

[9] Susan Buck-Morss, The Dialectics of Seeing: Walter Benjamin and the Arcades Project (MIT Press, 1991), 70.

[10] Jens Andermann, The Optic of the State: Visuality and Power in Argentina and Brazil (University of Pittsburgh Press, 2007), 5.

[11] Zack Snyder, Man of Steel, Action, Adventure, Fantasy, Sci-Fi, 2013.

[12] Man of Steel Viral - General Zod's Warning (2013) Superman Movie HD, 2013,

[13] BackStarCreativeMedia, “Datamoshing—the Beauty of Glitch," April 9, 2009,

[14] Mark B. Hansen, “Cinema Beyond Cybernetics, or How to Frame the Digital Image," Configurations 10, no. 1 (2002): 54, doi:10.1353/con.2003.0005.

[15] Mark Poster, The Second Media Age (Wiley, 1995), 20.

[16] Jean Baudrillard, “The Murder of the Sign," in Consumption in an Age of Information, ed. Sande Cohen and R. L. Rutsky (Berg, 2005), 11.  

Tue, 10 Sep 2013 08:00:00 -0700
<![CDATA[Roger Scruton – A culture of fake originality]]>

A high culture is the self-consciousness of a society. It contains the works of art, literature, scholarship and philosophy that establish a shared frame of reference among educated people. High culture is a precarious achievement, and endures only if it is underpinned by a sense of tradition, and by

Tue, 22 Jan 2013 05:16:00 -0800
<![CDATA[Meaning as gloss]]>

Frances Egan is a mind-bombing philosopher who wonders on explanatory frameworks of science, the fits and starts of mind evolution, the links between neuroscience and meaning, the redness of tomatoes, the difference between horizon and zenith moons, fMRI interfaces with philosophy, mind/computer uploading and the consciousness of the USA. All in all, she is a deep groove hipster of the philo-mindster jive. Awesome!

3:AM: What made you a philosopher and has it been rewarding so far?

Frances Egan: I read some political philosophy on my own in high school, but I wasn’t exposed to philosophy systematically until college. I took a philosophy course in my first semester because I was looking for something different. After a brief introduction to logic we discussed the problem of evil: how could an omnipotent, benevolent god allow so much pain and suffering? I was raised Catholic but that was the end of religion for me. Nothing quite that dramatic has happened since, but thinking about fund

Wed, 14 Nov 2012 04:39:00 -0800
<![CDATA[Errormancy: Glitch as Divination – A New Essay by Kim Cascone]]>

To the modern mind, a glitch is an unwanted artifact, a momentary interruption of expected behavior produced by a faulty system. In an instant it changes the user’s relationship with that system. A glitch instills suspicion, indicating the system is unreliable, corrupted, not to be trusted.

This is the view most commonly held today by the mental-rational mind, a consciousness formed by living in a mechanistic technological society. We have been trained via a form of shock treatment to panic when things go wrong. After learning the quirks of a system we come to react to these intrusive events by recalling a bullet-list of troubleshooting tips, throwing them at the problem in hopes that one will fix it.

Early in the history of digital media, when the science of error correction was in its infancy, artists discovered that glitches could oftentimes produce wondrous artifacts. And that, much like the technique of the “Cut-up,” formed new juxtapositions that seemingly came from nowhere. As if invoked or summoned with a toss of dice.

Sat, 29 Sep 2012 04:14:00 -0700
<![CDATA[The Image in Mind: Theism, Naturalism, and the Imagination]]>

Written jointly by philosopher of religion Charles Taliaferro and visual artist Jil Evans, The Image in Mind takes up a suggestion of Suzanne Langer that religious thought operates primarily with images and that images alone can make us aware of the wholeness and form of entities and reality. Furthermore, and this is the claim upon which the authors want to extend Langer's proposal, only images can serve as a measure of the adequacy of the terms that describe reality. Taliaferro and Evans aim to apply this idea as a test of theistic versus naturalistic metaphysical worldviews.

In line with this approach, they speak of "the theistic image of the world" in contrast to "the naturalistic image." They begin with a general Platonic theism, centering on the idea of a teleological cosmos informed by values of truth, beauty and goodness, created and sustained by an all-good, purposive immaterial being. However, Taliaferro and Evans more often than not discuss specifically Christian forms of theism, and in the end defend a particular form that includes a Christus Victor image of redemption and the promise of an eternal afterlife. Naturalism rejects this in favor of a strictly materialistic image of the world, though broad naturalism allows for the emergence of properties like consciousness, values and aesthetics.

Mon, 02 Apr 2012 14:34:48 -0700
<![CDATA[Bioconservatives vs. Bioprogressives]]>

We are now living in the age of biopolitics, claims University of Pennsylvania bioethicist Jonathan Moreno in his new book The Body Politic: The Battle Over Science in America. “Biopolitics is the nonviolent struggle for control over the actual and imagined achievements of the new biology and the new world it symbolizes,” he writes. “The stakes are about as big as they can get.” Moreno is right.

Our biopolitical and bioethical struggles span human concerns from birth to death. Should embryos be tested genetically in vitro, allowing parents to implant only those they choose? What about using embryos to produce stem cells that can be transformed into tissues to repair damaged hearts and brains? Is it OK to create mice endowed with human brain cells? When is it appropriate to halt medical care for people who show no signs of minimal consciousness?

Thu, 01 Mar 2012 01:36:22 -0800