Algorithmic Narratives and Synthetic Subjects (paper)
http://machinemachine.net/portfolio/paper-at-theorizing-the-web-synthetic-subjects/

This was the paper I delivered at The Theorizing the Web Conference, New York, 18th April 2015. This video of the paper begins part way in, and misses out some important stuff. I urge you to watch the other, superb, papers on my panel by Natalie Kane, Solon Barocas, and Nick Seaver. A better video is forthcoming. I posted this up partly in response to this post at Wired about the UK election, Facebook’s echo-chamber effect, and other implications well worth reading into.

Data churning algorithms are integral to our social and economic networks. Rather than replace humans, these programs are built to work with us, allowing the distinct strengths of human and computational intelligences to coalesce. As we are submerged into the era of ‘big data’, these systems have become more and more common, concentrating every terabyte of raw data into meaningful arrangements more easily digestible by high-level human reasoning. A company calling themselves ‘Narrative Science’, based in Chicago, have established a profitable business model based on this relationship. Their slogan, ‘Tell the Stories Hidden in Your Data’, [1] is aimed at companies drowning in spreadsheets of cold information: a promise that Narrative Science can ‘humanise’ their databases with very little human input. Kristian Hammond, Chief Technology Officer of the company, claims that within 15 years over 90% of all news stories will be written by algorithms. [2] But rather than replacing the jobs that human journalists now undertake, Hammond claims the vast majority of this ‘robonews’ output will report on data not currently covered by traditional news outlets. One family-friendly example of this is the coverage of little-league baseball games. Very few news organisations have the resources, or the desire, to hire a swathe of human journalists to write up every little-league game. Instead, Narrative Science offer leagues, parents and their children a miniature summary of each game, gleaned from match statistics uploaded by diligent little-league attendees and written up in a variety of journalistic styles.
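The pipeline gestured at above, match statistics in, journalistic prose out, can be caricatured in a few lines of template-based data-to-text generation. This is a hypothetical sketch only: Narrative Science's actual system is proprietary and far more sophisticated, and every name below is invented for illustration.

```python
# Illustrative sketch of template-based data-to-text generation,
# the general shape of turning a box score into a news item.
# Ties are ignored for brevity.

def summarise_game(stats, style="upbeat"):
    """Turn a little-league box score into a one-line match report."""
    margin = abs(stats["home_score"] - stats["away_score"])
    if stats["home_score"] > stats["away_score"]:
        winner, loser = stats["home"], stats["away"]
    else:
        winner, loser = stats["away"], stats["home"]
    # The 'journalistic style' is just a choice of template and verb.
    verb = "edged past" if margin <= 2 else "cruised past"
    if style == "upbeat":
        return f"{winner} {verb} {loser}, {stats['home_score']}-{stats['away_score']}."
    return f"{winner} defeated {loser}."

game = {"home": "Robins", "away": "Bluejays",
        "home_score": 5, "away_score": 4}
print(summarise_game(game))  # prints: Robins edged past Bluejays, 5-4.
```

Swapping templates per audience (parents, coaches, league tables) is what lets the same statistics yield 'a variety of journalistic styles'.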
In their 2013 book ‘Big Data’, Oxford University Professor of internet governance Viktor Mayer-Schönberger and Kenneth Cukier, ‘data editor’ of The Economist, tell us excitedly about another data aggregation company, Prismatic, who: ‘…rank content from the web on the basis of text analysis, user preferences, social network-popularity, and big-data analysis.’ [3] According to Mayer-Schönberger and Cukier this makes Prismatic able ‘to tell the world what it ought to pay attention to better than the editors of the New York Times’. [4] A situation, Steven Poole reminds us, we can little argue with so long as we agree that popularity underlies everything that is culturally valuable.

Data is now the lifeblood of technocapitalism: a vast, endless influx of information flowing in from the growing universe of networked and internet-connected devices. As many of the papers at Theorizing the Web attest, our environment is more and more founded on systems whose job it is to mediate our relationship with this data. Technocapitalism still appears to respond to Jean-François Lyotard’s formulation of Postmodernity: that whether something is true has less relevance than whether it is useful. In 1979 Jean-François Lyotard described the Postmodern Condition as a change in “the status of knowledge” brought about by new forms of techno-scientific and techno-economic organisation. If a student could be taught effectively by a machine, rather than by another human, then the most important thing we could give the next generation was what he called “elementary training in informatics and telematics.” In other words, as long as our students are computer literate “pedagogy would not necessarily suffer”. [5] The next passage – where Lyotard marks the Postmodern turn from the true to the useful – became one of the book’s most widely quoted, and it is worth repeating here at some length:

It is only in the context of the grand narratives of legitimation – the life of the spirit and/or the emancipation of humanity – that the partial replacement of teachers by machines may seem inadequate or even intolerable. But it is probable that these narratives are already no longer the principal driving force behind interest in acquiring knowledge. [6]

Here, I want to pause to set in play at least three elements from Lyotard’s text that colour this paper. Firstly, the historical confluence between technocapitalism and the era now considered ‘postmodern’. Secondly, the association of ‘the grand narrative’ with modern, and pre-modern, conditions of knowledge. And thirdly, the idea that the relationship between the human and the machine – or computer, or software – is generally one-sided: i.e. we may shy away from the idea of leaving the responsibility of our children’s education to a machine, but Lyotard’s position presumes that since the machine was created and programmed by humans, it will therefore necessarily be understandable, and thus controllable, by humans. Today, Lyotard’s vision of an informatically literate populace has more or less come true. Of course we do not completely understand the intimate workings of all our devices or the software that runs them, but the majority of the world population has some form of regular relationship with systems simulated on silicon. And as Lyotard himself made clear, the uptake of technocapitalism, and therefore the devices and systems it propagates, is piecemeal and difficult to predict or trace. At the same time as Google’s fleet of self-driving motor vehicles is let loose on Californian state highways, in parts of sub-Saharan Africa mobile phones designed ten or more years ago are allowing farming communities to aggregate their produce into quantities with greater potential to turn a profit on the world market.
As Brian Massumi remarks, network technology allows us the possibility of “bringing to full expression a prehistory of the human”, a “worlding of the human” that marks the “becoming-planetary” of the body itself. [7] This “worlding of the human” represents what Edmund Berger argues is the death of the Postmodern condition itself: [T]he largest bankruptcy of Postmodernism is that the grand narrative of human mastery over the cosmos was never unmoored and knocked from its pulpit. Instead of making the locus of this mastery large aggregates of individuals and institutions – class formations, the state, religion, etc. – it simply has shifted the discourse towards the individual his or herself, promising them a modular dreamworld for their participation… [8]

Algorithmic narratives appear to continue this trend. They are piecemeal, tending to feed back users’ dreams, wants and desires through carefully aggregated, designed and packaged narratives for individual ‘use’. A world not of increasing connectivity and understanding between entities, but a network worlded to each individual’s data-shadow. This situation is reminiscent of the ‘filter bubble’, or ‘you loop’, pointed out by Eli Pariser: a prevalent outcome of social media platforms tweaked and personalised by algorithms to echo back at the user exactly the kind of thing they want to hear. As algorithms develop in complexity, the stories they tell us about the vast sea of data will tend to become more and more enamouring, more and more palatable. Like some vast synthetic evolutionary experiment, those algorithms that devise narratives users dislike will tend to be killed off in the feedback loop, in favour of other algorithms whose turn of phrase, or ability to stroke our egos, is more pronounced. For instance, Narrative Science’s early algorithms for creating little-league narratives tended to focus on the victors of each game.
What Narrative Science found is that parents were more interested in hearing about their own children, the tiny ups and downs that made the game significant to them. So the algorithms were tweaked in response. Again, to quote chief scientist Kris Hammond of Narrative Science: ‘These are narratives generated by systems that understand data, that give us information to support the decisions we need to make about tomorrow.’ [9]

Whilst we can program software to translate the informational nuances of a baseball game, or internet social trends, into human-palatable narratives, larger social, economic and environmental events also tend to get pushed through an algorithmic meatgrinder to make them more palatable. The ‘tomorrow’ that Hammond claims his company can help us prepare for is one that, presumably, companies like Narrative Science and Prismatic will play an ever larger part in realising. In her recently published essay ‘Crisis and the Temporality of Networks’, Wendy Chun reminds us of the difference between the user and the agent in the machinic assemblage: Celebrations of an all powerful user/agent – ‘you’ as the network, ‘you’ as the producer – counteract concerns over code as law as police by positing ‘you’ as the sovereign subject, ‘you’ as the decider. An agent, however, is one who does the actual labor, hence an agent is one who acts on behalf of another. On networks, the agent would seem to be technology, rather than the users or programmers who authorize actions through their commands and clicks. [10]

In order to unpack Wendy Chun’s proposition here we need only look at two of the most powerful and impactful algorithms from the last ten years of the web. Firstly, Amazon’s recommendation system, which I assume you have all interacted with at some point. And secondly, Facebook’s news feed algorithm, which ranks and sorts posts on your personalised stream.
Both these algorithms rely on a community of user interactions to establish a hierarchy of products, or posts, based on popularity. Both also function in response to users’ past activity, and both, of course, have been tweaked and altered over time by the design and programming teams of the respective companies. As we are all no doubt aware, one of the most significant driving principles behind these extraordinarily successful pieces of code is capitalism itself: the drive for profit, and the bearing that has on distinguishing a successful company, service or product from a failing one. Wendy Chun’s reminder that those who carry out an action, who program and click, are not the agents here should give us pause. We are positioned as sovereign subjects over our data because that idea is beneficial to the propagation of the ‘product’. Whether we are told how well our child has done at baseball, or what particular kinds of news stories we might like, personally, to read right now, it is to the benefit of technocapitalism that those narratives are positive, palatable and uncompromising. However the aggregation and dissemination of big data affects our lives over the coming years, the likelihood is that at the surface – on our screens and ubiquitous handheld devices – everything will seem rosy, comfortable, and suited to the ‘needs’ and ‘use’ of each sovereign subject.
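The selection dynamic just described, community popularity blended with a user's own past activity, can be sketched in a few lines. This is a hypothetical illustration of the general mechanism, not Amazon's or Facebook's actual (proprietary) code, and all field names are invented.

```python
# Minimal sketch of a personalised feed ranking: each post is scored
# by a weighted blend of a community signal (likes) and a personal
# signal (how often the user has engaged with that topic before).

def rank_feed(posts, user_history, popularity_weight=0.5):
    """Return posts sorted by blended popularity/affinity score --
    the 'filter bubble' mechanism in miniature."""
    def score(post):
        popularity = post["likes"]                    # community signal
        affinity = user_history.count(post["topic"])  # personal signal
        return popularity_weight * popularity + (1 - popularity_weight) * affinity
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "baseball", "likes": 2},
    {"id": 2, "topic": "politics", "likes": 10},
    {"id": 3, "topic": "baseball", "likes": 1},
]
history = ["baseball", "baseball", "baseball"]

# With personal affinity weighted heavily, low-popularity posts the
# user 'wants to hear' outrank globally popular ones.
ranked = rank_feed(posts, history, popularity_weight=0.1)
```

Note the feedback: each click appends to `user_history`, which raises the affinity score of similar posts, which earns more clicks, echoing back at the user exactly the kind of thing they want to hear.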

TtW15 #A7 @npseaver @nd_kane @s010n @smwat pic.twitter.com/BjJndzaLz1

— Daniel Rourke (@therourke) April 17, 2015

So to finish I just want to gesture towards a much, much bigger debate that I think we need to have about big data, technocapitalism and its algorithmic agents. To do this I just want to read a short paragraph which, as far as I know, was not written by an algorithm: Surface temperature is projected to rise over the 21st century under all assessed emission scenarios. It is very likely that heat waves will occur more often and last longer, and that extreme precipitation events will become more intense and frequent in many regions. The ocean will continue to warm and acidify, and global mean sea level to rise. [11] This is from a document entitled ‘Synthesis Report for Policy Makers’, drafted by the Intergovernmental Panel on Climate Change – another organisation that relies on a transnational network of computers, sensors and programs capable of modelling atmospheric, chemical and wider environmental processes to collate data on human environmental impact. Ironically then, perhaps the most significant tool we have to understand the world, at present, is big data. Never before has humankind had so much information to help us make decisions, and help us enact changes on our world, our society, and our selves. But the problem is that some of the stories big data has to tell us are too big to be narrated; they are just too big to be palatable. To quote Edmund Berger again: For these reasons we can say that the proper end of postmodernism comes in the gradual realization of the Anthropocene: it promises the death of the narrative of human mastery, while erecting an even grander narrative. If modernism was about victory of human history, and postmodernism was the end of history, the Anthropocene means that we are no longer in a “historical age but also a geological one.
Or better: we are no longer to think history as exclusively human…” [12] I would argue that the ‘grand narratives of legitimation’ Lyotard claimed we left behind in the move to Postmodernity will need to return in some way if we are to manage big data in a meaningful way. Crises such as catastrophic climate change will never be made palatable in the feedback between users, programmers and technocapitalism. Instead, we need to revisit Lyotard’s distinction between the true and the useful. Rather than ask how we can make big data useful for us, we need to ask what grand story we want that data to tell us.

References

[1] Source: www.narrativescience.com, accessed 15/10/14.
[2] Steven Levy, “Can an Algorithm Write a Better News Story Than a Human Reporter?,” WIRED, April 24, 2012, http://www.wired.com/2012/04/can-an-algorithm-write-a-better-news-story-than-a-human-reporter/.
[3] “Steven Poole – On Algorithms,” Aeon Magazine, accessed May 8, 2015, http://aeon.co/magazine/technology/steven-poole-can-algorithms-ever-take-over-from-humans/.
[4] Ibid.
[5] Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge, Theory and History of Literature 10 (Manchester: Manchester University Press, 1992), 50.
[6] Ibid., 51.
[7] Brian Massumi, Parables for the Virtual: Movement, Affect, Sensation (Duke University Press, 2002), 128.
[8] Edmund Berger, “The Anthropocene and the End of Postmodernism,” Synthetic Zero, n.d., http://syntheticzero.net/2015/04/01/the-anthropocene-and-the-end-of-postmodernism/.
[9] Source: www.narrativescience.com, accessed 15/10/14.
[10] Wendy Chun, “Crisis and the Temporality of Networks,” in The Nonhuman Turn, ed. Richard Grusin (Minneapolis: University of Minnesota Press, 2015), 154.
[11] Rajendra K. Pachauri et al., “Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change,” 2014, http://epic.awi.de/37530/.
[12] Berger, “The Anthropocene and the End of Postmodernism.”

Fri, 08 May 2015 04:02:51 -0700 http://machinemachine.net/portfolio/paper-at-theorizing-the-web-synthetic-subjects/
Interview with Domenico Quaranta
http://www.furtherfield.org/features/interviews/interview-domenico-quaranta

Daniel Rourke: At Furtherfield on November 22nd 2014 you launched a Beta version of a networked project, 6PM Your Local Time, in collaboration with Fabio Paris, Abandon Normal Devices and Gummy Industries. #6PMYLT uses Twitter hashtags as a nexus for distributed art happenings. Could you tell us more about the impetus behind the project?

Domenico Quaranta: In September 2012, the Link Art Center launched the Link Point in Brescia: a small project space where, for almost two years, we presented installation projects by local and international artists. The Link Point was, from the beginning, a “dual site”: a space to which we could invite our local audience, but also a set for photographic documentation meant to be distributed online to a global audience. Fabio Paris’ long experience with his commercial gallery – which used the same space for more than 10 years – persuaded us that this was what we had to offer the artists invited. So, the space was reduced to a small cube, white from floor to ceiling, with neon lights and a big logo (a kind of analogue watermark) on the back door. Thinking about this project, and the strong presence of the Link Point logo in all the documentation, we realized that the Link Point was actually not bound to that space: as an abstract, highly formalized space, it could actually be anywhere. Take a white cube and place the Link Point logo in it, and that’s the Link Point.

This realization brought us, on the one hand, to close the space in Brescia and to turn the Link Point into a nomadic, erratic project that can be resurrected from time to time in other places; and, on the other hand, to conceive 6PM Your Local Time. The idea was simple: if exhibition spaces are all more or less similar; if online documentation has become so important to communicate art events to a wider audience, and if people have started perceiving it as not different from primary experience, why not set up an exhibition that takes place in different locations, kept together only by documentation and by the use of the same logo? All the rest came right after, as a natural development from this starting point (and as an adaptation of this idea to reality). Of course, this is a statement as well as a provocation: watching the documentation of the UK Beta Test you can easily realize that exhibition spaces are NOT more or less the same; that attending or participating in an event is different from watching pictures on a screen; that some artworks work well in pictures but many need to be experienced. We want to stress the value of networking and of giving prominence to your network rather than to your individual identity; but if the project also works as a reminder that reality is still different from media representation, it will be successful anyway.

Daniel Rourke: There is something of Hakim Bey’s Temporary Autonomous Zones in your proposal. The idea that geographic, economic and/or political boundaries need no longer define the limits of social collective action. We can criticise Bey’s 1991 text now, because in retrospect the Internet and its constitutive protocols have themselves become a breeding ground for corporate and political concerns, even as technology has allowed ever more distributed methods of connectivity.
You foreground network identity over individual identity in the 6PM YLT vision, yet the distinction between the individuals who create a network and the corporate hierarchies that make that networking possible is less clear. I am of course gesturing towards the use of Twitter as the principal platform of the project, a question that Ruth Catlow brought up at the launch. Do you still believe that TAZs are possible in our hyper-connected, hyper-corporate world?

Domenico Quaranta: In its first, raw conceptualization, 6PM YLT had to come with its own smartphone app, that had to be used both to participate in the project and to access the gallery. The decision to aggregate content published on different social platforms came from the realization that people already had the production and distribution tools required to participate in the action, and were already familiar with some gestures: take a photo, apply a filter, add a hashtag, etc. Of course, we could invite participants and audiences to use some specific, open source social network of our choice, but we prefer to tell them: just use the fucking platform of your choice. We want to facilitate and expand participation, not to reduce it; and we are not interested in adding another layer to the project. 6PM YLT is not a TAZ, it’s just a social game that wants to raise some awareness about the importance of documentation, the power of networks, the public availability of what we do with our phones. And it’s a parasitic tool that, as anything else happening online, implies an entire set of corporate frameworks in order to exist: social networks, browsers, operating systems, internet providers, server farms etc. That said, yes, I think TAZs are still possible. The model of the TAZ was designed for a hyper-connected, hyper-corporate world; they are temporary and nomadic; they exist in interstices for a short time. But I agree that believing in them is mostly an act of faith.

Daniel Rourke: The beta-tested, final iteration of 6PM YLT will be launched in the summer of 2015. How will you be rolling out the project in the forthcoming months? How can people get involved?

Domenico Quaranta: 6PM Your Local Time has been conceived as an opportunity, for the organizing subject, to bring to visibility its network of relationships and to improve it. It’s not an exhibition with a topic, but a social network turned visible. To put it simply: our identity is defined not just by what we do, but also by the people we hang out with. After organizing 6PM Your Local Time Europe, the Link Art Center would like to take a step back and to offer the platform to other organizing subjects, to allow them to show off their networks as well. So, what we are doing now is preparing a long list of institutions, galleries and artists we made love with in the past or we’d like to make love with in the future, and inviting them to participate in the project. We won’t launch an open call, but we have already made the event public, saying that if anyone is interested in participating, they are welcome to submit a proposal. We won’t accept just anybody, but we would be happy to get in touch with people we didn’t know. After finalizing the list of participants, we will work on all the organizational stuff: basically informing them about the basic rules of the game, gathering information about the events, answering questions, etc. On the other hand, we of course have to work on the presentation. While every participant presents an event of her choice, the organizer of a 6PM Your Local Time event has to present the platform event to its local audience, as an ongoing installation / performance. We are from Brescia, Italy, and that’s where we will make our presentation. We made an agreement with MusicalZOO, a local festival of art and electronic music, in order to co-produce the presentation and have access to their audience. This is what determined the date of the event in the first place.
Since the festival takes place outdoors during the summer, we are working with them on designing a temporary office where we can coordinate the event, stay in touch with the participants and talk with the audience, and on a video installation in which the live stream of pics and videos will be displayed. Since we are expecting participants from Portugal to the Russian Federation, the event will start around 5 PM, and will follow the various opening events up to late night. One potential reference for this kind of presentation may be those (amazing) telecommunication projects that took place in the Eighties: Robert Adrian’s The World in 24 Hours, organized at Ars Electronica in 1982; the Planetary Network set up in 1986 at the Venice Biennale; and even Nam June Paik’s satellite communication project Good Morning Mr Orwell (1984).

Left to Right – Enrico Boccioletti, Kim Asendorf, Ryder Ripps, Kristal South, Evan Roth

Daniel Rourke: Your exhibition Unoriginal Genius, featuring the work of 17 leading net and new media artists, was the last project to be hosted in the Carroll/Fletcher Project Space (closing November 22nd, 2014). Could you tell us more about the role you consider ‘genius’ plays in framing contemporary art practice?

Domenico Quaranta: The idea of genius still plays an important role in Western culture, and not just in the field of art. Whether we are talking about the Macintosh, Infinite Jest, a space trip or Nymphomaniac, we are always celebrating an individual genius, even if we know perfectly well that there is a team and a concerted action behind each of these things. Every art world is grounded in the idea that there are gifted people who, provided specific conditions, can produce special things that are potentially relevant for anybody. This is not a problem in itself – what’s problematic are some corollaries to our traditional idea of genius – namely “originality” and “intellectual property”.
The first claims that a good work of creation is new and doesn’t depend on previous work by others; the second claims that an original work belongs to the author. In my opinion, creation never worked this way, and I’m totally unoriginal in saying this: hundreds of people, before and alongside me, have said that creating consists in taking chunks of available material and assembling them in ways that, in the best situation, allow us to take a small step forward from what came before. But in the meantime, entire legal systems have been built upon such bad beliefs; and what’s happening now is that, while on the one hand the digitalization of the means of production and dissemination allows us to look at this process with unprecedented clarity, on the other hand these regulations have evolved in such a way that they may eventually slow down or stop the regular evolution of culture, which is based on the exchange of ideas. We – and creators in particular – have to fight against this situation. But Unoriginal Genius shouldn’t be read in such an activist way. It is just a small attempt to show how the process of creation works today, in the shared environment of a networked computer, and to bring this in front of a gallery audience.

Left to Right – Kim Asendorf, Ryder Ripps, Kristal South, Evan Roth

Daniel Rourke: So much online material ‘created’ today is free-flowing and impossible to trace back to an original author, yet the tendency to attribute images, ideas or ‘works’ to an individual still persists – as it does in Unoriginal Genius. I wonder whether you consider some of the works in the show as more liberated from authorial constraints than others? That is, what are the works that appear to make themselves, floating and mutating regardless of particular human (artist) intentions?

Domenico Quaranta: Probably Museum of the Internet is the one that best fits your description.
Everybody can contribute anonymously to it by just dropping images on the webpage; the authors’ names are not available on the website, and there’s no link to their homepages. It’s so simple, so necessary and so pure that one may think that it always existed out there in some way or another. And in a way it did, because the history of the internet is full of projects that invite people to do more or less the same.

Left to Right – Brout & Marion, Gervais & Magal, Sara Ludy

Daniel Rourke: 2014 was an exciting year for the recognition of digital art cultures, with the appointment of Dragan Espenschied as lead Digital Conservator at Rhizome, the second Paddles On! auction of digital works in London, names like Hito Steyerl and Ryan Trecartin moving up ArtReview’s power list, and projects like Kenneth Goldsmith’s ‘Printing out the Internet’ highlighting the increasing ubiquity – and therefore arguable fragility – of web-based cultural aggregation. I wondered what you were looking forward to in 2015 – apart from 6PM YLT of course. Where would you like to see the digital/net/new media arts 12 months from now?

Domenico Quaranta: On the moon, of course! Joking aside: I agree that 2014 was a good year for the media arts community, as part of a general positive trend over the last few years. Other highlights, in no particular order: the September 2013 issue of Artforum, on “Art and Media”, and the discussion sparked by Claire Bishop’s essay; Cory Arcangel discovering and restoring Andy Warhol’s lost digital files from floppy disks; Ben Fino-Radin becoming digital conservator at MoMA, New York; JODI winning the Prix Net Art; the Barbican doing a show on the Digital Revolution with Google. Memes like post internet, post digital and the New Aesthetic had negative side effects, but they helped establish digital culture in the mainstream contemporary art discourse, and bring to prominence some artists formerly known as net artists.
In 2015, the New Museum Triennial will be curated by Lauren Cornell and Ryan Trecartin, and DIS has been announced as curator of the 9th Berlin Biennale in 2016. All this looks promising, but one thing that I learned from the past is to be careful with optimistic judgements. The 21st century started with a show called 010101: Art in Technological Times, organized by SFMOMA. The same year, net art entered the Venice Biennale, the Whitney organized Bitstreams and Data Dynamics, and the Tate Art and Money Online. Later on, the internet was declared dead, and it took years for the media art community to get some prominence in the art discourse again. The situation now is very different: a lot has been done at all levels (art market, institutions, criticism), and the interest in digital culture and technologies is not (only) the result of hype and of big money flushed by corporations unto museums. But still, where are we, really? The first Paddles On! auction belongs to history because it helped sell the first website ever at auction; the second one mainly sold digital and analogue paintings. Digital Revolution was met with sentences like: “No one could fault the advances in technology on display, but the art that has emerged out of that technology? Well, on this showing, too much of it seems gimmicky, weak and overly concerned with spectacle rather than meaning, or making a comment on our culture.” (The Telegraph) The upcoming New Museum Triennial will include artists like Ed Atkins, Aleksandra Domanovic, Oliver Laric, K-HOLE and Steve Roggenbuck, but Lauren and Ryan did their best to avoid partisanship. There’s no criticism in this statement; actually, I would have done exactly the same, and I’m sure it will be an amazing show that I can’t wait to see. We just shouldn’t expect too much from this show in terms of “digital art recognition”.
So, to put it briefly: I’m sure digital art and culture is slowly changing the arts, and that this revolution will be dramatic; but it won’t take place in 2015.

http://www.6pmyourlocaltime.com/

Wed, 08 Apr 2015 03:57:20 -0700 http://www.furtherfield.org/features/interviews/interview-domenico-quaranta
Four Notes Towards Post-Digital Propaganda | post-digital-research
http://post-digital.projects.cavi.dk/?p=475

“Propaganda is called upon to solve problems created by technology, to play on maladjustments and to integrate the individual into a technological world” (Ellul xvii).

How might future research into digital culture approach a purported “post-digital” age? How might this be understood?

1.

A problem comes from the discourse of ‘the digital’ itself: a moniker which points towards units of Base-2 arbitrary configuration, impersonal architectures of code, massive extensions of modern communication and ruptures in post-modern identity. Terms are messy, and it has never been easy to establish a ‘post’ from something, when pre-discourse definitions continue to hang in the air. As Florian Cramer has articulated so well, ‘post-digital’ is something of a loose, ‘hedge your bets’ term, denoting a general tendency to criticise the digital revolution as a modern innovation (Cramer).

Perhaps it might be aligned with what some have dubbed “solutionism” (Morozov) or “computationalism” (Berry 129; Golumbia 8): the former critiques a Silicon Valley-led ideology oriented towards solving liberalised problems through efficient computerised means; the latter establishes the notion (and the critique thereof) that the mind, and everything associated with it, is inherently computable. In both cases, digital technology is no longer just a business that privatises information, but the business of extending efficient, innovative logic to all corners of society and human knowledge, condemning everything else through a cultural logic of efficiency.

In fact, there is a good reason why ‘digital’ might as well be a synonym for ‘efficiency’. Before any consideration is assigned to digital media objects (i.e. platforms, operating systems, networks), consider the inception of ‘the digital’ as such: that is, information theory. Where information had been a loose, shabby, inefficient method of vagueness specific to various mediums of communication, Claude Shannon compressed all forms of communication into a universal system with absolute mathematical precision (Shannon). Once information became digital, the conceptual leap of determined symbolic logic was set into motion, and with it, the ‘digital’ became synonymous with an ideology of effectivity. No longer would miscommunication be subject to human finitude, nor to matters of distance and time, but only to the limits of entropy and the matter of automating messages through the support of alternating ‘true’ or ‘false’ relay systems.
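Shannon’s compression of communication can be glimpsed in miniature: the entropy of a message fixes the average number of binary (‘true’/‘false’) digits needed per symbol, regardless of the medium carrying it. A minimal sketch in Python, where the example messages are arbitrary illustrations rather than Shannon’s own:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average bits per symbol needed to encode the message's symbol distribution."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A fair binary alternation carries exactly one bit per symbol;
# a message with no alternatives carries none.
print(shannon_entropy("0101010101"))  # 1.0
print(shannon_entropy("aaaaaaaaaa"))  # -0.0
```

A symbol is informative only in virtue of the alternatives it excludes, which is why the single-symbol message measures zero.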

However, it would be quite difficult to envisage any ‘post-computational’ break from such discourses – and with good reason: Shannon’s breakthrough was only systematically effective through the logic of computation. So the old missed encounter goes: Shannon presupposed Alan Turing’s mathematical idea of computation to transmit digital information, and Turing presupposed Shannon’s information theory to understand what his Universal Turing Machines were actually transmitting. The basic theories of both have not changed, but the materials affording greater processing power, extensive server infrastructure and larger storage space have simply increased the means for these ideas to proliferate, irrespective of what Turing and Shannon actually thought of them (some historians even speculate that Turing may have made the link between information and entropy two years before Bell Labs did) (Good).

Thus a ‘post-digital’ reference point might encompass the historical acknowledgment of Shannon’s digital efficiency and Turing’s logic, but, by the same measure, open up a space for critical reflection on how such efficiencies have transformed not only work, life and culture but also artistic praxis and aesthetics. This is not to say that digital culture is reducibly predicated on efforts made in computer science, but instead fully acknowledges these structures and accounts for how ideologies propagate reactionary attitudes and beliefs within them, whilst restricting other alternatives which do not fit their ‘vision’. Hence, the post-digital ‘task’ set for us nowadays might consist in critiquing digital efficiency and how it has come to work against commonality, despite transforming the majority of Western infrastructure in its wake.

The purpose of these notes is to outline how computation has imparted an unwarranted effect of totalised efficiency, and to label this effect the type of description it deserves: propaganda. The fact that Shannon and Turing had multiple lunches together at Bell Labs in 1943, held conversations and exchanged ideas, but not detailed methods of cryptanalysis (Price & Shannon), provides a nice contextual allegory for how digital informatics strategies fail to be transparent.

But in saying this, I do not mean that companies only use digital networks for propagative means (although that happens), but that the very means of computing a real concrete function is constitutively propagative. In this sense, propaganda resembles a post-digital understanding of what it means to be integrated into an ecology of efficiency, and how technical artefacts are literally enacted as propagative decisions. Digital information often deceives us into accepting its transparency, and into holding it to that account: yet in reality it does the complete opposite, with no given range of judgements available to distinguish manipulation from education, or persuasion from smear. It is the procedural act of interacting with someone else’s automated conceptual principles, embedding pre-determined decisions which not only generate but pre-determine one’s ability to make choices about such decisions, like propaganda.

This might consist in moving from ideological definitions of false consciousness, as an epistemological limit to knowing alternatives within thought, to engaging with real programmable systems which embed such limits concretely, withholding the means to transform them. In other words, propaganda incorporates how ‘decisional structures’ structure other decisions, either conceptually or systematically.

2.

Two years before Shannon’s famous Master’s thesis, Turing published what would be a theoretical basis for computation in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” The focus of the paper was to establish the idea of computation within a formal system of logic, which when automated would solve particular mathematical problems put into function (Turing, An Application). What is not necessarily taken into account is the mathematical context to that idea: for the foundations of mathematics were already precarious, well before Turing outlined anything in 1936. Contra the efficiency of the digital, this is a precariousness built into computation from its very inception: the precariousness of solving all problems in mathematics.
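The ‘idea of computation within a formal system of logic’ can be made concrete in a few lines: a transition table read by a mechanical loop is all a Turing machine is. The interpreter and the bit-flipping machine below are illustrative inventions of mine, not Turing’s own notation:

```python
# A minimal Turing machine interpreter: the 'systematic method' encoded
# as a transition table mapping (state, symbol) -> (state, write, move).
def run(tape, rules, state="start"):
    cells = dict(enumerate(tape))  # sparse tape; blank cells read as " "
    pos = 0
    while state != "halt":
        symbol = cells.get(pos, " ")
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip()

# An example machine: flip every bit of the input, then halt on blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", " "): ("halt", " ", "R"),
}
print(run("0110", flip))  # 1001
```

The machine never exercises judgement; it only ever consults the table, which is the whole point of an ‘effective procedure’.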

The key word of that paper, its key focus, was the Entscheidungsproblem, or decision problem. Originating from David Hilbert’s mathematical school of formalism, ‘decision’ means something more rigorous than the sorts of decisions in daily life. It really means a ‘proof theory’: how analytic problems in number theory and geometry could be formalised, and thus efficiently solved (Hilbert 3). Solving a theorem is simply finding a provable ‘winning position’ in a game. As with Shannon, ‘decision’ is what happens when an automated system of function is constructed in such a sufficiently complex way that an algorithm can always ‘decide’ a binary, yes or no answer to a mathematical problem, when given an arbitrary input, in a finite amount of time. It does not require ingenuity, intuition or heuristic gambles, just a combination of simple consistent formal rules and a careful avoidance of contradiction.
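A ‘decision’ in this sense can be sketched directly: an effective procedure that, for any arbitrary input, always halts with a definite yes or no. Primality is a hypothetically chosen example of such a decidable property (the function name is mine, not Hilbert’s):

```python
# A decision procedure in Hilbert's sense: for every input it always
# halts with a binary yes/no, by simple rule-following with no intuition.
def decides_prime(n: int) -> bool:
    if n < 2:
        return False
    # Trial division up to sqrt(n) always terminates, so this always decides.
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

print(decides_prime(7), decides_prime(9))  # True False
```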

The two key words there are ‘always’ and ‘decide’. They capture the progressive end-game of twentieth-century mathematicians who, like Hilbert, sought a simple totalising conceptual system to decide every mathematical problem and so work towards absolute knowledge. All Turing had to do was make explicit Hilbert’s implicit computational treatment of formal rules, manipulate symbol strings and automate them using an ’effective’ or “systematic method” (Turing, Solvable and Unsolvable Problems 584) encoded into a machine. This is what Turing’s thesis meant (discovered independently of Alonzo Church’s equivalent thesis (Church)): any systematic algorithm solved by a mathematical theorem can be computed by a Turing machine (Turing, An Application), or in Robin Gandy’s words, “[e]very effectively calculable function is a computable function” (Gandy).

Thus effective procedures decide problems, and they resolve puzzles providing winning positions (like theorems) in the game of functional rules and formal symbols. In Turing’s words, “a systematic procedure is just a puzzle in which there is never more than one possible move in any of the positions which arise and in which some significance is attached to the final result” (Turing, Solvable and Unsolvable Problems 590). The significance, or the winning position, becomes the crux of the matter for the decision: what puzzles or problems are to be decided? This is what formalism attempted to do: encode everything through the regime of formalised efficiency, so that all mathematically inefficient problems are, in principle, ready to be solved. Programs are simply proofs: if it could be demonstrated mathematically, it could be automated.

In 1936, Turing showed how some complex mathematical concepts of effective procedures could simulate the functional decisions of all the other effective procedures (such as the Universal Turing Machine). Ten years later, Turing and John von Neumann would independently show how physical general-purpose computers offered the same thing, and from that moment on, efficient digital decisions manifested themselves in the cultural application of physical materials. Before Shannon’s information theory offered the precision of transmitting information, Hilbert and Turing developed the structure of its transmission in the underlying regime of formal decision.

Yet, there was also a non-computational importance here, for Turing was also fascinated by what decisions couldn’t compute. His thesis was quite precise, so as to elucidate that if a mathematical problem could not be proved, a computer was not of any use. In fact, the entire focus of his 1936 paper, often neglected by Silicon Valley cohorts, was to show that Hilbert’s particular decision problem could not be solved. Unlike Hilbert, Turing was not interested in using computation to solve every problem, but as a curious endeavour for surprising intuitive behaviour. Most important of all, Turing’s halting, or printing, problem was influential precisely because it was undecidable; a decision problem which couldn’t be decided.

We can all picture the halting problem, even obliquely. Picture the frustrated programmer or mathematician staring at their screen, waiting to know whether an algorithm will halt and spit out a result, or provide no answer. The computer itself has already determined the answer for us; the programmer just has to know when to give up. But this is a myth, inherited with a bias towards human knowledge, and a demented understanding of machines as infinite calculating engines, rather than concrete entities of decision. For reasons that escape word space, Turing didn’t understand the halting problem in this way: instead he understood it as a contradictory example of computational decisions failing to decide on each other, on the account that there could never be one totalising decision or effective procedure. There is no guaranteed effective procedure to decide on all the others, and any attempt to build one (or invest in a view which might help build one) either has too much investment in absolute formal reason, or ends up with ineffective procedures.
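Turing’s contradiction can be paraphrased in code: assume a total decider exists, then build a program that does the opposite of whatever the decider predicts about it, and the assumption collapses. This is a reasoning sketch rather than a working program, so the hypothetical decider is deliberately left unimplemented:

```python
def halts(program, arg):
    """Hypothetical total decider: True iff program(arg) halts.
    Assumed to exist only for the sake of contradiction; no such
    total function can actually be written."""
    raise NotImplementedError("no total halting decider exists")

def contrary(program):
    """Do the opposite of whatever the decider predicts about us."""
    if halts(program, program):
        while True:          # decider said we halt, so loop forever
            pass
    return "halted"          # decider said we loop, so halt at once

# If halts(contrary, contrary) returned True, contrary(contrary) loops: wrong.
# If it returned False, contrary(contrary) halts: wrong again.
# Either way, the 'always deciding' effective procedure contradicts itself.
```

This is the sense in which computational decisions fail to decide on each other: one decision procedure cannot totalise all the rest.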

Undecidable computation might be looked at as a dystopian counterpart against the efficiency of Shannon’s ‘digital information’ theory. A base 2 binary system of information resembling one of two possible states, whereby a system can communicate with one digit, only in virtue of the fact that there is one other digit alternative to it. Yet the perfect transmission of that information, is only subject to a system which can ‘decide’ on the digits in question, and establish a proof to calculate a success rate. If there is no mathematical proof to decide a problem, then transmitting information becomes problematic for establishing a solution.

3.

What has become clear is that our world is no longer simply accountable to human decision alone. Decisions are no longer limited to the borders of human decisions and ‘culture’ is no longer simply guided by a collective whole of social human decisions. Nor is it reducible to one harmonious ‘natural’ collective decision which prompts and pre-empts everything else. Instead we seem to exist in an ecology of decisions: or better yet decisional ecologies. Before there was ever the networked protocol (Galloway), there was the computational decision. Decision ecologies are already set up before we enter the world, implicitly coterminous with our lives: explicitly determining a quantified or bureaucratic landscape upon which an individual has limited manoeuvrability.

Decisions are not just digital, they can be as continuous as computers are: yet decisions are at their most efficient when digitally transferred. Decisions are everywhere and in everything. Look around. We are constantly told by governments and states that they are making tough decisions in the face of austerity. CEOs and Directors make tough decisions for the future of their companies, and ‘great’ leaders are revered for being ‘great decisive leaders’: not just making decisions quickly and effectively, but also settling issues and producing definite results.

Even the word ‘decide’ comes from the Latin ‘decidere’, which means to determine something and ‘to cut off’. Algorithms in financial trading know not of value, but of decision: whether something is marked by profit or loss. Drones know not of human ambiguity, but can only decide between kill and ignore, cutting off anything in-between. Constructing a system which decides between one of two digital values, even repeatedly, means cutting off and excluding all other possible variables, leaving a final result at the end of the encoded message. Making a decision, or building a system to decide a particular ideal or judgement, must force other alternatives outside of it. Decisions are always-already embedded into the framework of digital action, always already deciding what is to be done, how it can be done or what is threatening to be done. It would make little sense to suggest that these entities ‘make decisions’ or ‘have decisions’; it would be better to say that they are decisions, and that ecologies are constitutively constructed by them.

The importance of neo-liberal digital transmissions is not that they are innovative, or worthy of a zeitgeist break: it is that they demonstrably decide problems whose predominant significance is beneficial for self-individual efficiency and accumulation of capital. Digital efficiency is simply about the expansion of automating decisions, and about what sort of formalised significances must be propagated to solve social and economic problems, which creates new problems in a vicious circle.

The question can no longer simply be ‘who decides’, but now, ‘what decides?’ Is it the cafe menu board, the dinner party etiquette, the NASDAQ share price, Google PageRank, railway network delays, unmanned combat drones, the newspaper crossword, the javascript regular expression or the differential calculus? It’s not quite right to say that algorithms rule the world, whether in algo-trading or in data capture; rather there is the uncomfortable realisation that real entities are built to determine provable outcomes time and time again: most notably ones for accumulating profit and extracting revenue from multiple resources.

One pertinent example: consider George Dantzig’s simplex algorithm. This effective procedure (whose origins began in multidimensional geometry) can always decide solutions for large-scale optimisation problems which continually affect multi-national corporations. The simplex algorithm’s proliferation and effectiveness has been critical since its first commercial application in 1952, when Abraham Charnes and William Cooper used it to decide how best to optimally blend four different petroleum products at the Gulf Oil Company (Elwes 35; Gass & Assad 79). Since then the simplex algorithm has had years of successful commercial use, deciding almost everything from bus timetables and work shift patterns to trade shares and Amazon warehouse configurations. According to the optimisation specialist Jacek Gondzio, the simplex algorithm runs at “tens, probably hundreds of thousands of calls every minute” (35), always deciding the most efficient method of extracting optimisation.
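The kind of problem the simplex algorithm decides can be shown with a toy blending programme. The numbers below are invented for illustration, and instead of simplex pivoting this sketch brute-forces the feasible corner points of the constraint region, which is where an optimum of a linear programme must lie (the geometric insight simplex exploits):

```python
from itertools import combinations

# Toy blending LP: maximise profit 3x + 2y subject to
#   x + y <= 4 (capacity),  x <= 3 (supply),  x >= 0,  y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 3), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Corner point where two constraint boundaries meet (Cramer's rule)."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel boundaries never meet
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # (3.0, 1.0): blend 3 units of x and 1 of y for profit 11
```

Simplex avoids this exhaustive enumeration by pivoting from vertex to adjacent vertex, which is why it scales to the industrial problems described above.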

In contemporary times, nearly all decision ecologies work in this way, accompanying and facilitating neo-liberal methods of self-regulation and processing all resources through a standardised efficiency: from bureaucratic methods of formal standardisation, banal forms ready to be analysed by one central system, to big-data initiatives and simple procedural methods of measurement and calculation. The technique of decision is a propagative method of embedding knowledge, optimisation and standardisation techniques in order to solve problems, together with an urge to solve the most unsolvable ones, including us.

Google do not build into their services an option to pay for the privilege of protecting privacy: the entire point of providing a free service which purports to improve daily life is that it primarily benefits the interests of shareholders and extends commercial agendas. James Grimmelmann gave a heavily detailed exposition on Google’s own ‘net neutrality’ algorithms and how biased they happen to be. In short, PageRank does not simply decide relevant results, it decides visitor numbers, and he concluded on this note:

With disturbing frequency, though, websites are not users’ friends. Sometimes they are, but often, the websites want visitors, and will be willing to do what it takes to grab them (Grimmelmann 458).
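Grimmelmann’s point, that PageRank decides visitor numbers rather than neutrally reporting relevance, is easier to see against the bare mechanics of the algorithm. Below is a toy power-iteration sketch; the link graph, damping factor and iteration count are my illustrative choices, not Google’s production parameters:

```python
# Minimal PageRank by power iteration on a toy link graph.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}

def pagerank(links, damping=0.85, iterations=50):
    n = len(links)
    rank = {page: 1 / n for page in links}
    for _ in range(iterations):
        # Everyone starts with the 'random surfer' teleport share...
        new = {page: (1 - damping) / n for page in links}
        # ...then each page divides its current rank among its outlinks.
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new
    return rank

ranks = pagerank(links)
# "c" collects links from both "a" and "b", so it is 'decided' most relevant.
```

The decision is entirely structural: a page’s visibility is a provable outcome of the link topology and the chosen parameters, not of any judgement about content.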

If the post-digital stands for the self-criticality of digitalisation already underpinning contemporary regimes of digital consumption and production, then its saliency lies in understanding the logic of decision inherent to such regimes. The reality of the post-digital shows that machines remain curiously efficient whether we relish in cynicism or not. Such regimes of standardisation and determined results were already ‘mistakenly built in’ to the theories which developed digital methods and means, irrespective of what computers can or cannot compute.

4.

Why then should such post-digital actors be understood as instantiations of propaganda? The familiarity of propaganda is manifestly evident in religious and political acts of ideological persuasion: brainwashing, war activity, political spin, mind control techniques, subliminal messages, political campaigns, cartoons, belief indoctrination, media bias, advertising or news reports. A definition of propaganda might follow from all of these examples: namely, the systematic social indoctrination of biased information that persuades the masses to take action on something which is neither beneficial to them, nor in their best interests. Or, as Peter Kenez writes, propaganda is “the attempt to transmit social and political values in the hope of affecting people’s thinking, emotions, and thereby behaviour” (Kenez 4). Following Stanley B. Cunningham’s watered-down definition, propaganda might also denote a helpful and pragmatic “shorthand statement about the quality of information transmitted and received in the twentieth century” (Cunningham 3).

But propaganda isn’t as clear as this general definition makes out: in fact what makes propaganda studies such a provoking topic is that nearly every scholar agrees that no stable definition exists. Propaganda moves beyond simple ‘manipulation’ and ‘lies’, or the derogatory, jingoistic representation of an unsubtle mood – propaganda is as much about the paradox of constructing truth, and the irrational spread of emotional pleas, as it is about endorsing rational reason. As the master propagandist William J. Daugherty wrote:

It is a complete delusion to think of the brilliant propagandist as being a professional liar. The brilliant propagandist […] tells the truth, or that selection of the truth which is requisite for his purpose, and tells it in such a way that the recipient does not think that he is receiving any propaganda…. (Daugherty 39).

Propaganda, like ideology, works by being inherently implicit and social. In the same way that post-ideology apologists ignore their symptom, propaganda is also ignored. It isn’t to be taken as a shadowy fringe activity, blown apart by the democratising fairy-dust of ‘the Internet’. As many others have noted, the purported ‘decentralising’ power of online networks offers new methods for propagative techniques, or ‘spinternet’ strategies, evident in China (Brady). Iran’s recent investment in video game technology only makes sense when you discover that 70% of Iran’s population are under 30 years of age, underscoring a suitable contemporary method of dissemination. Similarly, in 2011 the New York City video game developer Kuma Games was mired in controversy when it was discovered that an alleged CIA agent, Amir Mirza Hekmati, had been recruited to make an episodic video game series intending to “change the public opinion’s mindset in the Middle East” (Tehran Times). The game in question, Kuma\War (2006 – 2011), was a free-to-play first-person shooter series, delivered in episodic chunks, the format of which attempted to simulate biased re-enactments of real-life conflicts shortly after they reached public consciousness.

Despite his unremarkable leanings towards Christian realism, Jacques Ellul famously updated propaganda’s definition as the end product of what he previously lamented as ‘technique’. Instead of viewing propaganda as a highly organised systematic strategy for extending the ideologies of peaceful warfare, he understood it as a general social phenomenon in contemporary society.

Ellul outlined two types: political and sociological propaganda. Political propaganda involves governmental, administrative techniques which intend to directly change the political beliefs of an intended audience. By contrast, sociological propaganda is the implicit unification of involuntary public behaviour which creates images, aesthetics, problems and stereotypes, the purpose of which is neither explicitly direct nor overtly militaristic. Ellul argues that sociological propaganda exists “in advertising, in the movies (commercial and non-political films), in technology in general, in education, in the Reader’s Digest; and in social service, case work, and settlement houses” (Ellul 64). It is linked to what Ellul called “pre” or “sub-propaganda”: that is, an imperceptible persuasion, silently operating within one’s “style of life” or permissible attitude (63). Anticipating Louis Althusser’s Ideological State Apparatuses (Althusser 182) by nearly ten years, Ellul defines it as “the penetration of an ideology by means of its sociological context” (63). Sociological propaganda is inadequate for decisive action, paving the way for political propaganda – its strengthened explicit cousin – once the former’s implicitness needs to be transformed into the latter’s explicitness.

In a post-digital world, such implicitness no longer gathers wartime spirits, but instead propagates a neo-liberal way of life that is individualistic, wealth driven and opinionated. Ellul’s most powerful assertion is that ‘facts’ and ‘education’ are part and parcel of the sociological propagative effect: nearly everyone faces a compelling need to be opinionated and we are all capable of judging for ourselves what decisions should be made, without at first considering the implicit landscape from which these judgements take place. One can only think of the implicit digital landscape of Twitter: the archetype for self-promotion and snippets of opinions and arguments – all taking place within Ellul’s sub-propaganda of data collection and concealment. Such methods, he warns, will have “solved the problem of man” (xviii).

But information is of relevance here, and propaganda is only effective within a social community when it offers the means to solve problems using the communicative purview of information:

Thus, information not only provides the basis for propaganda but gives propaganda the means to operate; for information actually generates the problems that propaganda exploits and for which it pretends to offer solutions. In fact, no propaganda can work until the moment when a set of facts has become a problem in the eyes of those who constitute public opinion (114).

]]>
Wed, 11 Dec 2013 15:42:45 -0800 http://post-digital.projects.cavi.dk/?p=475
<![CDATA[Artist Profile: Erica Scourti]]> https://rhizome.org/editorial/2013/oct/08/artist-profile-erica-scourti/#new_tab

The latest in a series of interviews with artists who have developed a significant body of work engaged (in its process, or in the issues it raises) with technology. See the full list of Artist Profiles here.

Daniel Rourke: Your recent work, You Could’ve Said, is described as “a Google keyword confessional for radio.” I’ve often considered your work as having elements of the confession, partly because of the deeply personal stance you perform—addressing us, the viewer or listener, in a one-on-one confluence—but also through the way your work hijacks and exposes the unseen, often algorithmic, functions of social and network media. You allow Google keywords to parasitize your identity and in turn you apparently “confess” on Google’s behalf. Are you in search of redemption for your social-media self? Or is it the soul of the algorithm you wish to save?

Erica Scourti: Or maybe the algorithm and social media soul are now so intertwined and interdependent that it makes little sense to even separate the two, in an unlikely fulfillment of Donna Haraway’s cyborg? Instead of having machines built into/onto us (Google glasses notwithstanding), the algorithms which parse our email content, Facebook behaviours, Amazon spending habits, and so on, don’t just read us, but shape us. I’m interested in where agency resides when our desires, intentions and behaviours are constantly being tracked and manipulated through the media and technology that we inhabit; how can we claim to have any “authentic” desires? Facebook’s “About” section actually states, “You can’t be on Facebook without being your authentic self,” and yet this is a self that must fit into the predetermined format and is mostly defined by its commercial choices (clothing brands, movies, ice cream, whatever). And those choices are increasingly influenced by the algorithms through the ambient, personalized advertising that surrounds us.

So in You Could’ve Said, which is written entirely in an instrumentalised form of language, i.e. Google’s AdWords tool, I’m relaying the impossibility of having an authentic feeling, or even a first-hand experience, despite the seemingly subjective, emotional content and tone. Google search stuff is often seen as reflective of a kind of cute “collective self” (hey, we all want to kill our boyfriends sometimes!) but perhaps it’s producing as much as reflecting us. It’s not just that everything’s already been said, and can be commodified, but that the devices we share so much intimate time with are actively involved in shaping what we consider to be our “selves,” our identities. And yet, despite being entirely mediated, my delivery is “sincere” and heartfelt; I’m really interested in the idea of sincere, but not authentic. I think it’s the same reason spambots can have such unexpected pathos; they seem to “express” things in a sincere way, which suggests some kind of “soul” at work there, or some kind of agency, and yet they totally lack interiority, or authenticity. In this and other work of mine (especially Life in AdWords) dissonance is produced by my apparent misrecognition of the algorithmically produced language as my own, mistaking the machine lingo for a true expression of my own subjectivity. Which is not to say that there is some separate, unmediated self that we could access if only we would disconnect our damn gadgets for a second, but the opposite—that autobiography, which my work clearly references, can no longer be seen as a narrative produced by some sort of autonomous subject, inseparable from the technology it interacts with. Also, autobiography often involves a confessional, affective mode, and I’m interested in how this relates to the self-exposure which the attention economy seems to encourage—TMI can secure visibility when there’s not enough attention to go round.
With the Google confessional, I’m enacting an exposure of my flaws and vulnerabilities and while it’s potentially “bad” for me (i.e. my mediated self) since you might think I’m a loser, if you’re watching, then it’s worth it, since value is produced simply through attention-retention. Affective vitality doesn’t so much resist commodification as actively participate within it…

DR: You mention agency. When it comes to the algorithms that drive the current attention economy I tend to think we have very little. Active participation is all well and good, but the opposite—an opting out, rather than a passivity—feels increasingly impossible. I am thinking about those reCaptcha questions we spend all our time filling in. If I want to access my account and check the recommendations it has this week, I’m required to take part in this omnipresent, undeniably clever, piece of crowd-sourcing. Alan Turing’s predictions of a world filled with apparently intelligent machines have come true, except it’s the machines now deciding whether we are human or not.

ES: Except of course—stating the obvious here—it’s just carrying out the orders another human instructed it to, a mediated form of gatekeeping that delegates responsibility to the machine, creating a distance from the entirely human, social, political etc. structure that has deemed it necessary (a bit like drones then?). I’m very interested also in the notion of participation as compulsory—what Zizek calls the “You must, because you can” moral imperative of consumerism—especially online, not just at the banal level (missing out on events, job opportunities, interesting articles and so on if you’re not on Facebook) but because your actions necessarily feed back into the algorithms tracking and parsing our behaviours. And even opting out becomes a choice that positions you within a particular demographic (more likely to be vegetarian, apparently). Also, this question of opting out seems to recur in conversations around art made online, in a way it doesn’t for artists working with traditional media—like, if you’re being critical of it, why not go make your own Facebook, why not opt out?
My reasoning is that I like to work with widely used technology, out of an idea that the proximity of these media to mainstream, domestic and wider social contexts makes the work more able to reflect on its sociopolitical implications, just as some video artists working in the 80s specifically engaged with TV as the main mediator of public consciousness. Of course some say this is interpassivity, just feebly participating in the platforms without making any real change, and I can understand that criticism. Now that coded spaces and ubiquitous computing are a reality of the world—and power structures—we inhabit, I do appreciate artists who can work with code and software (in a way that I can’t) and use their deeper understanding of digital infrastructure to reflect critically on it.

DR: You’ve been engaged in a commission for Colm Cille’s Spiral, sending personal video postcards to anyone who makes a request. Your interpretation of the “confessional” mode seems in this piece to become very human-centric again, since the work is addressed specifically at one particular individual. How has this work been disseminated, and what does your approach have to do with “intimacy”?

ES: I’ve always liked Walter Benjamin’s take on the ability of mediating technologies to traverse spatial distances, bringing previously inaccessible events within touching distance. With this project, I wanted to heighten this disembodied intimacy by sending unedited videos shot on my iPhone, a device that’s physically on me at all times, directly to the recipients’ inbox. So it’s not just “sharing” but actually “giving” them a unique video file gift, which only they see, positioning the recipient as a captive audience of one, unlike on social media where you have no idea who is watching or who cares.
But also, I asked them to “complete” the video by adding its metadata, which puts them on the spot—they have to respond, instead of having the option to ignore me—and also extracting some labor in return, which is exactly what social media does: extracting our affective and attentive labor, supposedly optionally, in exchange for the gift of the free service. The metadata—tags, title and optionally a caption—became the only viewable part of the exchange, since I used it to annotate a corresponding black, “empty” video on Instagram, also shared on Twitter and Facebook, so the original content remains private. These blank videos record the creative output of the recipient, while acting as proof of the transaction (i.e. that I sent them a video). They also act as performative objects which will continue to operate online due to their tagging, which connects them to other groups of media and renders them visible—i.e. searchable—online, since search bots cannot as yet “see” video content. I wanted to make a work which foregrounds its own connectedness, both to other images via the hashtags but also to the author-recipients through tagging them on social media. So the process of constantly producing and updating oneself within the restrictive and pre-determined formats of social media platforms, i.e. their desired user behaviours, becomes almost the content of the piece. I also like the idea that hashtag searches on all these platforms, for (let’s say) Greece, will bring up these blank/ black videos (which by the way, involved a little hack, as Instagram will not allow you to upload pre-recorded content and it’s impossible to record a black and silent video…). It’s a tiny intervention into the regime of carefully filtered and cropped life-style depictions that Instagram is best known for. 
It’s also a gesture of submitting oneself to the panoptical imperative to share one’s experience no matter how private or banal, hence using Instagram for its associations with a certain solipsistic self-display; by willingly enacting the production of mediated self on social media I’m exploring a kind of masochistic humour which has some affinities with what Benjamin Noys identified as an accelerationist attitude of “the worse the better.” And yet, by remaining hidden, and not publicly viewable, the public performance of a mediated self is denied.

DR: An accelerationist Social Media artwork would have to be loaded with sincerity, firstly, on the part of the human (artist/performer), but also, in an authentic attempt to utilise the network completely on its terms. Is there something, then, about abundance and saturation in your work? An attempt to overload the panopticon? ES: That’s a very interesting way of putting it. I sometimes relate that oversaturation to the horror vacui of art that springs from a self-therapeutic need, which my work addresses, though it’s less obsessive scribbles, more endless connection, output and flow and semi-ritualistic and repetitive working processes. And in terms of utilizing the network on its own terms, Geert Lovink’s notion of the “natural language hack” (rather than the “deep level” hack) is one I’ve thought about—where your understanding of the social, rather than technical, operation of online platforms gets your work disseminated. For example my project Woman Nature Alone, where I re-enacted stock video which is freely available on my Youtube channel—some of those videos are high on the Google ranking page, so Google is effectively “marketing” my work without me doing anything.  Whether it overloads the panopticon, or just contributes more to the babble, is a pertinent question (as Jodi Dean’s work around communicative capitalism has shown), since if the work is disseminated on commercial platforms like YouTube or Facebook, it operates within a system of value generation which benefits the corporation, involving, as is by now well known, a Faustian pact of personal data in exchange for “free” service. And going back to agency—the mutability of the platforms means that if the work makes use of particular features (suchas YouTube annotations) its existence is contingent on them being continued; since the content and the context are inextricable in situations like this, it would become impossible to display the original work exactly as it was first made and seen. 
Even then, as with Olia Lialina and Dragan Espenschied’s One Terabyte of Kilobyte Age, it would become an archive, which preserves documents from a specific point in the web’s history but cannot replicate the original viewing conditions because all the infrastructure around it has changed completely. So if the platforms—the corporations—control the context and viewing conditions, then artists working within them are arguably at their mercy- and keeping the endless flow alive by adding to it. I’m more interested in working within the flows rather than, as some artists prefer, rejecting the dissemination of their work online. Particularly with moving image work,  I’m torn between feeling that artists’ insistence on certain very specific, usually high quality, viewing conditions for their work bolsters, as Sven Lütticken has argued, the notion of the rarefied auratic art object whose appreciation requires a kind of hushed awe and reverence, while being aware that the opposite—the image ripped from its original location and circulated in crap-res iPhone pics/ videos—is an example of what David Joselit would call image neoliberalism, which sees images as site-less and like any other commodity, to be traded across borders and contexts with no respect for the artist’s intentions. However, I also think that this circulation is becoming an inevitability and no matter how much you insist your video is viewed on zillion lumens projector (or whatever), it will most likely end up being seen by the majority of viewers on YouTube or on a phone screen; I’m interested in how artists (like Hito Steyerl) address, rather than avoid, the fact of this image velocity and spread. DR: Lastly, what have you been working on recently? What’s next? 
ES: I recently did a series of live, improvised performance series called Other People’s Problems direct to people’s desktops, with Field Broadcast, where I read out streams of tags and captions off Tumblr, Instagram and Facebook, randomly jumping to other tags as I went. I’m fascinated by tags—they’re often highly idiosyncratic and personal, as well as acting as connective tissue between dispersed users; but also I liked the improvisation, where something can go wrong and the awkwardness it creates. (I love awkwardness!) Future projects are going to explore some of the ideas this work generated: how to improvise online (when things can always be deleted/ rejigged afterwards), how to embrace the relinquishing of authorial control which I see as integral to the online (or at least social media) experience, and how to work with hashtags/ metadata both as text in its own right and as a tool.   Age: 33 Location: London, Athens when I can manage it How long have you been working creatively with technology? How did you start? 14, 15 maybe, when I started mucking around with Photoshop—I remember scanning a drawing I’d made of a skunk from a Disney tale and making it into a horrendous composition featuring a rasta flag background… I was young. And I’ve always been obsessed with documenting things; growing up I was usually the one in our gang who had the camera—showing my age here, imagine there being one person with a camera—which has given me plenty of blackmail leverage and a big box of tastefully weathered photos that, despite my general frustration with analogue nostalgia, I know I will be carrying around with me for life. Where did you go to school? What did you study? After doing Physics, Chemistry and Maths at school, I did one year of a Chemistry BA, until I realized I wasn’t cut out for lab work (too much like cooking) or what seemed like the black-and-white nature of scientific enquiry. 
I then did an art and design foundation at a fashion college, followed by one year of Fine Art Textiles BA—a nonsensical course whose only redeeming feature was its grounding in feminist theory—before finally entering the second year of a Fine Art BA. For a while this patchy trajectory through art school made me paranoid, until I realised it probably made me sound more interesting than I am. And in my attempt to alleviate the suspicion that there was some vital piece of information I was missing, I also did loads of philosophy diploma courses, which actually did come in handy when back at Uni last year: I recently finished a Masters of Research in moving image art. What do you do for a living or what occupations have you held previously? Do you think this work relates to your art practice in a significant way? At the moment I’m just about surviving as an artist and I’ve always been freelance apart from time done in bar, kitchen, shop (Londoners, remember Cyberdog?) cleaning and nightclub jobs, some of which the passage of time has rendered as amusingly risqué rather than borderline exploitative. After my B.A., I set up in business with the Prince’s Trust, running projects with what are euphemistically known as hard-to-reach young people, making videos, digital art pieces and music videos until government funding was pulled from the sector. I mostly loved this work and it definitely fed into and reflects my working with members of loose groups, like the meditation community around the Insight Time app, or Freecycle, or Facebook friends. I’ve also been assisting artist and writer Caroline Bergvall on and off for a few years, which has been very helpful in terms of observing how an artist makes a life/ living. What does your desktop or workspace look like? I’m just settling into a new space at the moment but invariably, a bit of a mess, a cup of tea, piles of books, and both desktop and workspace are are covered in neon post-it notes. 
Generally I am a paradigmatic post-Fordist flexi worker though: I can and do work pretty much anywhere—to the occasional frustration of friends and family. 

]]>
Tue, 08 Oct 2013 08:30:18 -0700 https://rhizome.org/editorial/2013/oct/08/artist-profile-erica-scourti/
<![CDATA[Artist Profile: Erica Scourti]]> http://rhizome.org/editorial/2013/oct/8/artist-profile-erica-scourti

The latest in a series of interviews with artists who have developed a significant body of work engaged (in its process, or in the issues it raises) with technology. See the full list of Artist Profiles here.   Daniel Rourke: Your recent work, You Could've Said, is described as "a Google keyword confessional for radio." I've often considered your work as having elements of the confession, partly because of the deeply personal stance you perform—addressing us, the viewer or listener, in a one-on-one confluence, but also through the way your work hijacks and exposes the unseen, often algorithmic, functions of social and network media. You allow Google keywords to parasitize your identity and in turn you apparently "confess" on Google's behalf. Are you in search of redemption for your social-media self? Or is it the soul of the algorithm you wish to save? Erica Scourti: Or maybe the algorithm and the social media soul are now so intertwined and interdependent that it makes little sense to even separate the two, in an unlikely fulfillment of Donna Haraway's cyborg? Instead of having machines built into/onto us (Google Glass notwithstanding), the algorithms which parse our email content, Facebook behaviours, Amazon spending habits, and so on, don't just read us, but shape us. I'm interested in where agency resides when our desires, intentions and behaviours are constantly being tracked and manipulated through the media and technology that we inhabit; how can we claim to have any "authentic" desires? Facebook's "About" section actually states, "You can't be on Facebook without being your authentic self," and yet this is a self that must fit into the predetermined format and is mostly defined by its commercial choices (clothing brands, movies, ice cream, whatever). And those choices are increasingly influenced by the algorithms through the ambient, personalized advertising that surrounds us. 
So in You Could've Said, which is written entirely in an instrumentalised form of language, i.e. Google's AdWords tool, I'm relaying the impossibility of having an authentic feeling, or even a first-hand experience, despite the seemingly subjective, emotional content and tone. Google search stuff is often seen as reflective of a kind of cute "collective self" (hey, we all want to kill our boyfriends sometimes!) but perhaps it's producing as much as reflecting us. It's not just that everything's already been said, and can be commodified, but that the devices we share so much intimate time with are actively involved in shaping what we consider to be our "selves," our identities. And yet, despite being entirely mediated, my delivery is "sincere" and heartfelt; I'm really interested in the idea of sincere, but not authentic. I think it's the same reason spambots can have such unexpected pathos; they seem to "express" things in a sincere way, which suggests some kind of "soul" at work there, or some kind of agency, and yet they totally lack interiority, or authenticity. In this and other work of mine (especially Life in AdWords) dissonance is produced by my apparent misrecognition of the algorithmically produced language as my own, mistaking the machine lingo as a true expression of my own subjectivity. Which is not to say that there is some separate, unmediated self that we could access if only we would disconnect our damn gadgets for a second, but the opposite—that autobiography, which my work clearly references, can no longer be seen as a narrative produced by some sort of autonomous subject, inseparable from the technology it interacts with. Also, autobiography often involves a confessional, affective mode, and I'm interested in how this relates to the self-exposure which the attention economy seems to encourage—TMI can secure visibility when there's not enough attention to go round. 
With the Google confessional, I'm enacting an exposure of my flaws and vulnerabilities and while it's potentially "bad" for me (i.e. my mediated self) since you might think I'm a loser, if you're watching, then it's worth it, since value is produced simply through attention-retention. Affective vitality doesn't so much resist commodification as actively participate within it…

DR: You mention agency. When it comes to the algorithms that drive the current attention economy I tend to think we have very little. Active participation is all well and good, but the opposite—an opting out, rather than a passivity—feels increasingly impossible. I am thinking about those reCaptcha questions we spend all our time filling in. If I want to access my account and check the recommendations it has this week, I'm required to take part in this omnipresent, undeniably clever, piece of crowd-sourcing. Alan Turing's prediction of a world filled with apparently intelligent machines has come true, except it's the machines now deciding whether we are human or not. ES: Except of course—stating the obvious here—it's just carrying out the orders another human instructed it to, a mediated form of gatekeeping that delegates responsibility to the machine, creating a distance from the entirely human, social, political etc. structure that has deemed it necessary (a bit like drones then?). I'm very interested also in the notion of participation as compulsory—what Zizek calls the "You must, because you can" moral imperative of consumerism—especially online, not just at the banal level (missing out on events, job opportunities, interesting articles and so on if you're not on Facebook) but because your actions necessarily feed back into the algorithms tracking and parsing our behaviours. And even opting out becomes a choice that positions you within a particular demographic (more likely to be vegetarian, apparently). Also, this question of opting out seems to recur in conversations around art made online, in a way it doesn't for artists working with traditional media—like, if you're being critical of it, why not go make your own Facebook, why not opt out? 
My reasoning is that I like to work with widely used technology, out of an idea that the proximity of these media to mainstream, domestic and wider social contexts makes the work more able to reflect on its sociopolitical implications, just as some video artists working in the 80s specifically engaged with TV as the main mediator of public consciousness. Of course some say this is interpassivity, just feebly participating in the platforms without making any real change, and I can understand that criticism. Now that coded spaces and ubiquitous computing are a reality of the world—and power structures—we inhabit, I do appreciate artists who can work with code and software (in a way that I can't) and use their deeper understanding of digital infrastructure to reflect critically on it. DR: You've been engaged in a commission for Colm Cille's Spiral, sending personal video postcards to anyone who makes a request. Your interpretation of the "confessional" mode seems in this piece to become very human-centric again, since the work is addressed specifically at one particular individual. How has this work been disseminated, and what does your approach have to do with "intimacy"? ES: I've always liked Walter Benjamin's take on the ability of mediating technologies to traverse spatial distances, bringing previously inaccessible events within touching distance. With this project, I wanted to heighten this disembodied intimacy by sending unedited videos shot on my iPhone, a device that's physically on me at all times, directly to the recipients' inbox. So it's not just "sharing" but actually "giving" them a unique video file gift, which only they see, positioning the recipient as a captive audience of one, unlike on social media where you have no idea who is watching or who cares. 
But also, I asked them to "complete" the video by adding its metadata, which puts them on the spot—they have to respond, instead of having the option to ignore me—and also extracts some labor in return, which is exactly what social media does: extracting our affective and attentive labor, supposedly optionally, in exchange for the gift of the free service. The metadata—tags, title and optionally a caption—became the only viewable part of the exchange, since I used it to annotate a corresponding black, "empty" video on Instagram, also shared on Twitter and Facebook, so the original content remains private. These blank videos record the creative output of the recipient, while acting as proof of the transaction (i.e. that I sent them a video). They also act as performative objects which will continue to operate online due to their tagging, which connects them to other groups of media and renders them visible—i.e. searchable—online, since search bots cannot as yet "see" video content. I wanted to make a work which foregrounds its own connectedness, both to other images via the hashtags but also to the author-recipients through tagging them on social media. So the process of constantly producing and updating oneself within the restrictive and pre-determined formats of social media platforms, i.e. their desired user behaviours, becomes almost the content of the piece. I also like the idea that hashtag searches on all these platforms, for (let's say) Greece, will bring up these blank/ black videos (which by the way, involved a little hack, as Instagram will not allow you to upload pre-recorded content and it's impossible to record a black and silent video...). It's a tiny intervention into the regime of carefully filtered and cropped life-style depictions that Instagram is best known for. 
It's also a gesture of submitting oneself to the panoptical imperative to share one's experience no matter how private or banal, hence using Instagram for its associations with a certain solipsistic self-display; by willingly enacting the production of mediated self on social media I'm exploring a kind of masochistic humour which has some affinities with what Benjamin Noys identified as an accelerationist attitude of "the worse the better." And yet, by remaining hidden, and not publicly viewable, the public performance of a mediated self is denied.
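Scourti's "little hack" is left unspecified in the interview. Purely as an illustration of what such a workaround could look like (not the artist's actual method), a black, silent clip can be synthesized with ffmpeg's "lavfi" test sources; the sketch below only builds the command line, and the commented-out final call assumes ffmpeg is installed on the system.

```python
# Illustrative sketch only: one way to synthesize a black, silent video
# clip, of the kind described in the interview. Not the artist's method.
import subprocess

def black_clip_cmd(path: str, seconds: int = 15, size: str = "640x640"):
    """Build an ffmpeg command that produces a black, silent MP4."""
    return [
        "ffmpeg", "-y",
        "-f", "lavfi", "-i", f"color=c=black:s={size}:d={seconds}",  # black frames
        "-f", "lavfi", "-i", "anullsrc=r=44100:cl=stereo",           # silent audio
        "-shortest", path,
    ]

cmd = black_clip_cmd("empty.mp4")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to render; requires ffmpeg on PATH
```

The `lavfi` virtual input device generates streams from filter descriptions, so no camera or pre-recorded file is involved at all.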

DR: An accelerationist Social Media artwork would have to be loaded with sincerity, firstly, on the part of the human (artist/performer), but also, in an authentic attempt to utilise the network completely on its terms. Is there something, then, about abundance and saturation in your work? An attempt to overload the panopticon? ES: That's a very interesting way of putting it. I sometimes relate that oversaturation to the horror vacui of art that springs from a self-therapeutic need, which my work addresses, though it's less obsessive scribbles, more endless connection, output and flow and semi-ritualistic and repetitive working processes. And in terms of utilizing the network on its own terms, Geert Lovink's notion of the "natural language hack" (rather than the "deep level" hack) is one I've thought about—where your understanding of the social, rather than technical, operation of online platforms gets your work disseminated. For example, my project Woman Nature Alone, where I re-enacted stock video which is freely available on my YouTube channel—some of those videos are high on the Google ranking page, so Google is effectively "marketing" my work without me doing anything. Whether it overloads the panopticon, or just contributes more to the babble, is a pertinent question (as Jodi Dean's work around communicative capitalism has shown), since if the work is disseminated on commercial platforms like YouTube or Facebook, it operates within a system of value generation which benefits the corporation, involving, as is by now well known, a Faustian pact of personal data in exchange for "free" service. And going back to agency—the mutability of the platforms means that if the work makes use of particular features (such as YouTube annotations) its existence is contingent on them being continued; since the content and the context are inextricable in situations like this, it would become impossible to display the original work exactly as it was first made and seen. 
Even then, as with Olia Lialina and Dragan Espenschied's One Terabyte of Kilobyte Age, it would become an archive, which preserves documents from a specific point in the web's history but cannot replicate the original viewing conditions because all the infrastructure around it has changed completely. So if the platforms—the corporations—control the context and viewing conditions, then artists working within them are arguably at their mercy, keeping the endless flow alive by adding to it. I'm more interested in working within the flows rather than, as some artists prefer, rejecting the dissemination of their work online. Particularly with moving image work, I'm torn between feeling that artists' insistence on certain very specific, usually high quality, viewing conditions for their work bolsters, as Sven Lütticken has argued, the notion of the rarefied auratic art object whose appreciation requires a kind of hushed awe and reverence, while being aware that the opposite—the image ripped from its original location and circulated in crap-res iPhone pics/ videos—is an example of what David Joselit would call image neoliberalism, which sees images as site-less and like any other commodity, to be traded across borders and contexts with no respect for the artist's intentions. However, I also think that this circulation is becoming an inevitability and no matter how much you insist your video is viewed on a zillion-lumen projector (or whatever), it will most likely end up being seen by the majority of viewers on YouTube or on a phone screen; I'm interested in how artists (like Hito Steyerl) address, rather than avoid, the fact of this image velocity and spread. DR: Lastly, what have you been working on recently? What's next? 
ES: I recently did a series of live, improvised performances called Other People's Problems direct to people's desktops, with Field Broadcast, where I read out streams of tags and captions off Tumblr, Instagram and Facebook, randomly jumping to other tags as I went. I'm fascinated by tags—they're often highly idiosyncratic and personal, as well as acting as connective tissue between dispersed users; but also I liked the improvisation, where something can go wrong and the awkwardness it creates. (I love awkwardness!) Future projects are going to explore some of the ideas this work generated: how to improvise online (when things can always be deleted/ rejigged afterwards), how to embrace the relinquishing of authorial control which I see as integral to the online (or at least social media) experience, and how to work with hashtags/ metadata both as text in its own right and as a tool.

Age: 33

Location: London, Athens when I can manage it

How long have you been working creatively with technology? How did you start?

14, 15 maybe, when I started mucking around with Photoshop—I remember scanning a drawing I'd made of a skunk from a Disney tale and making it into a horrendous composition featuring a rasta flag background... I was young. And I've always been obsessed with documenting things; growing up I was usually the one in our gang who had the camera—showing my age here, imagine there being one person with a camera—which has given me plenty of blackmail leverage and a big box of tastefully weathered photos that, despite my general frustration with analogue nostalgia, I know I will be carrying around with me for life.

Where did you go to school? What did you study?

After doing Physics, Chemistry and Maths at school, I did one year of a Chemistry BA, until I realized I wasn't cut out for lab work (too much like cooking) or what seemed like the black-and-white nature of scientific enquiry. 
I then did an art and design foundation at a fashion college, followed by one year of Fine Art Textiles BA—a nonsensical course whose only redeeming feature was its grounding in feminist theory—before finally entering the second year of a Fine Art BA. For a while this patchy trajectory through art school made me paranoid, until I realised it probably made me sound more interesting than I am. And in my attempt to alleviate the suspicion that there was some vital piece of information I was missing, I also did loads of philosophy diploma courses, which actually did come in handy when back at Uni last year: I recently finished a Masters of Research in moving image art.

What do you do for a living or what occupations have you held previously? Do you think this work relates to your art practice in a significant way?

At the moment I'm just about surviving as an artist and I've always been freelance apart from time done in bar, kitchen, shop (Londoners, remember Cyberdog?) cleaning and nightclub jobs, some of which the passage of time has rendered as amusingly risqué rather than borderline exploitative. After my B.A., I set up in business with the Prince's Trust, running projects with what are euphemistically known as hard-to-reach young people, making videos, digital art pieces and music videos until government funding was pulled from the sector. I mostly loved this work and it definitely fed into and reflects my working with members of loose groups, like the meditation community around the Insight Timer app, or Freecycle, or Facebook friends. I've also been assisting artist and writer Caroline Bergvall on and off for a few years, which has been very helpful in terms of observing how an artist makes a life/ living.

What does your desktop or workspace look like?

I'm just settling into a new space at the moment but invariably, a bit of a mess, a cup of tea, piles of books, and both desktop and workspace are covered in neon post-it notes. 
Generally I am a paradigmatic post-Fordist flexi worker though: I can and do work pretty much anywhere—to the occasional frustration of friends and family. 

]]>
Tue, 08 Oct 2013 07:30:18 -0700 http://rhizome.org/editorial/2013/oct/8/artist-profile-erica-scourti
<![CDATA[THE PIRATE CINEMA]]> http://vimeo.com/67518774

THE PIRATE CINEMA TRANSFORMS FILM TORRENTS INTO ILLICIT INTERACTIVE ART: A CINEMATIC COLLAGE GENERATED BY PEER-TO-PEER NETWORK USERS.

In the context of omnipresent telecommunications surveillance, "The Pirate Cinema" reveals the hidden activity and geography of Peer-to-Peer file sharing. The project is presented as a monitoring room, which shows Peer-to-Peer transfers happening in real time on networks using the BitTorrent protocol. The installation produces an arbitrary cut-up of the files currently being exchanged. User IP addresses and countries are displayed on each cut, depicting the global topology of content consumption and dissemination.

Conception: Nicolas Maigret, 2012-2013
Software development: Brendan Howell
Production: ArtKillArt, La Maison populaire
More info: thepiratecinema.com
More IMG: flickr.com/photos/n1c0la5ma1gr3t/sets/72157633577769570/
Cast: N1C0L45 M41GR3T
Tags: piracy, pirate, hack, cinema, peer-to-peer, P2P, copyright, illicite, file sharing, Torrent, BitTorrent, thepiratebay, download, distributed, network, mashup, collage and cut-up
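For context on the protocol layer the installation taps into: BitTorrent metadata (.torrent files) and tracker messages are serialized with "bencoding". The minimal decoder below is an illustrative sketch of that format only, not part of the project's actual software, whose implementation is not detailed here; the example message and peer names are invented.

```python
# Minimal decoder for bencoding, the serialization format used by
# BitTorrent metadata and tracker responses. Illustrative sketch,
# not a hardened parser.

def bdecode(data: bytes):
    value, _ = _decode(data, 0)
    return value

def _decode(data, i):
    c = data[i:i+1]
    if c == b"i":                      # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i+1:end]), end + 1
    if c == b"l":                      # list: l<items>e
        i += 1
        items = []
        while data[i:i+1] != b"e":
            item, i = _decode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":                      # dict: d<key><value>...e
        i += 1
        d = {}
        while data[i:i+1] != b"e":
            key, i = _decode(data, i)
            val, i = _decode(data, i)
            d[key] = val
        return d, i + 1
    # byte string: <length>:<bytes>
    colon = data.index(b":", i)
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

# Invented example shaped like a tracker-style response listing two peers
msg = b"d8:intervali1800e5:peersl6:peerA46:peerB4ee"
print(bdecode(msg))
```

In the real protocol the `peers` value is usually a compact binary string of IP/port pairs, which is how a monitoring client like the one described above can learn and display peer addresses.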

]]>
Thu, 26 Sep 2013 06:59:48 -0700 http://vimeo.com/67518774
<![CDATA[The Phantom Zone]]> http://rhizome.org/editorial/2013/sep/10/phantom-zone

The boundary between science fiction and social reality is an optical illusion.

Donna Haraway, A Cyborg Manifesto (1991) [1]

This is no fantasy... no careless product of wild imagination. No, my good friends.

The opening lines of Richard Donner's Superman (1978) [2]

In a 1950 film serial entitled Atom Man vs Superman, [3] television executive and evil genius Lex Luthor sends Superman into a ghostly limbo he calls "The Empty Doom." Trapped in this phantom void, Superman's infinite powers are rendered useless, for although he can still see and hear the "real" world his ability to interact with it has all but disappeared. Over the following decades this paraspace [4]—to use Samuel Delany's term for a fictional space, accessed via technology, that is neither within nor entirely separate from the 'real' world—would reappear in the Superman mythos in various forms, beginning in 1961. Eventually dubbed "The Phantom Zone," its back story was reworked substantially, until by the mid-60s it had become a parallel dimension discovered by Superman's father, Jor-El. Once used to incarcerate Krypton's most unsavory characters, The Phantom Zone had outlasted its doomed home world and eventually burst at the seams, sending legions of super-evil denizens raining down onto Earth. Beginning its life as an empty doom, The Phantom Zone was soon filled with terrors prolific enough to make even The Man of Steel fear its existence.

Overseen by story editor Mortimer Weisinger and the unfortunately named artist Wayne Boring, the late 50s and early 60s were a strange time in the Superman universe. The comics suddenly became filled with mutated variants of kryptonite that gave Superman the head of an ant or the ability to read thoughts; with miniature Supermen arriving seconds before their namesake to save the day and steal his thunder; with vast universes of time caught fast in single comic book panels. It was an era of narrative excess wrapped in a tighter, more meticulous and, many would say, repressed aesthetic:

Centuries of epic time could pass in a single caption. Dynasties fell between balloons, and the sun could grow old and die on the turn of a page. It was a toy world, too, observed through the wrong end of a telescope. Boring made eternity tiny, capable of being held in two small hands. He reduced the infinite to fit in a cameo... [5]

The Phantom Zone is one of the least bizarre narrative concepts from what is now known as the Silver Age of D.C. Comics (following on from the more widely celebrated Golden Age). It could be readily understood on a narrative level, and it had a metaphorical dimension as well, one that made conceivable the depths contained in Superman's vast, but ultimately manipulable universe. The Phantom Zone was usually portrayed on a television screen kept safe in one of the many rooms of the Justice League headquarters. It could also be used as a weapon and fired from a portable projection device—the cold, harsh infinity of the Empty Doom blazing into Superman's world long enough to ensnare any character foolish enough to stand in its rays. Whether glimpsed on screen or via projection, then, The Phantom Zone could be interpreted as a metaphor for the moving image.

In comic books, as in the moving image, the frame is the constituent element of narrative. Each page of a comic book is a frame which itself frames a series of frames, so that by altering each panel's size, bleed or aesthetic variety, time and space can be made elastic. Weisinger and Boring's Phantom Zone took this mechanism further, behaving like a weaponized frame free to roam within the comic book world. Rather than manipulating three-dimensional space or the fourth dimension of time, as the comic book frame does, The Phantom Zone opened out onto the existence of other dimensions. It was a comic book device that bled beyond the edge of the page, out into a world in which comic book narratives were experienced not in isolation, but in parallel with the onscreen narratives of the cinema and the television. As such, the device heralded televisual modes of attention.

For his 1978 big-budget movie version of Superman, [6] director Richard Donner cunningly translated The Phantom Zone into something resembling the cinema screen itself. In the film's opening sequence, a crystal surface swoops down from the immense backdrop of space, rendering the despicable General Zod and his cronies two-dimensional as it imprisons them. In the documentary The Magic Behind the Cape, [7] bundled with the DVD release of Superman in 2001, we are given an insight into the technical prowess behind Donner's Phantom Zone. The actors are made to simulate existential terror against the black void of the studio, pressed up against translucent, flesh-like membranes and physically rotated out of sync with the gaze of the camera. Rendering the faux two-dimensional surface of Donner's Phantom Zone believable required all manner of human dimensions to be framed out of the final production. The actors react to causes generated beyond the studio space, the director's commands, or the camera's gaze. They twist and recoil from transformations still to occur in post-production. In a sense, the actors behave as bodies that are already images. With its reliance on post-produced visual effects, the Phantom Zone sequence represents an intermediary stage in the gradual removal of sets, locations, and any 'actual' spatial depths from the film production process. Today, actors must address their humanity to green-screen voids filled in later with CGI, and the indexical relationship between the film image and the events unfolding in front of the lens has been almost entirely shattered. In this Phantom cinema produced after the event, ever-deeper layers of special effects seal actors into a cinematic paraspace. Just as The Phantom Zone of the comic book heralded televisual modes of attention, The Phantom Zone of the 1970s marked a perceptual regime in which the cinematic image was increasingly sealed off from reality by synthetic visual effects.

For Walter Benjamin, writing during cinema's first "Golden Era," the ability of the cinema screen to frame discontinuous times and spaces represented its most profound "truth." Delivered by cinema, Benjamin argued, mechanically disseminated images were actually fracturing the limits of our perceptions, training "human beings in the apperceptions and reactions needed to deal with a vast apparatus whose role in their lives is expanding almost daily." [8] To audiences confined to finite bodies that had never before experienced such juxtapositions, the cinema screen offered an apparently shared experience of illuminated consciousness. Far from inventing this new mode of perception through the "shock-character" of montage, Benjamin believed that cinema spoke of the 'truths' which awaited us beneath the mirage of proletarian experience, truths which would guide us, with utopian fervor, towards an awareness, and eventual control, of what Benjamin called the "new nature":

Not just industrial technology, but the entire world of matter (including human beings) as it has been transformed by that technology. [9]

In short, cinema was less a technology than a new and evolving mode of machinic thought, both generated by and generating the post-industrial subject. Observing the relation between representation and visibility, Jens Andermann notes:

Truth, the truth of representation, crucially depends on the clear-cut separation between the visible and the invisible, the non-objectness of the latter. Truth is the effect of what we could call the catachretic nature of visuality, the way in which the world of visual objects can point to the invisible domain of pure being only by obsessively pointing to itself. [10]

True to the Greek root aisthanesthai, "to perceive," the aesthetic conditions through which The Phantom Zone has been translated frame far more than a supposed fictional void. Called upon to indicate an absolute outside, the unfathomable infinity of another, ghostly, parallel universe, The Phantom Zone instead reiterates the medium of its delivery, whether comic book, television, or cinema, with mirror-like insistency. Such is the power of new technical modes of thought that they often cause us to rethink outmoded media we are so used to as to be unaware of. The Phantom Zone hides the cinematographic image in plain view. Its reappearance and reimagining over the last 60-odd years, in ever newer forms and aesthetic modes, can be read paradigmatically, that is, as a figure that stands in place of, and points towards, shifts, mutations and absolute overturnings in our perceptual apparatus. Its most recent iteration is in the 2013 Superman reboot, Man of Steel, [11] and in particular in a 'viral' trailer distributed on YouTube a few weeks before the film was released. [12] Soaring towards us comes a new mode of machinic thought: a Phantom Zone of unparalleled depth and aesthetic complexity that opens onto a new, digital, 'new nature'.

The General Zod trailer for Man of Steel begins with a static rift that breaks into a visual and aural disarrangement of the phrase, "You are not alone". General Zod's masked face materializes, blended with the digital miasma: a painterly 3D effect that highlights the inherent 'otherness' of where his message originates. The aesthetic is unsettling inasmuch as it is recognizable. We have no doubt as viewers of this 'viral' dispatch as to the narrative meaning of what we are witnessing, namely, a datastream compressed and distributed from a paraspace by an entity very much unlike us. The uncanny significance of the trailer stems more from how very normal the digital miasma feels; from how apprehensible this barrage of noise is to us. Indeed, it is 'other', but its otherness is also somehow routine, foreseeable. The pathogen here is not Zod's message; it is digital technology itself. The glitched aesthetic of the trailer has become so habitual as to herald the passing of digital materiality into the background of awareness. Its mode of dissemination, via the Trojan Horse of YouTube, is just as invisible to us during the regular shifts we make between online/offline modes of communication. The surface of this Phantom Zone very much interfaces with our material world, even if the message it impresses upon us aches to be composed of an alien substance.

Digital video does the work of representation via a series of very clever algorithms called codecs that compress the amount of information needed to produce a moving image. Rather than the individual frames of film, each as visually rich and total as the last, in a codec only the difference between frames need be encoded, making each frame "more like a set of movement instructions than an image." [13] The painterly technique used in the General Zod trailer derives from a collapse between key (image) and reference (difference) frames at the stage of encoding. 
The process is called 'datamoshing', and has its origins in glitch art, a form of media manipulation predicated on those minute moments when the surface of an image or sound cracks open to reveal some aspect of the process that produced it. By cutting, repeating or glitching key and reference frames, visual representations are made to blend into one another: space becomes difference and time becomes image. The General Zod trailer homages/copies/steals the datamoshing technique, marking digital video's final move from convenient means of dissemination to palpable aesthetic and cultural influence.

In the actual movie, Man of Steel (2013), Zod's video message is transposed in its entirety to the fictional Planet Earth. The viral component of its movement around the web is entirely absent: its apparent digitality, therefore, remains somewhat intact, but only as a mere surface appearance. This time around the message shattering through The Phantom Zone is completely devoid of affective power: it frames nothing but its existence as a narrative device. The filmmakers rely on a series of "taking over the world" tropes to set the stage for General Zod's Earth-shaking proclamation. TV sets in stereotypical, exotic locales flicker into life, all broadcasting the same thing. Electronic billboards light up, loudspeakers blare, mobile phones rumble in pockets; indeed, all imaging technologies suddenly take on the role of prostheses for a single, datamoshed stream. In one particularly sincere moment of the montage a faceless character clutches a Nokia-brand smartphone in the centre of the shot and exclaims, "It's coming through the RSS feeds!" This surface, this Phantom Zone, frames an apparatus far vaster than a datamoshed image codec: an apparatus apparently impossible to represent through the medium of cinema. 
The surface appearance of the original viral trailer is only a small component of what constitutes the image it conveys, and thus, of the image it frames of our time. Digital materiality shows itself via poorly compressed video clips arriving through streams of overburdened bandwidth. Our understanding of what constitutes a digital image must then, according to Mark Hansen, “be extended to encompass the entire process by which information is made perceivable." [14]
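The key (image) and reference (difference) frame mechanism described here, and the datamoshing that abuses it, can be sketched as a toy codec. The scalar 'frames' and helper names below are invented for illustration and stand in for no real video standard:

```python
# Toy codec: a keyframe stores the image itself; each reference
# ("delta") frame stores only the change from the previous frame.

def encode(frames):
    """Encode a list of scalar 'images' as one keyframe plus deltas."""
    stream = [("key", frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        stream.append(("delta", cur - prev))  # movement instructions, not an image
    return stream

def decode(stream):
    """Rebuild frames; each delta is applied to whatever came before it."""
    out = []
    for kind, value in stream:
        out.append(value if kind == "key" else out[-1] + value)
    return out

frames = [10, 12, 15, 15, 20]
stream = encode(frames)
assert decode(stream) == frames  # faithful playback

# Datamoshing: swap the keyframe for a different image. The deltas still
# apply, so the old footage's "movement" animates the new content.
moshed = [("key", 100)] + stream[1:]
print(decode(moshed))  # [100, 102, 105, 105, 110]
```

The glitch, in this sketch, is simply a decoder dutifully applying difference frames to an image they were never computed against.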

In its cinematic and comic book guises, The Phantom Zone was depicted as "a kind of membrane dividing yet connecting two worlds that are alien to and also dependent upon each other." [15] The success of the datamoshed trailer comes from the way it broke through that interface, its visual surface bubbling with a new kind of viral, digital potential that encompasses and exposes the material engaged in its delivery. As cinematographic subjects we have an integral understanding of the materiality of film. Although we know that the frames of cinema are separate, we crave the illusion of movement, and the image of time, they create. The 'viral' datamoshed message corrupts this separation between image and movement, the viewer and the viewed. Not only does General Zod seem to push out from inside the numerical image; it is as if we, the viewing subjects enraptured by the digital event, have been consumed by its flow. The datamoshed Phantom Zone trailer takes one last, brave step beyond the apparatus of image production. Not only are the studio, the actor, and even the slick appeal of CGI framed out of its mode of delivery; arriving through a network that holds us complicit, this Phantom Zone frames the 'real' world in its entirety, rendering even the fictional world it appeals to devoid of affective impact. To take liberty with the words of Jean Baudrillard:

[Jorge Luis] Borges wrote: they are slaves to resemblance and representation; a day will come when they will try to stop resembling. They will go to the other side of the mirror and destroy the empire. But here, you cannot come back from the other side. The empire is on both sides. [16]

Once again, The Phantom Zone highlights the material mode of its delivery with uncanny exactness. We are now surrounded by images that supersede mere visual appearance: they generate and are generated by everything the digital touches, including us, the most important component of General Zod's 'viral' diffusion. The digital Phantom Zone extends to both sides of the flickering screen.

References

[1] Donna Haraway, Simians, Cyborgs and Women : The Reinvention of Nature. (London: Free Association Books Ltd, 1991), 149–181.

[2] Richard Donner, Superman, Action, Adventure, Sci-Fi, 1978.

[3] Spencer Gordon Bennet, Atom Man Vs. Superman, Sci-Fi, 1950.

[4] Scott Bukatman, Terminal Identity: The Virtual Subject in Postmodern Science Fiction (Durham: Duke University Press, 1993), 164.

[5] Grant Morrison, Supergods: Our World in the Age of the Superhero (London: Vintage Books, 2012), 62.

[6] Donner, Superman.

[7] Michael Thau, The Magic Behind the Cape, Documentary, Short, 2001. See: http://www.youtube.com/watch?v=bYXbzVJ6NzA&feature=youtu.be&t=4m12s

[8] Walter Benjamin, "The Work of Art in the Age of Its Technological Reproducibility," in The Work of Art in the Age of Its Technological Reproducibility, and Other Writings on Media (Cambridge, Mass.: Belknap Press of Harvard University Press, 2008), 26.

[9] Susan Buck-Morss, The Dialectics of Seeing: Walter Benjamin and the Arcades Project (MIT Press, 1991), 70.

[10] Jens Andermann, The Optic of the State: Visuality and Power in Argentina and Brazil (University of Pittsburgh Press, 2007), 5.

[11] Zack Snyder, Man of Steel, Action, Adventure, Fantasy, Sci-Fi, 2013.

[12] Man of Steel Viral - General Zod's Warning (2013) Superman Movie HD, 2013, http://www.youtube.com/watch?v=5QkfmqsDTgY.

[13] BackStarCreativeMedia, “Datamoshing—the Beauty of Glitch," April 9, 2009, http://backstar.com/blog/2009/04/09/datamoshing-the-beauty-of-glitch/.

[14] Mark B. Hansen, “Cinema Beyond Cybernetics, or How to Frame the Digital Image," Configurations 10, no. 1 (2002): 54, doi:10.1353/con.2003.0005.

[15] Mark Poster, The Second Media Age (Wiley, 1995), 20.

[16] Jean Baudrillard, “The Murder of the Sign," in Consumption in an Age of Information, ed. Sande Cohen and R. L. Rutsky (Berg, 2005), 11.  

]]>
Tue, 10 Sep 2013 08:00:00 -0700 http://rhizome.org/editorial/2013/sep/10/phantom-zone
<![CDATA[Do Artists Actually Confront Our New Technological Reality?]]> http://hyperallergic.com/56319/do-artists-actually-confront-our-new-technological-reality/

Art historian and associate professor at New York’s CUNY Graduate Center Claire Bishop has taken to the pages of Artforum’s September edition to issue a kind of rebuke to contemporary art. She argues, in an extended essay that only briefly detours into egregious artspeak, that though the new realities of technology and the internet provide the fundamental context for art currently being made, art and artists have failed to critically confront this context and are too content simply to respond and adapt to it. Bishop writes simplistically of digital art that “somehow the venture never really gained traction,” and that “the appearance and content of contemporary art have been curiously unresponsive to the total upheaval in our labor and leisure inaugurated by the digital revolution.” Is it really the case that art has been so nonreactive to such a huge change in our world?

Bishop rightly notes that, “Most art today deploys new technology at one if not most stages of its production, dissemination, and consumption.” As in any era, artists have taken to contemporary technology, adapting computers, portable projectors, and server networks as art-making materials (see Stan VanDerBeek’s 1963-66 “Movie-Drome” at the New Museum’s Ghosts in the Machine exhibition for one such example). Yet the author goes on to cite contemporary artists who aren’t exactly the names one immediately comes up with when considering the avant-garde of digital art. She considers Frances Stark, Thomas Hirschhorn, and Ryan Trecartin as artists who do make some effort to be technologically engaged, but Bishop fails to acknowledge other artists who truly confront digital technology, both appropriating it and reflecting on it critically.

]]>
Sat, 08 Sep 2012 06:07:00 -0700 http://hyperallergic.com/56319/do-artists-actually-confront-our-new-technological-reality/
<![CDATA[Rigid Implementation vs Flexible Materiality]]> http://machinemachine.net/text/research/rigid-implementation-vs-flexible-materiality

Wow. It’s been a while since I updated my blog. I intend to get active again here soon, with regular updates on my research. For now, I thought it might be worth posting a text I’ve been mulling over for a while (!). Yesterday I came across this old TED presentation by Daniel Hillis, and it set off a bunch of bells tolling in my head. His book The Pattern on the Stone was one I leafed through a few months back whilst hunting for some analogies about (digital) materiality. The resulting brainstorm is what follows. (This blog post, from even longer ago, acts as a natural introduction: On (Text and) Exaptation)

In the 1960s and 70s Roland Barthes named “The Text” as a network of production and exchange. Whereas “the work” was concrete, final – analogous to a material – “the text” was more like a flow, a field or event – open-ended. Perhaps even infinite. In From Work to Text, Barthes wrote:

The metaphor of the Text is that of the network… (Barthes 1979)

This semiotic approach to discourse, by initiating the move from print culture to “text” culture, also helped lay the ground for a contemporary politics of content-driven media. Skipping backwards through From Work to Text, we find this statement:

The text must not be understood as a computable object. It would be futile to attempt a material separation of works from texts.

I am struck here by Barthes’ use of the phrase “computable object”, as well as his attention to the “material”. Katherine Hayles, in her essay Print Is Flat, Code Is Deep (Hayles 2004), teases out the statement for us:

‘computable’ here mean[s] to be limited, finite, bound, able to be reckoned. Written twenty years before the advent of the microcomputer, his essay stands in the ironic position of anticipating what it cannot anticipate. It calls for a movement away from works to texts, a movement so successful that the ubiquitous ‘text’ has all but driven out the media-specific term book. 
Hayles notes that the “ubiquity” of Barthes’ term “Text” allowed – in its wake – an erasure of media-specific terms, such as “book”. In moving from The Work to The Text, we move not just between different politics of exchange and dissemination; we also move between different forms and materialities of mediation. (Manovich 2002) For Barthes the material work was computable, whereas the network of the text – its content – was not.

In 1936, the year that Alan Turing wrote his iconic paper ‘On Computable Numbers’, a German engineer by the name of Konrad Zuse built the first working digital computer. Like its industrial predecessors, Zuse’s computer was designed to function via a series of holes encoding its program. Born as much out of convenience as financial necessity, Zuse punched his programs directly into discarded reels of 35mm film-stock. Fused together by the technologies of weaving and cinema, Zuse’s computer announced the birth of an entirely new mode of textuality. The Z3, the world’s first working programmable, fully automatic computer, arrived in 1941. (Manovich 2002) A year earlier a young graduate by the name of Claude Shannon had published one of the most important master’s theses in history. In it he demonstrated that any logical expression of Boolean algebra could be programmed into a series of binary switches. Today computers still function with a logic indistinguishable from that of their mid-20th-century ancestors. What has changed is the material environment within which Boolean expressions are implemented. Shannon’s work first found itself manifest in the fragile rows of vacuum tubes that drove much of the technical innovation of the 40s and 50s. In time, the very same Boolean expressions were firing, domino-like, through millions of transistors etched onto the surface of silicon chips. If we were to query Shannon today, he might well gawp in amazement at the material advances computer technology has gone through. But if Shannon were to examine either your digital wristwatch or the world’s most advanced supercomputer in detail, he would once again feel at home in the simple binary – on/off – switches lining those silicon highways. Here the difference between how computers are implemented and what computers are made of digs the first of many potholes along our journey. 
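Shannon’s demonstration, that any Boolean expression can be enacted as a network of binary switches regardless of what the switches are made of, can be sketched in a few lines of Python. The gate and adder functions below are illustrative stand-ins of my own, not anything drawn from Shannon’s thesis:

```python
# Any Boolean expression reduces to combinations of elementary on/off
# switches. The substrate (relays, tubes, transistors) changes; the
# logic does not.

def AND(a, b): return a & b   # two switches in series
def OR(a, b):  return a | b   # two switches in parallel
def NOT(a):    return 1 - a   # a normally-closed switch

def xor(a, b):
    """Exclusive-or built purely from the three primitives above."""
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """The first step of binary arithmetic, from switches alone."""
    return xor(a, b), AND(a, b)  # (sum bit, carry bit)

# Print the truth table: the same table a relay or silicon version produces.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

The point of the sketch is the one made in the text: the half-adder’s behaviour is fixed by the logical arrangement, not by whether the switches are tinker-toy strings or etched silicon.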
We live in an era not only practically driven by the computer, but an era increasingly determined by the metaphors computers have injected into our language. Let us not make the mistake of presupposing that brains (or perhaps minds) are “like” computers. Tempting though it is to reduce the baffling complexities of the human being to the functions of the silicon chip, the parallel processor or the Wide Area Network, this reduction occurs most usefully at the level of metaphor and metonym. Again the mantra must be repeated that computers function through the application of Boolean logic and binary switches, something that cannot be said about the human brain with any confidence a posteriori. Later I will explore the consequences of the processing paradigm for our understanding of ourselves, but for now, or at least for the next few paragraphs, computers are to be considered in terms of their rigid implementation and flexible materiality alone. At the beginning of his popular science book, The Pattern on the Stone, (Hillis 1999) W. Daniel Hillis narrates one of his many tales on the design and construction of a computer. Built from tinker-toys, the computer in question was/is functionally complex enough to “play” tic-tac-toe (noughts and crosses). The tinker-toy was chosen to indicate the apparent simplicity of computer design, but as Hillis argues himself, he may very well have used pipes and valves to create a hydraulic computer, driven by water pressure, or stripped the design back completely, using flowing sand, twigs and twine, or any other recipe of switches and connectors. The important point is that the tinker-toy tic-tac-toe computer functions perfectly well for the task it is designed for – perfectly well, that is, until the tinker-toy material begins to fail. This failure is what Chapter 1 of this thesis is about: why it happens, why its happening is a material phenomenon and how the very idea of “failure” is suspect. 
Tinker-toys fail because the mechanical operation of the tic-tac-toe computer puts strain on the strings of the mechanism, eventually stretching them beyond practical use. In a perfect world, devoid of entropic behaviour, the tinker-toy computer might very well function forever, its users setting O or X conditions, and the computer responding according to its program in perfect, logical order. The design of the machine, at the level of the program, is completely closed; finished; perfect. Only materially does the computer fail (or flail), noise leaking into the system until inevitable chaos ensues and the tinker-toys crumble back into jumbles of featureless matter. This apparent closure is important to note at this stage because in a computer as simple as the tic-tac-toe machine, every variable can be accounted for and thus programmed for. Were we to build a chess-playing computer from tinker-toys (pretending we could get our hands on the, no doubt, millions of tinker-toy sets we’d need) the closed condition of the computer may be less simple to qualify. Tinker-toys, hydraulic valves or whatever material you choose could be manipulated into any computer system you can imagine; even the most brain-numbingly complicated IBM supercomputer is technically possible to build from these fundamental materials. The reason we don’t do this, why we instead choose etched silicon as the material for our supercomputers, exposes another aspect of computers we need to understand before their failure becomes a useful paradigm. A chess-playing computer is probably impossible to build from tinker-toys, not because its program would be too complicated, but because tinker-toys are too prone to entropy to create a valid material environment. 
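The closure of the tic-tac-toe machine can be made concrete: because the game is finite, the whole program can be written as a lookup table from board states to moves, and any substrate able to store and retrieve that table – strings, valves or silicon – runs the same computer. The board encoding and the handful of entries below are invented for the sketch:

```python
# A fragment of a tic-tac-toe program as a closed lookup table. Because
# the game is finite, every reachable board state can be enumerated in
# advance (fewer than 3**9 = 19683 strings of X, O and '.'), and the
# "computer" merely retrieves the programmed answer. Squares are
# numbered 0-8, left to right, top to bottom.

PROGRAM = {
    ".........": 4,   # empty board: take the centre
    "....X....": 0,   # opponent holds the centre: take a corner
    "X...O...X": 1,   # illustrative entry; a full table has them all
}

def move(board):
    """Look up the programmed response for a board state."""
    return PROGRAM[board]

print(move("........."))   # 4: the same output on any substrate
```

Nothing in the table can surprise its designer; only the material that stores and reads it can fail.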
The program of any chess-playing application could, theoretically, be translated into a tinker-toy equivalent, but after the 1,000th string had stretched, with millions more to go, no energy would be left in the system to trigger the next switch along the chain. Computer inputs and outputs are always at the mercy of this kind of entropy, whether in tinker-toys or miniature silicon highways. Noise and dissipation are inevitable at any material scale one cares to examine. The second law of thermodynamics ensures this. Claude Shannon and his ilk knew this, even back when the most advanced computers they had at their command couldn’t yet play tic-tac-toe. They knew that they couldn’t rely on materiality to delimit noise, interference or distortion; that no matter how well constructed a computer is, no matter how incredible it was at materially stemming entropy (perhaps with stronger string connectors, or a built-in de-stretching mechanism), entropy nonetheless was inevitable. But what Shannon and other computer innovators such as Alan Turing also knew is that their saviour lay in how computers were implemented. Again, the split here is incredibly important to note:

Flexible materiality: how and of what a computer is constructed, e.g. tinker-toys, silicon.
Rigid implementation: Boolean logic enacted through binary on/off switches (usually with some kind of input → storage → feedback/program function → output); effectively, how a computer works.
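The simplest piece of error management that rigid implementation affords is the parity bit. A minimal Python sketch, with helper names of my own rather than from any real protocol:

```python
# A parity bit records whether a group of bits contains an odd or even
# number of 1s. If noise flips a single bit in transit, the recount
# disagrees with the stored parity and the error is caught at the level
# of implementation, regardless of what material carried the signal.

def add_parity(bits):
    """Append an even-parity bit to a list of 0s and 1s."""
    return bits + [sum(bits) % 2]

def check(packet):
    """True if the packet still satisfies even parity."""
    return sum(packet) % 2 == 0

packet = add_parity([1, 0, 1, 1, 0, 1, 0])
assert check(packet)       # arrives intact

packet[2] ^= 1             # entropy: one bit flipped in transit
assert not check(packet)   # the error is detected (though not located)
```

A single parity bit detects any odd number of flipped bits but cannot locate or repair them; real systems layer richer codes on the same principle.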

Boolean logic was not enough on its own. Computers, if they were to avoid entropy ruining their logical operations, needed to have built within them an error-management protocol. This protocol is still in existence in every computer in the world. Effectively it takes the form of a collection of parity bits delivered alongside each packet of data that computers, networks and software deal with. The bulk of the data contains the binary bits encoding the intended quarry, but the receiving element in the system also checks the main bits against the parity bits to determine whether any noise has crept into the system. What is crucial to note here is that the error-checking of computers happens at the level of their rigid implementation. It is also worth noting that for every eight 0s and 1s delivered by a computer system, at least one of those bits serves an error-checking function. W. Daniel Hillis puts the stretched strings of his tinker-toy mechanism into clear distinction and, in doing so, re-introduces an umbrella term set to dominate this chapter:

I constructed a later version of the Tinker Toy computer which fixed the problem, but I never forgot the lesson of the first machine: the implementation technology must produce perfect outputs from imperfect inputs, nipping small errors in the bud. This is the essence of digital technology, which restores signals to near perfection at every stage. It is the only way we know – at least, so far – for keeping a complicated system under control. (Hillis 1999, 18)

Bibliography

Barthes, Roland. 1979. ‘From Work to Text.’ In Textual Strategies: Perspectives in Poststructuralist Criticism, ed. Josue V. Harari, 73–81. Ithaca, NY: Cornell University Press.
Hayles, N. Katherine. 2004. ‘Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis.’ Poetics Today 25 (1) (March): 67–90. doi:10.1215/03335372-25-1-67.
Hillis, W. 1999. The Pattern on the Stone: The Simple Ideas That Make Computers Work. 1st paperback ed. New York: Basic Books.
Manovich, Lev. 2002. The Language of New Media. 1st MIT Press pbk. ed. Cambridge, Mass.: MIT Press.

]]>
Thu, 07 Jun 2012 06:08:07 -0700 http://machinemachine.net/text/research/rigid-implementation-vs-flexible-materiality
<![CDATA[Dissemination by Jacques Derrida]]> http://www.librarything.com/work/book/86151304

Continuum International Publishing Group Ltd. (2004), Paperback, 448 pages

]]>
Tue, 29 May 2012 07:10:52 -0700 http://www.librarything.com/work/book/86151304
<![CDATA[Information Wants to be Consumed]]> http://userwww.sfsu.edu/~rlrutsky/RR/Consumption.pdf

 Although information spreads, virus-like, through replication, this replication, as Walter Benjamin foresaw, involves a dispersion that allows images or data to be seen in different places, in different contexts (what Benjamin (1969) called “exhibition value”). It is, however, only through the process of consumption that this reproduction and dissemination of data can occur. Consumption, in short, is the means by which information, whether expensive or free, reproduces and spreads. Information, in fact, depends upon consumption for its very existence. Without being consumed, it ceases to be information in any practical sense, becoming merely a static and inaccessible knowledge, an eternal and unreachable verity. Information is, by definition, consumable. It is less the case, then, that “information wants to be free” than that “information wants to be consumed.”

]]>
Wed, 03 Aug 2011 06:00:18 -0700 http://userwww.sfsu.edu/~rlrutsky/RR/Consumption.pdf
<![CDATA[Credit in the Straight WWW: "DDDDoomed", Berger, and the Image Aggregator]]> http://2thewalls.com/journal/2011/1/10/credit-in-the-straight-www-ddddoomed-berger-and-the-image-ag.html

[ED: Nearly all of the text in this post is taken from R. Gerald Nelson's independently published, occasionally problematic but more often brilliantly concise treatise DDDDoomed. Anyone concerned with issues of and methods pertaining to digital image dissemination, authorship and context should make an effort to purchase and read this chapbook. I cannot recommend it enough.]

"With new blogs springing up every day, beautiful images & words are springing up with them. I try to credit everything I put on this blog. I know sometimes I fail. Many of the images I feature are scanned by me from an extensive library- I only scanned them. They are not mine to claim. I am always surprised, amused, dismayed when I see bloggers paste watermark images over images they have scanned, or even more surprising- claim ownership of images from magazines, the content of magazines barely having even reached subscribers- by adding footnotes to their blogs like:

]]>
Tue, 15 Mar 2011 08:01:21 -0700 http://2thewalls.com/journal/2011/1/10/credit-in-the-straight-www-ddddoomed-berger-and-the-image-ag.html
<![CDATA[On (Text and) Exaptation]]> http://machinemachine.net/text/ideas/on-text-and-exaptation

(This post was written as a kind of ‘prequel’ to a previous essay, Rancière’s Ignoramus.)

‘Text’ originates from the Latin word texere, to weave. A material craft enabled by a human ingenuity for loops, knots and pattern. Whereas a single thread may collapse under its own weight, looped and intertwined threads derive their strength and texture as a network. The textile speaks of repetition and multiplicity, yet it is only once we back away from the tapestry that the larger picture comes into focus. At an industrial scale, textile looms expanded beyond the frame of their human operators. Reducing a textile design to a system of coded instructions, the complex web of a decorative rug could be fixed into the gears and pulleys that drove the clattering apparatus. In later machines, long reels of card, punched through with holes, told a machine how, or what, to weave. Not only could carpets and textiles themselves be repeated, with less chance of error, but the punch-cards that ordered them were now equally capable of being mass-produced for a homogeneous market. From one industrial loom an infinite number of textile variations could be derived. All one needed to do was feed more punch-card into the greedy, demanding reels of the automated system.

The material origins of film may also have been inspired by weaving. Transparent reels of celluloid were pulled through mechanisms resembling the steam-driven contraptions of the industrial revolution. The holes running down its edges delimit a reel’s flow. Just as the circular motion of a mechanical loom is translated into a network of threads, so the material specificity of the film-stock and projector weave the illusion of cinematic time. Some of the more archaic, out-moded types of film are known to shrink slightly as they decay, affording us – the viewer – a juddering, inconsistent vision of the world captured in the early 20th century. 
In 1936, the year that Alan Turing wrote his iconic paper “On Computable Numbers”, a German engineer by the name of Konrad Zuse built the first working digital computer. Like its industrial predecessors, Zuse’s computer was designed to function via a series of holes encoding its program. Born as much out of convenience as financial necessity, Zuse punched his programs directly into discarded reels of 35mm film-stock. Fused together by the technologies of weaving and cinema, Zuse’s digital computer announced the birth of an entirely new mode of textuality. As Lev Manovich suggests:

“The pretence of modern media to create simulations of sensible reality is… cancelled; media are reduced to their original condition as information carrier, nothing less, nothing more… The iconic code of cinema is discarded in favour of the more efficient binary one. Cinema becomes a slave to the computer.”

Rather than Manovich’s ‘slave’ / ‘master’ relationship, I want to suggest a kind of lateral pollination of media traits. As technologies develop, specificities from one medium are co-opted by another. Reverting to biological metaphor, we see genetic traits jumping between media species. From a recent essay by Svetlana Boym, The Off-Modern Mirror:

“Exaptation is described in biology as an example of “lateral adaptation,” which consists in a cooption of a feature for its present role from some other origin… Exaptation is not the opposite of adaptation; neither is it merely an accident, a human error or lack of scientific data that would in the end support the concept of adaptation. Exaptation questions the very process of assigning meaning and function in hindsight, the process of assigning the prefix “post” and thus containing a complex phenomenon within the grid of familiar interpretation.”

Media history is littered with exaptations. Features specific to certain media are exapted – co-opted – as matters of convenience, technical necessity or even aesthetics. 
Fashion has a role to play also: many of the early models of mobile phone, for instance, sported huge, extendible aerials which the manufacturers now admit had no impact whatsoever on the workings of the technology.

Lev Manovich’s suggestion is that as the computer has grown in its capacities, able to re-present all other forms of media on a single apparatus, the material traits that define each medium have been co-opted by the computer at the level of software and interface. A strip of celluloid has a definite weight, chemistry and shelf-life – a material history with origins in the mechanisms of the loom. Once we encode the movie into the binary workings of a digital computer, each media-specific – material – trait can be reduced to an informational equivalent. If I want to increase the frames per second of a celluloid film, I have to physically wind the reel faster. For its digital, computer-encoded equivalent, the code that re-presents each frame can simply be altered via my desktop video-editing software. Computer code crowns content as king.

In the 1960s and 70s Roland Barthes named ‘The Text’ as a network of production and exchange. Whereas ‘the work’ was concrete and final – analogous to a material – ‘the text’ was more like a flow, a field or event: open-ended, perhaps even infinite. In From Work to Text, Barthes wrote: “The metaphor of the Text is that of the network…” This semiotic approach to discourse, by initiating the move from print culture to ‘text’ culture, also helped lay the ground for a contemporary politics of content-driven media.

Skipping backwards through From Work to Text, we find this statement: “The text must not be understood as a computable object. It would be futile to attempt a material separation of works from texts.” I am struck here by Barthes’ use of the phrase ‘computable object’, as well as his attention to the ‘material’.
Katherine Hayles, in her essay Print Is Flat, Code Is Deep, teases out the statement for us:

“‘computable’ here mean[s] to be limited, finite, bound, able to be reckoned. Written twenty years before the advent of the microcomputer, his essay stands in the ironic position of anticipating what it cannot anticipate. It calls for a movement away from works to texts, a movement so successful that the ubiquitous ‘text’ has all but driven out the media-specific term book.”

Hayles notes that the ubiquity of Barthes’ term ‘Text’ allowed – in its wake – an erasure of media-specific terms such as ‘book’. In moving from The Work to The Text, we move not just between different politics of exchange and dissemination; we also move between different forms and materialities of mediation. To echo (and subvert) the words of Marshall McLuhan: not only is The Medium the Message, The Message is also the Medium.

“…media are only a subspecies of communications which includes all forms of communication. For example, at first people did not call the internet a medium, but now it has clearly become one… We can no longer understand any medium without language and interaction – without multimodal processing… We are now clearly moving towards an integration of all kinds of media and communications, which are deeply interconnected.”

Extract from a 2005 interview with Manuel Castells, Global Media and Communication journal

]]>
Mon, 06 Dec 2010 13:41:24 -0800 http://machinemachine.net/text/ideas/on-text-and-exaptation
<![CDATA[Boris Groys, Religion in the Age of Digital Reproduction]]> http://www.e-flux.com/journal/view/49

The general consensus of the contemporary mass media is that the return of religion has emerged as the most important factor in global politics and culture today. Now, those who currently refer to a revival of religion clearly do not mean anything like the second coming of the Messiah or the appearance of new gods and prophets. What they are referring to rather is that religious attitudes have moved from culturally marginal zones into the mainstream. If this is the case, and statistics would seem to corroborate the claim, the question then arises as to what may have caused religious attitudes to become mainstream.

The survival and dissemination of opinions on the global information market is regulated by a law formulated by Charles Darwin, namely, the survival of the fittest. Those opinions that best adapt to the conditions under which they are disseminated will, as a matter of course, have the best odds of becoming mainstream. Today’s opinions market, however, is clearly characterized…

]]>
Sun, 17 Oct 2010 12:22:00 -0700 http://www.e-flux.com/journal/view/49
<![CDATA[Liam Gillick: The Discursive | Journal / e-flux]]> http://www.e-flux.com/journal/view/35

A discursive model of praxis has developed within the critical art context over the last twenty years. It is the offspring of critical theory and improvised, self-organized structures. It is the basis of art that involves the dissemination of information. It plays with social models and presents speculative constructs both within and beyond traditional gallery spaces. It is indebted to conceptual art’s reframing of relationships, and it requires decentered and revised histories in order to evolve.

]]>
Mon, 16 Feb 2009 05:59:00 -0800 http://www.e-flux.com/journal/view/35