MachineMachine /stream - search for computation https://machinemachine.net/stream/feed

<![CDATA[Quanta Magazine]]> https://www.quantamagazine.org/which-computational-universe-do-we-live-in-20220418/

Cryptographers want to know which of five possible worlds we inhabit, which will reveal whether truly secure cryptography is even possible. Many computer scientists focus on overcoming hard computational problems.

]]>
Fri, 03 Jun 2022 05:52:34 -0700 https://www.quantamagazine.org/which-computational-universe-do-we-live-in-20220418/
<![CDATA[The Staggering Ecological Impacts of Computation and the Cloud | The MIT Press Reader]]> https://thereader.mitpress.mit.edu/the-staggering-ecological-impacts-of-computation-and-the-cloud/

Anthropologist Steven Gonzalez Monserrate draws on five years of research and ethnographic fieldwork in server farms to illustrate some of the diverse environmental impacts of data storage. The Cloud is not only material, but is also an ecological force.

]]>
Mon, 28 Feb 2022 00:52:17 -0800 https://thereader.mitpress.mit.edu/the-staggering-ecological-impacts-of-computation-and-the-cloud/
<![CDATA[Why “the Mind Is Just a Computation” Is a Fatally Flawed Idea | Mind Matters]]> https://mindmatters.ai/2021/03/why-the-mind-is-just-a-computation-is-a-fatally-flawed-idea/

The computational theory of mind (CTM) is the theory that the mind is a computation (calculation) done by the brain. That is, the mind works by rule-based manipulation of symbols, which is what a computer does — computation. Thus our mental states are computational states.

]]>
Wed, 31 Mar 2021 07:55:17 -0700 https://mindmatters.ai/2021/03/why-the-mind-is-just-a-computation-is-a-fatally-flawed-idea/
<![CDATA[Everything but the Clouds]]> https://vimeo.com/241966869

In didactic texts, artist talks, personal websites, and private interviews Cory Arcangel describes Super Mario Clouds as “an old Mario Brothers cartridge which I modified to erase everything but the clouds.” Exhibited at the Whitney Museum of American Art in 2004, 2009, 2011, and 2015, the game’s blue sky and leftward-floating cloud forms have come to represent not only Arcangel’s twenty-first-century pop art practice but one horizon of videogames as an artistic medium. However, attempting to reverse engineer Super Mario Clouds according to the artist’s original source code distributed in exhibition catalogues, documentary videos, DIY websites, and GitHub repositories reveals that Arcangel’s ROM hack does not actually contain Nintendo’s ROM. Despite claims of erasing “everything but the clouds,” there is no erasure. There is a discrepancy between art historical accounts and the technical operations of Arcangel’s artwork. This video documents the history of Super Mario Clouds and demonstrates the results of my own attempt to “erase everything but the clouds,” a ROM hacking exercise that produces a different game altogether. This example of practice-based research and digital art history operates at the intersection of close playing, critical code studies, and media archeology to articulate the intractable materiality of the mechanical, electrical, computational, and even economic processes that characterize videogames as technical media and ultimately disrupt Arcangel’s narrative of erasure.

Cast: Patrick LeMieux

]]>
Tue, 14 Nov 2017 08:44:46 -0800 https://vimeo.com/241966869
<![CDATA[A manifesto for algorithms in the environment]]> http://www.theguardian.com/science/political-science/2015/oct/05/a-manifesto-for-algorithms-in-the-environment

Algorithms – step-by-step sequences of operations that solve specific computational tasks – are transforming the world around us. They support sophisticated search engines, voice recognition software, online transactions, data compression, targeted advertising and self-driving cars.

]]>
Sun, 18 Oct 2015 08:10:43 -0700 http://www.theguardian.com/science/political-science/2015/oct/05/a-manifesto-for-algorithms-in-the-environment
<![CDATA[Fillip / Speed Trials: A Conversation about Accelerationist Politics (Mohammad Salemy, Nick Srnicek, and Alex Williams)]]> http://fillip.ca/content/speed-trials

Speed Trials: A Conversation about Accelerationist Politics
Mohammad Salemy, Nick Srnicek, and Alex Williams

Mohammad Salemy – As a curator, I have been investigating the synthesis of mass computation and mass telecommunication, or what I have called telecomputation, [1] and its role in both the t…

]]>
Sun, 23 Aug 2015 07:55:34 -0700 http://fillip.ca/content/speed-trials
<![CDATA[Benjamin Bratton. The Post-Anthropocene. 2015]]> http://www.youtube.com/watch?v=FrNEHCZm_Sc

http://www.egs.edu

The Post-Anthropocene: The Turing-incomplete Orchid Mantis Evolves Machine Vision. Public open lecture for the students and faculty of the European Graduate School (EGS), Media and Communication Studies department, Saas-Fee, Switzerland, 2015.

Benjamin H. Bratton (b. 1968) is an American theorist, sociologist, and professor of visual arts, contemporary social and political theory, philosophy, and design. His research deals with computational media and infrastructure, design research management and methodologies, classical and contemporary sociological theory, architecture and urban design issues, and the politics of synthetic ecologies and biologies.

Bratton completed his doctoral studies in the sociology of technology at the University of California, Santa Barbara, and was the Director of the Advanced Strategies Group at Yahoo! before expanding his cross-disciplinary research and practice in academia. He taught in the Department of Design/Media Art at UCLA from 2003 to 2008, and at SCI-Arc (the Southern California Institute of Architecture) for a decade, where he continues to teach as a member of the Visiting Faculty. While at SCI-Arc, Benjamin Bratton and Hernan Diaz-Alonso co-founded the XLAB courses, which placed students in laboratory settings where they could work directly and comprehensively in robotics, scripting, biogenetics, genetic codification, and cellular systems. Currently, in addition to his professorship at EGS, Bratton is an associate professor of Visual Arts at the University of California, San Diego, where he also directs the Center for Design and Geopolitics, partnering with the California Institute for Telecommunications and Information Technology.

In addition to his formal positions, Benjamin H. Bratton is a regular visiting lecturer at numerous universities and institutions, including Columbia University, Yale University, Pratt Institute, Bartlett School of Architecture, University of Pennsylvania, University of Southern California, University of California, Art Center College of Design, Parsons The New School for Design, University of Michigan, Brown University, The University of Applied Arts in Vienna, Bauhaus-University, Moscow State University, Moscow Institute for Higher Economics, and the Architectural Association School of Architecture in London.

Bratton’s current projects focus on the political geography of cloud computing, massively granular universal addressing systems, and alternate models of ecological governance. In his most recent book, The Stack: On Software and Sovereignty (MIT Press, 2015), Bratton asks the question, “What has planetary-scale computation done to our geopolitical realities?” and in response offers the proposition “that smart grids, cloud computing, mobile software and smart cities, universal addressing systems, ubiquitous computing, and other types of apparently unrelated planetary-scale computation can be viewed as forming a coherent whole—an accidental megastructure called The Stack that is both a computational apparatus and a new geopolitical architecture.”

Other recent texts include: Some Trace Effects of the Post-Anthropocene: On Accelerationist Geopolitical Aesthetics; On Apps and Elementary Forms of Interfacial Life: Object, Image, Superimposition; Deep Address; What We Do is Secrete: On Virilio, Planetarity and Data Visualization; Geoscapes & the Google Caliphate: On Mumbai Attacks; Root the Earth: On Peak Oil Apophenia; Suspicious Images/Latent Interfaces (with Natalie Jeremijenko); iPhone City; and Logistics of Habitable Circulation (introduction to the 2008 edition of Paul Virilio’s Speed and Politics). Recent online lectures include: 2 or 3 Things I Know About The Stack, at the Bartlett School of Architecture, University of London, and the University of Southampton; Cloud Feudalism, at Proto/E/Co/Logics 002, Rovinj, Croatia; Nanoskin, at Parsons School of Design; On the Nomos of the Cloud, at the Berlage Institute, Rotterdam, École Normale Supérieure, Paris, and MOCA, Los Angeles; Accidental Geopolitics, at The Guardian Summit, New York; Ambivalence and/or Utopia, at the University of Michigan and UC Irvine; and Surviving the Interface, at Parsons School of Design.

]]>
Tue, 18 Aug 2015 08:42:48 -0700 http://www.youtube.com/watch?v=FrNEHCZm_Sc
<![CDATA[The 3D Additivist Cookbook: Are You a 3D Printing Radical?]]> http://additivism.org/post/120021614696

additivism: a movement that aims to disrupt material, social, computational, and metaphysical realities through provocation, collaboration, and ‘weird’/science-fictional thinking. This is your chance to get in on the 3D conversation.

]]>
Wed, 27 May 2015 07:04:15 -0700 http://additivism.org/post/120021614696
<![CDATA[Parisi: For a New Computational Aesthetics: Algorithmic Environments as Actual Objects.]]> https://vimeo.com/72181685

Abstract: Algorithms are at the core of computational logic. Formalism and axiomatics have also determined how the shortest algorithmic set or program deploys the most elegant form. This equivalence between axiomatics and beauty, however, hides a profound ontological ground based on order, rationality and cognition. This paper suggests that the pervasion of ubiquitous media, and in particular of software agencies (from page-ranking software to software for urban design), points to the formation of a new computational aesthetics defined by prehending algorithms. The paper will argue that this new mode of prehension defies the ontological ground of order and cognition, revealing that randomness (or non-compressible data) is at the core of computation. The paper will draw on Alfred N. Whitehead’s notion of actual objects and Gregory Chaitin’s theory of the uncomputable to suggest that algorithms need to be understood in terms of an ecology of prehensions. This understanding implies a notion of computational aesthetics defined by the chaotic architecture of data hosted by our programming culture.

Luciana Parisi is a Senior Lecturer in Interactive Media at the Centre for Cultural Studies at Goldsmiths, University of London. She is the author of Abstract Sex: Philosophy, Bio-Technology and the Mutations of Desire (London/New York, 2004) and a progressive thinker in the emerging fields of media ecology and technoecology. Her research looks at the asymmetric relationship between science and philosophy, aesthetics and culture, technology and politics to investigate potential conditions for ontological and epistemological change. Her work on cybernetics and information theories, evolutionary theories, genetic coding and viral transmission has informed her analysis of culture and politics, the critique of capitalism, power and control. She has published articles on the relation between cybernetic machines, memory and perception in the context of a non-phenomenological critique of computational media, and in relation to emerging strategies of branding and marketing. Her interest in interactive media has also led her research to engage more closely with computation, cognition, and algorithmic aesthetics. She is currently writing on architectural modeling and completing a monograph, Contagious Architecture: Computation, Aesthetics and the Control of Space (MIT Press, forthcoming).

Cast: bkm. Tags: Media Science, Media, bkm, Bochumer Kolloquium Medienwissen, Computational Aesthetics, Algorithmic Environments, Luciana Parisi, Media Ecology and Ruhr-Universität Bochum

]]>
Mon, 25 May 2015 02:14:34 -0700 https://vimeo.com/72181685
<![CDATA[Algorithmic Narratives and Synthetic Subjects (paper)]]> http://machinemachine.net/portfolio/paper-at-theorizing-the-web-synthetic-subjects/

This was the paper I delivered at the Theorizing the Web conference, New York, 18th April 2015. This video of the paper begins part-way in, and misses out some important stuff. I urge you to watch the other, superb, papers on my panel by Natalie Kane, Solon Barocas, and Nick Seaver. A better video is forthcoming. I posted this up partly in response to this post at Wired about the UK election, Facebook’s echo-chamber effect, and other implications well worth reading into.

Data-churning algorithms are integral to our social and economic networks. Rather than replace humans, these programs are built to work with us, allowing the distinct strengths of human and computational intelligences to coalesce. As we are submerged into the era of ‘big data’, these systems have become more and more common, concentrating every terabyte of raw data into meaningful arrangements more easily digestible by high-level human reasoning. A company calling themselves ‘Narrative Science’, based in Chicago, have established a profitable business model based on this relationship. Their slogan, ‘Tell the Stories Hidden in Your Data’, [1] is aimed at companies drowning in spreadsheets of cold information: a promise that Narrative Science can ‘humanise’ their databases with very little human input. Kristian Hammond, Chief Technology Officer of the company, claims that within 15 years over 90% of all news stories will also be written by algorithms. [2] But rather than replacing the jobs that human journalists now undertake, Hammond claims the vast majority of their ‘robonews’ output will report on data currently not covered by traditional news outlets. One family-friendly example of this is the coverage of little-league baseball games. Very few news organisations have the resources, or desire, to hire a swathe of human journalists to write up every little-league game. Instead, Narrative Science offer leagues, parents and their children a miniature summary of each game, gleaned from match statistics uploaded by diligent little-league attendees, and written up by Narrative Science in a variety of journalistic styles.

In their book Big Data, from 2013, Oxford University Professor of internet governance Viktor Mayer-Schönberger and ‘data editor’ of The Economist Kenneth Cukier tell us excitedly about another data aggregation company, Prismatic, who “rank content from the web on the basis of text analysis, user preferences, social network-popularity, and big-data analysis.” [3] According to Mayer-Schönberger and Cukier this makes Prismatic able ‘to tell the world what it ought to pay attention to better than the editors of the New York Times’. [4] A situation, Steven Poole reminds us, we can little argue with, so long as we agree that popularity underlies everything that is culturally valuable.

Data is now the lifeblood of technocapitalism: a vast, endless influx of information flowing in from the growing universe of networked and internet-connected devices. As many of the papers at Theorizing the Web attest, our environment is more and more founded by systems whose job it is to mediate our relationship with this data. Technocapitalism still appears to respond to Jean-François Lyotard’s formulation of Postmodernity: that whether something is true has less relevance than whether it is useful. In 1979 Jean-François Lyotard described the Postmodern Condition as a change in “the status of knowledge” brought about by new forms of techno-scientific and techno-economic organisation. If a student could be taught effectively by a machine, rather than by another human, then the most important thing we could give the next generation was what he called “elementary training in informatics and telematics.” In other words, as long as our students are computer literate, “pedagogy would not necessarily suffer”. [5] The next passage – where Lyotard marks the Postmodern turn from the true to the useful – became one of the book’s most widely quoted, and it is worth repeating here at some length:

It is only in the context of the grand narratives of legitimation – the life of the spirit and/or the emancipation of humanity – that the partial replacement of teachers by machines may seem inadequate or even intolerable. But it is probable that these narratives are already no longer the principal driving force behind interest in acquiring knowledge. [6]

Here, I want to pause to set in play at least three elements from Lyotard’s text that colour this paper. Firstly, the historical confluence between technocapitalism and the era now considered ‘postmodern’. Secondly, the association of ‘the grand narrative’ with modern and pre-modern conditions of knowledge. And thirdly, the idea that the relationship between the human and the machine – or computer, or software – is generally one-sided: i.e. we may shy away from the idea of leaving the responsibility of our children’s education to a machine, but Lyotard’s position presumes that since the machine was created and programmed by humans, it will therefore necessarily be understandable, and thus controllable, by humans.

Today, Lyotard’s vision of an informatically literate populace has more or less come true. Of course we do not completely understand the intimate workings of all our devices or the software that runs them, but the majority of the world population has some form of regular relationship with systems simulated on silicon. And as Lyotard himself made clear, the uptake of technocapitalism, and therefore of the devices and systems it propagates, is piecemeal and difficult to predict or trace. At the same time as Google’s fleet of self-driving motor vehicles is let loose on Californian state highways, in parts of sub-Saharan Africa models of mobile phones designed ten or more years ago are allowing farming communities to aggregate their produce into quantities with greater potential to make profit on a world market. As Brian Massumi remarks, network technology allows us the possibility of “bringing to full expression a prehistory of the human”, a “worlding of the human” that marks the “becoming-planetary” of the body itself. [7] This “worlding of the human” represents what Edmund Berger argues is the death of the Postmodern condition itself:

[T]he largest bankruptcy of Postmodernism is that the grand narrative of human mastery over the cosmos was never unmoored and knocked from its pulpit. Instead of making the locus of this mastery large aggregates of individuals and institutions – class formations, the state, religion, etc. – it simply has shifted the discourse towards the individual his or herself, promising them a modular dreamworld for their participation… [8]

Algorithmic narratives appear to continue this trend. They are piecemeal, tending to feed back users’ dreams, wants and desires through carefully aggregated, designed, packaged narratives for individual ‘use’. A world not of increasing connectivity and understanding between entities, but a network worlded to each individual’s data-shadow. This situation is reminiscent of the problem pointed out by Eli Pariser of the ‘filter bubble’, or the ‘you loop’: a prevalent outcome of social media platforms tweaked and personalised by algorithms to echo at the user exactly the kind of thing they want to hear. As algorithms develop in complexity, the stories they tell us about the vast sea of data will tend to become more and more enamoring, more and more palatable.
Like some vast synthetic evolutionary experiment, those algorithms that devise narratives users dislike will tend to be killed off in the feedback loop, in favour of other algorithms whose turn of phrase, or ability to stoke our egos, is more pronounced. For instance, Narrative Science’s early algorithms for creating little-league narratives tended to focus on the victors of each game. What Narrative Science found is that parents were more interested in hearing about their own children, the tiny ups and downs that made the game significant to them. So the algorithms were tweaked in response. Again, to quote chief scientist Kris Hammond from Narrative Science: “These are narratives generated by systems that understand data, that give us information to support the decisions we need to make about tomorrow.” [9]

Whilst we can program software to translate the informational nuances of a baseball game, or internet social trends, into human-palatable narratives, larger social, economic and environmental events also tend to get pushed through an algorithmic meatgrinder to make them more palatable. The ‘tomorrow’ that Hammond claims his company can help us prepare for is one that, presumably, companies like Narrative Science and Prismatic will play an ever larger part in realising. In her recently published essay on Crisis and the Temporality of Networks, Wendy Chun reminds us of the difference between the user and the agent in the machinic assemblage:

Celebrations of an all powerful user/agent – ‘you’ as the network, ‘you’ as the producer – counteract concerns over code as law as police by positing ‘you’ as the sovereign subject, ‘you’ as the decider. An agent however, is one who does the actual labor, hence agent is one who acts on behalf of another. On networks, the agent would seem to be technology, rather than the users or programmers who authorize actions through their commands and clicks. [10]

In order to unpack Wendy Chun’s proposition here we need only look at two of the most powerful, and impactful, algorithms of the last ten years of the web: firstly, Amazon’s recommendation system, which I assume you have all interacted with at some point; and secondly, Facebook’s news feed algorithm, which ranks and sorts posts on your personalised stream. Both these algorithms rely on a community of user interactions to establish a hierarchy of products, or posts, based on popularity. Both these algorithms also function in response to users’ past activity, and both, of course, have been tweaked and altered over time by the design and programming teams of the respective companies. As we are all no doubt aware, one of the most significant driving principles behind these extraordinarily successful pieces of code is capitalism itself: the drive for profit, and the bearing that has on distinguishing between a successful or failing company, service or product. Wendy Chun’s reminder that those who carry out an action, who program and click, are not the agents here should give us solace. We are positioned as sovereign subjects over our data because that idea is beneficial to the propagation of the ‘product’. Whether we are told how well our child has done at baseball, or what particular kinds of news stories we might like, personally, to read right now, it is to the benefit of technocapitalism that those narratives are positive, palatable and uncompromising.
However the aggregation and dissemination of big data affects our lives over the coming years, the likelihood is that at the surface – on our screens and ubiquitous handheld devices – everything will seem rosy, comfortable, and suited to the ‘needs’ and ‘use’ of each sovereign subject.
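As a toy illustration of the two mechanisms described above, here is a minimal sketch in Python. It assumes nothing about Narrative Science’s or Facebook’s actual systems; the templates, names and engagement model are all hypothetical stand-ins. A template ‘robonews’ generator produces differently angled stories from the same match statistics, and a crude feedback loop kills off the variants readers engage with least:

```python
import random

# Hypothetical 'robonews' templates: each variant angles the same match
# statistics differently (not Narrative Science's actual system).
TEMPLATES = {
    "victors":  "{winner} beat {loser} {ws}-{ls} in Saturday's little-league game.",
    "your_kid": "{kid} had a big day at the plate as {winner} edged {loser} {ws}-{ls}.",
}

def narrate(stats, variant):
    # Render one story variant from the raw game statistics.
    return TEMPLATES[variant].format(**stats)

def select_variant(stats, engagement, rounds=1000):
    # Feedback loop: variants that readers click survive; the rest die off.
    scores = {variant: 0 for variant in TEMPLATES}
    for _ in range(rounds):
        variant = random.choice(list(TEMPLATES))
        clicked = random.random() < engagement(narrate(stats, variant))
        scores[variant] += 1 if clicked else -1
    return max(scores, key=scores.get)

stats = {"winner": "Ducks", "loser": "Hawks", "ws": 7, "ls": 5, "kid": "Sam"}
# Hypothetical audience: parents engage far more with stories naming their child.
parents = lambda story: 0.9 if "Sam" in story else 0.2
print(select_variant(stats, parents))  # almost always 'your_kid'
```

The point is the selection pressure, not the templates: whichever way of telling the story is most palatable to its audience is the one that propagates.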

TtW15 #A7 @npseaver @nd_kane @s010n @smwat pic.twitter.com/BjJndzaLz1

— Daniel Rourke (@therourke) April 17, 2015

So to finish I just want to gesture towards a much, much bigger debate that I think we need to have about big data, technocapitalism and its algorithmic agents. To do this I just want to read a short paragraph which, as far as I know, was not written by an algorithm:

Surface temperature is projected to rise over the 21st century under all assessed emission scenarios. It is very likely that heat waves will occur more often and last longer, and that extreme precipitation events will become more intense and frequent in many regions. The ocean will continue to warm and acidify, and global mean sea level to rise. [11]

This is from a document entitled ‘Synthesis Report for Policy Makers’, drafted by the Intergovernmental Panel on Climate Change – another organisation that relies on a transnational network of computers, sensors, and programs capable of modelling atmospheric, chemical and wider environmental processes to collate data on human environmental impact. Ironically then, perhaps the most significant tool we have to understand the world, at present, is big data. Never before has humankind had so much information to help us make decisions, and help us enact changes on our world, our society, and our selves. But the problem is that some of the stories big data has to tell us are too big to be narrated; they are just too big to be palatable. To quote Edmund Berger again:

For these reasons we can say that the proper end of postmodernism comes in the gradual realization of the Anthropocene: it promises the death of the narrative of human mastery, while erecting an even grander narrative. If modernism was about victory of human history, and postmodernism was the end of history, the Anthropocene means that we are no longer in a “historical age but also a geological one. Or better: we are no longer to think history as exclusively human…” [12]

I would argue that the ‘grand narratives of legitimation’ Lyotard claimed we left behind in the move to Postmodernity will need to return in some way if we are to manage big data in a meaningful way. Crises such as catastrophic climate change will never be made palatable in the feedback between users, programmers and technocapitalism. Instead, we need to revisit Lyotard’s distinction between the true and the useful. Rather than ask how we can make big data useful for us, we need to ask what grand story we want that data to tell us.

References

[1] Source: www.narrativescience.com, accessed 15/10/14.
[2] Steven Levy, “Can an Algorithm Write a Better News Story Than a Human Reporter?”, WIRED, April 24, 2012, http://www.wired.com/2012/04/can-an-algorithm-write-a-better-news-story-than-a-human-reporter/.
[3] “Steven Poole – On Algorithms,” Aeon Magazine, accessed May 8, 2015, http://aeon.co/magazine/technology/steven-poole-can-algorithms-ever-take-over-from-humans/.
[4] Ibid.
[5] Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge, repr., Theory and History of Literature 10 (Manchester: Manchester University Press, 1992), 50.
[6] Ibid., 51.
[7] Brian Massumi, Parables for the Virtual: Movement, Affect, Sensation (Duke University Press, 2002), 128.
[8] Edmund Berger, “The Anthropocene and the End of Postmodernism,” Synthetic Zero, n.d., http://syntheticzero.net/2015/04/01/the-anthropocene-and-the-end-of-postmodernism/.
[9] Source: www.narrativescience.com, accessed 15/10/14.
[10] Wendy Chun, “Crisis and the Temporality of Networks,” in The Nonhuman Turn, ed. Richard Grusin (Minneapolis: University of Minnesota Press, 2015), 154.
[11] Rajendra K. Pachauri et al., “Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change,” 2014, http://epic.awi.de/37530/.
[12] Berger, “The Anthropocene and the End of Postmodernism.”

]]>
Fri, 08 May 2015 04:02:51 -0700 http://machinemachine.net/portfolio/paper-at-theorizing-the-web-synthetic-subjects/
<![CDATA[Resolution Disputes: A Conversation Between Rosa Menkman and Daniel Rourke]]> http://www.furtherfield.org/features/interviews/resolution-disputes-conversation-between-rosa-menkman-and-daniel-rourke

In the lead-up to her solo show, institutions of Resolution Disputes [iRD], at Transfer Gallery, Brooklyn, I caught up with Rosa Menkman over two gallons of home-brewed coffee. We talked about what the show might become, discussing a series of alternate resolutions and realities that exist parallel to our daily modes of perception. iRD is open to visitors on Saturdays at Transfer Gallery until April 18th, and will also function as host to my and Morehshin Allahyari’s 3D Additivist Manifesto, on Thursday April 16th.

Rosa Menkman: The upcoming exhibition at Transfer is an illustration of my practice-based PhD research on resolutions. It will be called ‘institutions of Resolution Disputes’, in short iRD, and will be about the liminal, alternative modes of data or information representation that are obfuscated by technological conventions. The title is a bit wonky, as I wish for it to reflect the kind of ambiguity that invokes curiosity. In any case, I always feel that every person, at least once in their grown-up life, wants to start an institution. There are a few of those moments in life, like “Now I am tired of the school system, I want to start my own school!” and “Now I am ready to become an architect!”, so this is my dream after wanting to become an architect.

Daniel Rourke: To establish your own institution?

RM: First of all, I am multiplexing the term institution here. ‘institutions’ and the whole setting of the iRD do mimic a (white box) institute; however, the iRD does not just stand for a formal organization that you can walk into. The institutions also revisit a slightly more compound framework that hails from the late 1970s, formulated by Joseph Goguen and Rod Burstall, who dealt with the growing complexities at stake when connecting different logical systems (such as databases and programming languages) within computer science. A main result of these non-logical institutions is that different logical systems can be ‘glued’ together at the ‘substrata levels’, the illogical frameworks through which computation also takes place. Secondly, while the term ‘resolution’ generally refers simply to a standard (measurement) embedded in the technological domain, I believe that a resolution indeed functions as a settlement (solution), but at the same time exists as a space of compromise between different actors (languages, objects, materialities) who dispute their stakes (frame rate, number of pixels and colors, etc.), following rules (protocols) within the ever-growing digital territories. So to answer your question: maybe, in a way, the iRD is sort of an anti-protocological institute, or an institute for anti-utopic, obfuscated or dysfunctional resolutions.

DR: It makes me think of Donna Haraway’s Manifesto for Cyborgs, and especially a line that has been echoing around my head recently:

“No objects, spaces, or bodies are sacred in themselves; any component can be interfaced with any other if the proper standard, the proper code, can be constructed for processing signals in a common language.”

By using the terms ‘obfuscation’ and ‘dysfunction’ you are invoking a will – perhaps on your part, but also on the part of the resolutions themselves – to be recognised. I love that gesture. I can hear the objects in iRD speaking out; making themselves heard, perhaps for the first time. In The 3D Additivist Manifesto we set out to imagine what the existence of Haraway’s ‘common language’ might mean for the unrealised, “the powerless to be born.” Can I take it that your institute has a similar aim in mind? A place for the ‘otherwise’ to be empowered, or at least to be recognised?

RM: The iRD indeed kind of functions as a stage for non-protocological resolutions, or radical digital materialism. I always feel like I should say here that, generally, I am not against function or efficiency. These are good qualities; they make the world move forward. On the other hand, I do believe that there is a covert, nepotist cartel of protocols that governs the flows and resolutions of data and information just for the sake of functionality and efficiency. The sole aim of this cartel is to uphold the dogma of modern computation, which is about making actors function together (resonate) as efficiently as possible, tweaking resources to maximum capacity, without bottlenecks, clicks, hiccups or cuts. But this dogma also obfuscates a compromise that we never question. And this is where my problem lies: efficiency and functionality are shaping our objects. Any of these actors could also operate under lower, worse or just different resolutions. Yet we have not been taught to see, think or question any of these resolutions. They are obfuscated and we are blind to them. I want to be able to at least entertain the option of round video (strip video from its interface!), to write inside non-quadrilateral, modular text editors (no more linear reading!), or to listen to (sonify) my rainbows (gradients). Right now the protocols in place simply do not make this possible, or even worse, they have blocked these functionalities. There is a whole alternate universe of computational objects, of ways our data would look or be used, if the protocols and their resolutions had been tweaked differently. The iRD reflects on this and searches, if you will, for a computation of many dimensions.

DR: Meaning that a desktop document could have its corners folded back, and odd, non-standard tessellations would be possible, with overlapping and intersecting work spaces?

RM: Yes! Exactly! Right now, in the field of imagery, all compressions are quadrilateral, ecology-dependent, standard solutions (compromises), following an equation in which data flows are plotted against actors that deal with the efficiency/functionality duality in storage, processing and transmission. I am interested in creating circles, pentagons and other more organic manifolds! If we did this, the whole machine would work differently. We could create modular and syphoning relationships between files, and, just as in Jon Satrom’s 2011 QTzrk installation, video would have multiple timelines and soundtracks; it could even contain some form of layer-space! DR: So the iRD is also a place for some of those alternate ‘solutions’ that are in dispute? RM: Absolutely. However, while I am not a programmer, I also don’t believe that imagining new resolutions means abandoning all existing resolutions and their inherent artifacts. History and ecology play a big role in the construction of a resolution, which is why I will also host some of my favorite classic solutions and their inherent (normally obfuscated) artifacts at the iRD, such as scan lines, DCT blocks, and JPEG2000 wavelets.

The iRD could easily function as a Wunderkammer for artifacts that already exist within our current resolutions. But to me this would be a needless move towards the style of the Evil Media Distribution Center, created by YoHa (Matsuko Yokokoji and Graham Harwood) for the 2013 Transmediale. I love to visit curiosity cabinets, but at the same time these places are kind of dead, celebrating objects that are often shielded behind glass (or plastic). I can imagine the man responsible for such a collection. There he sits, in the corner, smoking a pipe, looking over his conquests. But this kind of collection does not activate anything! It’s just one’s own private boutique collection of evil! For a dispute to take place we need action! Objects need to have – or be given – a voice! DR: …and the alternate possible resolutions can be played out, can be realised, without solidifying them as symbols of something dead and forgotten. RM: Right! It would be easy and pretty to have those objects in a Wunderkammer type of display, or as Readymades in a Boîte-en-valise, but it just feels so sad. That would not be zombie-like but dead-dead. A static capture of hopelessness. DR: The Wunderkammer had a resurgence a few years ago. Lots of artists used the form as a curatorial paradigm, allowing them to enact their practice as artist and curator. A response, perhaps, to the web, the internet, and the archive. Aggregated objects, documents and other forms placed together to create essayistic exhibitions. RM: I feel that, right now, this could be an easy way out. It would be a great way out; however, as I said, I feel the need to do something else, something more active. I will smoke that cigar some other day.

DR: So you wouldn’t want to consider the whole of Transfer Gallery as a Wunderkammer that you were working inside of? RM: It is one possibility, but it is not my favorite. I would rather make works against the established resolutions, works that are built to break out of a pre-existing mediatic flow. Works that were built to go beyond a specific conventional use. For example, I recently did this exhibition in the Netherlands where I got to install a really big wallpaper, which I think gained me a new, alternative perspective on digital materiality. I glitched a JPEG and zoomed in on its DCT blocks and it was sooo beautiful, but also so scalable and pokable. It became an alternative level of real to me, somehow. DR: Does it tessellate and repeat, like conventional wallpaper? RM: It does repeat in places. I would do it completely differently if I did it again. Actually, for the iRD I am considering zooming into the JPEG2000 wavelets. I thought it would be interesting to make a psychedelic installation like this. It’s like somebody vomited onto the wall.

DR: [laughs] It does look organic, like bacteria trying to organise. RM: Yeah. It really feels like something that has its own agency somehow.

DR: That’s the thing about JPEG2000 – and the only reason I know about that format, by the way, is because of your Vernacular of File Formats – the idea that they had to come up with a non-regular block shape for the image format that didn’t interfere with the artifacts in the bones and bodies that were being imaged. It feels more organic because of that. It doesn’t look like what you expect an image format to look like; it looks like what I expect life to look like, close up. RM: It looks like ‘Game of Life’. DR: Yes! Like Game of Life. And I assume that now they don’t need to use JPEG2000, because the imaging resolution is high enough on the machines to supersede bone artifacts. I love that. I love the effect caused when you’ve blown it up here. It looks wonderful. What is the original source for this? RM: I would blow this image [the one from A Vernacular of File Formats] up to hell. Blow it up until there is no pixel anymore. It shouldn’t be too cute. These structures are built to be bigger. Have you seen the Glitch Timond (2014)? The work itself is about glitches that have gained a folkloric meaning over time; these artifacts now refer to hackers, ghosts or AI. They are hung in the shape of a diamond. The images themselves are not square, and I can install them on top of the wallpaper somehow, at different depths. Maybe I could expand on that piece by putting broken-shaped photos and shadows flying around. It could be beautiful like that.

DR: It makes me think of the spatiality of the gallery. So that the audience would feel like they were inside a broken codec or something. Inside the actual coding mechanism of the image, rather than the standardised image at the point of its visual resolution. RM: Oh! And I want to have a smoke machine! There should be something that breaks up vision and then reveals something. DR: I like that as a metaphor for how the gallery functions as well. There are heaps of curatorial standards, like placing works at line of sight, or asking the audience to travel through the space in a particular order and mode of viewing. The gallery space itself is already limited and constructed through a huge, long history of standardisations: external influences of fashion and tradition, and others enforced by the standards of the printing press, or the screen, etc. So how do you make it so that when an audience walks into the gallery they feel as though they are not in a normal, euclidean space anymore? Like they have gone outside normal space? RM: That’s what I want! Disintegrate the architecture. But now I am like, “Yo guys, I want to dream, and I want it to be real in three weeks…” DR: “Hey guys, I want to break your reality!” [laughs] RM: One step is in place. Do you remember Ryan Maguire, who is responsible for The Ghost in the MP3? His research is about MP3 compression, and basically about what sounds are cut away by the compression algorithm. Simply put, it shows what sounds the MP3 compression normally cuts out as irrelevant – in a way it inverses the compression and puts the ‘irrelevant’ or deleted data on display. I asked him to rework the soundtrack to ‘Beyond Resolution’, one of the two video works of the iRD, which is accompanied by my remix of professional grin by Knalpot, and Ryan said yes! And so it was done! Super exciting. DR: Yes. I thought that was a fantastic project. I love that as a proposition too… What would the equivalent of that form of ghosting be in terms of these alternate, disputed resolutions? What’s the remainder? I don’t understand technical formats as clearly as you do, so abstract things like ‘the ghost’ and ‘the remainder’ are my way into understanding them. An abstract way in to a technical concept. So what is the metaphoric equivalent of that remainder in your work? For instance, I think it depends on what this was originally an image of. I think that is important. RM: The previous image of JPEG2000 does not deal with the question of lost information. I think what you are after is an inversed Alvin Lucier ‘I Am Sitting in a Room’ experiment, one that only shows the “generation loss” (instead of the generation left over, which is what we usually get to see or hear in art projects). I think that would be a reasonable equivalent to Ryan Maguire’s MP3 compression work. Or maybe supraconductivity. I can struggle with this for… for at least two more days. In any case I want the iRD to have a soundtrack. Actually, I would like there to be a spatial soundtrack: the ghost soundtrack in the room, and the original available only on a wifi access point. DR: I’m really excited by that idea of ghostly presence and absence, you know. In terms of spatiality, scan lines, euclidean space… RM: It’s a whole bundle of things! [laughs] “Come on scan lines, come to the institutions, swim with the ghosts!” DR: It makes me think of cheesy things you get in a children’s museum.
Those illusion rooms that look normal through a little window, but when you go into them they are slanted in a certain way, so that a child can look bigger than an adult through the window frame. You know what I mean? They play with perspective in a really simple way; it’s all about the framing mechanism, the way the audience’s view has been controlled, regulated and perverted. RM: I was almost at the point where I was calling people in New York and asking, “Can you produce a huge stained glass window, in 2 weeks?” I think it would be beautiful if the Institute had its own window. I would take a photo of what you could see out of the real window, then make the resolution of that photo really crappy, create a real stained glass window from it, and install that in the gallery in its original place. If I have time one day I would love to do that, working with real craftspeople. I think that in the future the iRD might have a window through which we interface the outside. Every group of people that shares the same ideas and perspectives on obfuscation needs to have a secret handshake. So that is what I am actually working on right now. Ha, you didn’t see that coming? [Laughs] DR: [Laughs] No… that’s a different angle. RM: I want people to have a patch! A secret patch. You remember Trevor Paglen’s book on the symbology of military patches?

DR: Oh yeah. Where he tries to decode the military patches? Yes, I love that. RM: Yeah, I don’t think the world will ever have enough patches. They are such an icon for secret handshakes. I have been playing around with this DCT image. I want to use it as a key to the institutions, which basically are a manifesto of the reasoning behind this whole exhibition, encrypted in a macroblock font (I embedded an image of Institution 1 earlier). There was one of Paglen’s patches that really stood out for me: the black-on-black one. The iRD patch should be inspired by that.

DR: Hito Steyerl’s work How Not to be Seen: A Fucking Didactic Educational .MOV File centres on the grid used by the military to calibrate their satellites from space. The DCT structure looks a lot like that, but I know the DCT is not about calibration. It contains all the shapes necessary to compose any image? RM: If you look up close at a badly compressed JPEG, you will notice the image consists of macroblocks. A macroblock is a block organization, usually consisting of 8×8 pixels, that possesses color (chrominance) and light (luminance) values embedded via the DCT (discrete cosine transform). Basically, all the JPEGs you have ever seen are built out of combinations of this finite set of 64 block patterns. Considering that JPEGs make up the vast majority of images we encounter on a daily basis, I think it is pretty amazing how simple this part of the JPEG compression really is. But the patch should of course not just be square. Do you know the TV series Battlestar Galactica, where they have the corners cut off all their books? All the paper in that world follows this weird, octagonal shape? Or Borges’ Library and its Crimson Hexagon, which holds all knowledge. I love those randomly cryptic geometric forms… DR: It reminds me of a 1987 anime film, Wings of Honneamise, that had a really wonderfully designed world. Everything is different, from paper sizes and shapes through to their cutlery. Really detailed design from the ground up, all the standards and traditions. RM: Like this Minecraft book too. The Blockopedia. DR: Oh that’s great. I love the Minecraft style and the mythos that has arisen around it. RM: So Minecraft and Borges follow a six-corner resolution, and Battlestar paper has eight corners… Discrepancy! I want to reference them all! DR: So these will go into the badges? RM: I want to have a black-on-black embroidered patch with corners. Don’t you think this would be so pretty? This black on black. I want to drop a reference to 1984, too, Orwell or Apple; the decoder can decide. These kinds of secret, underground references, I like those. DR: A crypto exhibition. RM: It’s so hot right now (and with hot I do not mean cool). Since the 90s musicians have encrypted or transcoded things in their sounds, from Aphex Twin, to Goodiepal, and now TCF, who allegedly encrypted an image from the police riots in Athens into one of his songs. However, he is a young Scandinavian musician, so that makes me wonder if the crypto design in this case is confusingly non-political. Either way, I want to rebel against this apparent new-found hotness of crypto-everything, which is why I made Tacit:Blue.
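A minimal sketch of the macroblock arithmetic RM describes, assuming Python with numpy and scipy (the block values are arbitrary, and no claim is made about any particular JPEG encoder). Every 8×8 block is re-expressed as 64 weights over fixed cosine patterns, and discarding most of those weights is the compromise a resolution normally hides:

```python
import numpy as np
from scipy.fft import dctn, idctn

# One arbitrary 8x8 luminance block (0-255), standing in for a JPEG macroblock.
block = np.arange(64, dtype=float).reshape(8, 8)

# Forward DCT: the block becomes 64 weights over fixed cosine basis patterns.
coeffs = dctn(block, norm="ortho")

# Crude 'quantisation': keep only the 8 strongest weights. Real JPEG divides
# by a quantisation table instead, but the compromise is the same in kind.
threshold = np.sort(np.abs(coeffs), axis=None)[-8]
approx = idctn(np.where(np.abs(coeffs) >= threshold, coeffs, 0), norm="ortho")

# The reconstruction error is what the resolution quietly discards.
print(float(np.abs(block - approx).max()))
```

Quantise harder and the cosine patterns themselves start to show: the familiar blocky JPEG glitch.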

Tacit:Blue uses a very basic form of encryption. It’s archaic, dumb and decommissioned. Every flash shows the next line of my ‘secret message’, encrypted in masonic pigpen. When it flickers it gives a little piece of the message, which really is just me ranting about secrecy. So if someone is interested in my opinion, they can decode that.

Actually, the technology behind the video is much more interesting. Do you know the NovaDrone? It’s a small AV synthesizer designed by Casper Electronics. The flicker frequency of this military RGB LED on the top of the board can be altered by turning the RGB oscillators. When I come close to the LED with the lens of my iPhone, the frequencies of the LED and the iPhone camera do not sync up. What happens is a rolling shutter effect. The camera has to interpret the input, and something is gone, lost in translation. In fact, a Resolutional Dispute takes place right there. DR: So the dispute happens because the frame rate of the camera conflicts with the flicker of the LED? RM: And the sound is the actual sound of the electronics. In Tacit:Blue I do not use the NovaDrone in a ‘clean’ way; I am actually misusing it (if there is such a thing when it comes to a device of dispute). Some of the sounds and disruptions of flow are created in the patch bay, which is where you can patch the LFOs, etc. Anyway, when you disconnect the patch it flickers, but I never take it out fully, so it creates this classic, noisy electric effect. What do you think about the text? Do you think this works? I like this masonic pigpen; it’s a very simple, nostalgic old quiff. DR: It reminds me of the title sequence for Alien. Dave Addey did a close visual, sci-fi etymological analysis of the typography in Alien. It went viral online recently. Did you see that?

RM: No! DR: It is fantastic. Everything from the title sequence to the buttons on the control panel in the background. Full of amazing insights.

RM: Wow, inspiring!

So with any cypher you also need a key, which is why I named the video Tacit:Blue, a reference to the old Northrop Tacit Blue stealth surveillance aircraft. The aircraft was used to develop techniques against passive radar detection, but has been decommissioned now, just like the masonic pigpen encryption. DR: This reminds me of Eyal Weizman. He has written a lot on the Israeli/Palestinian conflict as a spatial phenomenon. So we don’t think about territory merely as a series of lines drawn on a globe anymore, but as a stack, including everything from airspace all the way down beneath the ground, where waste, gas and water are distributed. The mode by which water is delivered underground often cuts across conflicted territories on the surface. A stacked vision of territory brings into question the very notion of a ‘conflict’ and a ‘resolution’. I recently saw him give a lecture on the Forensic Architecture project, which engages in disputes levelled against US military activities. Military drones are now so advanced that they can target a missile through the roof of a house and have it plunge several floors before it explodes. It means that individual people can be targeted on a particular floor. The drone strike leaves a mark in the roof which is – and this is Weizman’s terminology – ‘beneath the threshold of detectability’. And that threshold also happens to be the size of a human body: about 1 metre square. Military satellites have a pixel size that effectively translates to 1 metre square at ground level. So to be invisible, or technically undetectable, a strike needs only to fall within a single pixel of a satellite imaging system. These drone strikes are designed to work beneath that threshold. In terms of what you are talking about in Trevor Paglen’s work, and the Northrop Tacit Blue, those technologies were designed to exist beneath, or parallel to, optic thresholds, but now those thresholds are not optic as much as they are about digital standards and resolution densities. So that shares the same space as the codecs and file formats you are interested in. Your patch seems to bring that together; the analogue pixel calibration that Steyerl refers to is also part of that history. So I wonder whether there are images that cannot possibly be resolved out of DCT blocks. You know what I mean? I think your work asks that question. What images, shapes, and objects exist that are not possible to construct out of this grid? What realities are outside of the threshold of these blocks to resolve? It may even be the case that we are not capable of imagining such things, because of course these blocks have been formed in conjunction with the human visual system. The image is always already a compromise between the human perceptual limit and a separately defined technical limit. RM: Yes, well, I can imagine that vector graphics, or mesh-based graphics where a line is not just a connection between two points but also carries a value, could be what you are after. But I am not sure. At some point I thought that people entering the iRD could pay a couple of dollars for one of these patches, but if they don’t put the money down, then they would be obliged to go into the exhibition wearing earplugs.
RM: [Laughs] And I was thinking, well, there should be a divide between people. To realise that what you see is just one threshold that has been lifted for only a few. There are always thresholds, you know. DR: Ways to invite the audience into the spaces and thresholds that are beneath the zones of resolutional detectability? RM: Or maybe just to show the mechanics behind objects and thresholds. DR: Absolutely. So to go back to your Tacit:Blue video, in regards to the font, I like the aesthetic, but I wonder whether you could play with that zone of detectability a little more. You could have the video display at a frequency that is hard for people to concentrate on, for instance, and then put the cryptographic message at a different frequency. Having zones that do not match up, so that different elements of the work cut through different disputed spaces. Much harder to detect. And more subliminal, because video adheres to other sets of standards and processes beyond scan lines; the conflict between those standards opens up another space of possibilities. It makes me think about Takeshi Murata’s Untitled (Pink Dot). I love that work because it uses datamoshing to question more about video codecs than just I- and P-frames. That’s what sets this work apart, for me, from other datamoshed works. He also plays with layers and post-production in the way the pink dot is realised. As it unfolds you see the pink dot as a layer behind the Rambo footage, and then it gets datamoshed into the footage, and then it is a layer in front of it, and then the datamosh tears into it and the dot becomes part of the Rambo miasma, and then the dot comes back as a surface again. So all the time he is playing with the layering of the piece, and the framing is not just about one moment to the next; it also exposes something about Murata’s super slick production process. He must have datamoshed parts of the video, and then post-produced the dot onto the surface of that, and then exported that and datamoshed that, and then fed it back into the studio again to add more layers. So it is not one video being datamoshed, but a practice unfolding, and the pink dot remains a kind of standard that runs through the whole piece, resonating in the soundtrack, and pushing into all elements of the image. The work is spatialised and temporalised in a really interesting way, because of how Murata uses datamoshing and post-production to question frames and layers by ‘glitching’ between those formal elements. And as a viewer of Pink Dot, your perception is founded by those slips between the spatial surface and the temporal layers. RM: Yeah, wow. I never looked at that work in terms of layers of editing. The vectors of these blocks that smear over the video, the movement of those macroblocks, which is what this video technologically is about, is also about time and editing. So Murata effectively emulates the datamosh technique back into the editing of the work before and after the actual datamosh. That is genius!
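A minimal sketch of the I-frame/P-frame logic that datamoshing abuses, written in numpy rather than a real codec (the random arrays stand in for frames, and no claim is made about Murata’s actual toolchain). P-frames store only a change relative to the previous frame, so decoding one clip’s deltas on top of another clip’s keyframe produces the characteristic smear:

```python
import numpy as np

def to_deltas(frames):
    # P-frame stand-ins: each entry stores only the change from the previous frame.
    return [b - a for a, b in zip(frames, frames[1:])]

def decode(keyframe, deltas):
    # Rebuild a clip from one I-frame (keyframe) plus its chain of deltas.
    clip = [keyframe]
    for d in deltas:
        clip.append(clip[-1] + d)
    return clip

rng = np.random.default_rng(0)
clip_a = [rng.random((8, 8)) for _ in range(4)]  # stand-in for the Rambo footage
clip_b = [rng.random((8, 8)) for _ in range(4)]  # stand-in for the pink dot layer

# 'Datamosh': decode clip B's motion on top of clip A's keyframe, so B's
# movement tears through A's image instead of its own.
moshed = decode(clip_a[0], to_deltas(clip_b))
print(len(moshed), moshed[-1].shape)  # 4 frames of A/B smear
```

Repeating the move across several generations, as the production process described above suggests, layers one smear into the source of the next.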

Working with Morehshin Allahyari on The 3D Additivist Manifesto has brought a lot of these processes into play for me. The compressed labour behind a work can often get lost, because a final digital video is just a surface, just a set of I- and P-frames. The way Murata uses datamoshing calls that into play. It brings back some of the temporal depth. Additivism is also about calling those processes and conflicts to account, in the move between digital and material forms. Oil is a compressed form of time, and that time and matter is extruded into plastic, and that plastic has other modes of labour compressed into it, and the layers of time and space are built on top of one another constantly – like the layers of a 3D print. When we rendered our Manifesto video we did it on computers plugged into aging electricity infrastructures that run on burnt coal and oil. Burning off one form of physical compressed time to compress another set of times and labours into a ‘digital work’. RM: But you can feel that there is more to that video than its surface! If I remember correctly, you and Morehshin wrote an open invitation to digital artists to send in their leftover 3D objects. So every object in that dark gooey ocean in The 3D Additivist Manifesto actually represents a piece of artistic digital garbage. It’s like a digital emulation of the North Pacific Gyre, which you also talked about in your lecture at Goldsmiths, but then solely consisting of Ready-Made art trash.

The actual scale and form of the Gyre is hard to grasp; it seems to be unimaginable even to the people devoting their research to it; it’s beyond resolution. Which is why it is still such an under-acknowledged topic. We don’t really want to know what the Gyre looks or feels like; it’s just like the clutter inside my desktop folder inside my desktop folder, inside the desktop folder. It represents an amalgamation of histories that has moved further away from us over time, and that we don’t necessarily like to revisit, or realise that we are responsible for. I think The 3D Additivist Manifesto captures that resemblance between the way we handle our digital detritus and our physical garbage in a wonderfully grim manner.

DR: I’m glad you sense the grimness of that image. And yes, as well as sourcing objects from friends and collaborators we also scraped a lot from online 3D object repositories. So the gyre is full of Ready-Mades divorced from their conditions of creation, use, or meaning. Like any discarded plastic bottle floating out in the middle of the Pacific Ocean. Eventually Additivist technologies could interface all aspects of material reality, from nanoparticles to proprietary components, all the way through to DNA, bespoke drugs, and forms of life somewhere between the biological and the synthetic. We hope that our call to submit to The 3D Additivist Cookbook will provoke what you term ‘disputes’. Objects, software, texts and blueprints that gesture to the possibility of new political and ontological realities. It sounds far-fetched, but we need that kind of thinking. Alternate possibilities often get lost in a particular moment of resolution. A single moment of reception. But your exhibition points to the things beyond our recognition. Or perhaps more importantly, it points to the things we have refused to recognise. So, from inside the iRD, technical ‘literacy’ might be considered a limit, not a strength.

RM: Often the densities of the works we create, in terms of concept, but also collage, technology and source materials, move quite far away or even beyond a fold. I suppose that’s why we make our work pretty: to draw in the people that are not technically literate or have no background knowledge. And then perhaps later they wonder about the technical aspects and the meaning behind the composition of the work and want to learn more. To me, the process of creating, but also seeing, an interesting digital art work often feels like swimming inside an abyss of increments.

DR: What is that?

RM: I made that up. An abyss is something that goes on and on and on. Modern lines used to go on, postmodern lines are broken up as they go on. That’s how I feel we work on our computers; it’s a metaphor for scanlines.

DR: In Euclidean space two parallel lines will go on forever and not meet. But on the surface of a globe, and other, non-Euclidean spaces, those lines can be made to converge or diverge. *

RM: I have been trying to read up on my Euclidean geometry.

DR: And I am thinking now about Flatland again, A Romance of Many Dimensions.

RM: Yeah, it’s funny that in the end, it is all about Flatland. That’s where this all started, so that’s where it has to end; Flatland seems like an eternal ouroboros inside of digital art.

DR: It makes me think too about holographic theory. You can encode on a 2D surface the information necessary to construct a 3D image. And there are theories that suggest that a black hole has holographic properties.
The event horizon of a black hole can be thought of as a flat surface, and contains all the information necessary to construct the black hole. And because a black hole is a singularity, and the universe can be considered as a singularity too – in time and space – some theories suggest that the universe is a hologram encoded on its outer surface. So the future state of the universe encodes all the prior states. Or something like that.

RM: I once went to a lecture by Raphael Bousso, a professor in the Department of Physics at UC Berkeley. He was talking about black holes; it was super intense. I was sitting on the edge of my seat and nearly felt like I was riding a dark star right towards my own event horizon.

DR: [laughs] Absolutely. I suppose I came to understand art and theory through things I knew before, which is pop science and science fiction. I tend to read everything through those things. Those are my starting points. But yes, holograms are super interesting.

RM: I want to be careful not to go into the wunderkammer, because if there are too many things, then each one of them turns into a fetish object; a gimmick.

DR: There was a lot of talk a few years ago about holographic storage, because basically all our storage – CDs, DVDs, hard drive platters, SSD drives – is 2D. All the information spinning on your screen right now, all those rich polygons or whatever, it all begins from data stored on a two-dimensional surface. But you could have a holographic storage medium with three dimensions. They have built these things in the laboratory. There goes my pop science knowledge again.

RM: When I was at Transmediale last year, the Internet Yami-ichi (Internet Black Market) was on. There I sold some custom videos for self-cracked LCD screens.

DR: Broken on purpose?

RM: Yes, and you’d be allowed to touch it so the screen would go multidimensional. Liquid crystals are such a beautiful technology.

DR: Yes. And they are a 3D image medium. But they don’t get used much anymore, right? LEDs are the main image format.

RM: People miss LCDs! I saw a beautiful recorded talk from the Torque event, Esther Leslie talking about Walter Benjamin, who writes about snowflakes resembling white noise. Liquid crystals and flatness and flatland. I want to thank you Dan, just to talk through this stuff has been really helpful. You have no idea. Thank you so much!

DR: Putting ideas in words is always helpful.

RM: I never do that in preparation: talk about things I am still working on, semi-completed. It’s scary to open up the book of possibilities. When you say things out loud you somehow commit to them. Trevor Paglen and Jon Satrom are huge inspirations; I would like to make work inspired by them, and that is a scary thing to say out loud.

DR: That’s good. We don’t work in a vacuum. Trevor Paglen’s stuff is often about photography as a mode of non-resolved vision. I think that does fit with your work here, but you have the understanding and wherewithal to transform these concerns into work about digital media. Maybe you need to build a tiny model of the gallery and create it all in miniature.

RM: That’s what Alma Alloro said!

DR: I think it would be really helpful. You don’t have to do it in meatspace. You could render a version of the gallery space with software.

RM: Haha, great idea, but that would take too much time. iRD needs to open to the public in 3 weeks!

* DR originally stated here that a globe was a Euclidean space. This was corrected, with thanks to Matthew Austin.

]]>
Mon, 13 Apr 2015 05:50:53 -0700 http://www.furtherfield.org/features/interviews/resolution-disputes-conversation-between-rosa-menkman-and-daniel-rourke
<![CDATA[transmediale 2014 afterglow keynote -- The Black Stack]]> http://www.youtube.com/watch?v=3c3jXPBG-NY&feature=youtube_gdata

Keynote with Ryan Bishop (Winchester School of Art), Benjamin H. Bratton and Metahaven, at Haus der Kulturen der Welt, Berlin, 31.1.2014

Conference stream An Afterglow of The Mediatic

Planetary computation and its geographies can be modeled as a coherent platform, a vertical software/hardware "stack". In his forthcoming book, "The Stack: On Software and Sovereignty", Benjamin Bratton explores this topography as a geopolitical framework, one defined by accidents and contradictions as much as by inventions and efficiencies. The design group Metahaven's forthcoming publication, "Black Transparency", focuses on the political and aesthetic regimes of contemporary transparency, and their coexistence with networks, institutions, and various (dis)organised groups. In the latest of their ongoing collaborative discussions, Metahaven and Bratton will take turns working through the stack's six layers—Earth, Cloud, City, Address, Interface, and User—offering proposals on the future of each.

]]>
Tue, 10 Feb 2015 04:22:56 -0800 http://www.youtube.com/watch?v=3c3jXPBG-NY&feature=youtube_gdata
<![CDATA[Data as Culture]]> https://furtherfield.org/features/reviews/data-culture#new_tab

For my latest Furtherfield review I wallowed in curator Shiri Shalmy’s ongoing project Data as Culture, examining works by Paolo Cirio and James Bridle that deal explicitly with the concatenation of data. What happens when society is governed by a regime of data about data, increasingly divorced from the symbolic?

In a work commissioned by curator Shiri Shalmy for her ongoing project Data as Culture, artist Paolo Cirio confronts the prerequisites of art in the era of the user. Your Fingerprints on the Artwork are the Artwork Itself [YFOTAATAI] hijacks loopholes, glitches and security flaws in the infrastructure of the world wide web in order to render every passive website user as pure material. In an essay published on a backdrop of recombined RAW tracking data, Cirio states:

Data is the raw material of a new industrial, cultural and artistic revolution. It is a powerful substance, yet when displayed as a raw stream of digital material, represented and organised for computational interpretation only, it is mostly inaccessible and incomprehensible. In fact, there isn’t any meaning or value in data per se. It is human activity that gives sense to it. It can be useful, aesthetic or informative, yet it will always be subject to our perception, interpretation and use. It is the duty of the contemporary artist to explore what it really looks like and how it can be altered beyond the common conception.

Even the nondescript use patterns of the Data as Culture website can be figured as an artwork, Cirio seems to be saying, but the art of the work requires an engagement that contradicts the passivity of a mere ‘user’. YFOTAATAI is a perfect accompaniment to Shiri Shalmy’s curatorial project, generating questions around security, value and production before any link has been clicked or artwork entertained.

Feeling particularly receptive I click on James Bridle’s artwork/website A Quiet Disposition and ponder on the first hyperlink that surfaces: the link reads “Keanu Reeves”:

“Keanu Reeves” is the name of a person known to the system. Keanu Reeves has been encountered once by the system and is closely associated with Toronto, Enter The Dragon, The Matrix, Surfer and Spacey Dentist.

In 1999 viewers were offered a visual metaphor of ‘The Matrix’: a stream of flickering green signifiers ebbing, like some half-living fungus of binary digits, beneath our apparently solid, Technicolor world. James Bridle‘s expansive work A Quiet Disposition [AQD] could be considered an antidote to this millennial cliché, founded on the principle that we are in fact ruled by a third, much more slippery realm of information, superior to both the Technicolor and the digital fungus. Our socio-political, geo-economic, rubber bullet, blood and guts world, as Bridle envisages it, relies on data about data.

Read the rest of this review at Furtherfield.org

]]>
Wed, 01 Oct 2014 07:37:48 -0700 https://furtherfield.org/features/reviews/data-culture#new_tab
<![CDATA[Synthetic Assistants]]> http://www.grafik.net/category/screenshot/synthetic-assistants

I wrote a short piece for Grafik Magazine’s Screenshot feature: Moravec’s Paradox states that ‘low-level’ sensorimotor skills require far more computational resources than ‘high-level’ abstract reasoning. In general terms, this translates into the doctrine that computers are very good at solving some types of problems, humans at others. Picking out the face of a loved one in a packed crowd and walking over to embrace them is laughably easy for a human to do, but not a robot. Alternatively, calculating the square root of 12,764,339 takes a cheap pocket calculator a few nanoseconds. As for a human? Well, try it out for yourself *

Sustained by these principles, a new breed of machine/human hybrid systems has begun infecting our social and economic networks. Rather than imitate tasks that humans can do effortlessly, these programs are built to work with us, allowing the distinct strengths of human and ‘artificial’ intelligences to coalesce. One particularly intriguing example of this is the reCaptcha password system. Maintained by Google, reCaptcha is employed hundreds of millions of times every day, according to Google’s own promotional blurb, to ‘stop spam, read books’. You yourself — perhaps without knowing it — have taken part in a vast online act of computation, donating a short burst of your highly evolved pattern recognition skill to Google’s project of digitising every one of the world’s printed books.

The reCaptcha system is doubly fascinating with regard to Moravec’s Paradox because it marks the meeting-point between low-level and high-level computable problems. Every password is guessable given enough time and computer resources. Alternatively, the smudged word on page 286, line forty-three of the Magna Carta is incredibly difficult for a computer to recognise. If the computer fails, a different smudge with a different ‘solution’ is pulled from the database, ensuring your email account remains secure. Whilst determining whether or not you are a human, the reCaptcha software quietly hijacks your biological brain, translating the task it has been allotted to protect your data into a moment of distributed, invisible labour. The question is: who or what is using who or what, for what or whom?

Systems like reCaptcha could be hailed as the birth of a ‘world brain’: a thinking web connecting everyone on Earth into a vast meta-mind capable of incredible feats of computation. The truth, however, is both far more mundane and far more profound in its implications. A generation or two ago we envisaged the future as a place where intricate machines would carry out most menial tasks, leaving humans free to contemplate their place in the universe, embrace loved ones in crowds, and sunbathe under the depleted ozone layer. Instead, we have inherited a world where humans carry out menial tasks at the behest of machines, whilst maintaining the illusion that it is we, personally, who have benefited from each transaction. Every click and swipe of your finger is a collaboration between invisible entities — corporate, synthetic or not-even-invented yet. Next time you scan your own produce at the supermarket, track your eating and exercise habits and upload them to a corporately maintained database, follow the advice of a piece of software on which stock to sell or which car to buy, search Google for a weird string of misspelt terms, or retweet a Twitter bot, you are taking part in a vast experiment that has already evolved beyond any single person or machine’s ability to comprehend.
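The two-word mechanism described above can be sketched in a few lines. The following is a toy reconstruction under loud assumptions: the store names, function names and data are all hypothetical, and Google's real pipeline is of course far more involved. The point is only the asymmetry: one word decides whether you are human, the other quietly harvests your reading labour.

    import random

    # Hypothetical stores; nothing here is Google's actual API.
    control_words = {"ctrl_01": "magna", "ctrl_02": "carta"}   # challenge id -> known answer
    unknown_smudges = {"smudge_p286_l43": []}                  # challenge id -> harvested guesses

    def issue_challenge():
        """Pair one solved control word with one unsolved book smudge."""
        return random.choice(list(control_words)), random.choice(list(unknown_smudges))

    def verify(control_id, unknown_id, answers):
        """Only the control word decides humanity; the other answer
        is recorded as a transcription vote for the smudged word."""
        if answers[control_id].strip().lower() != control_words[control_id]:
            return False                                       # failed: serve a fresh pair
        unknown_smudges[unknown_id].append(answers[unknown_id])
        return True

    control_id, unknown_id = issue_challenge()
    answers = {control_id: control_words[control_id], unknown_id: "sealed"}
    print(verify(control_id, unknown_id, answers))             # True; one new vote recorded
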
The future of information is augmented, symbiotic, invisible and incessant. But does it belong to users? Corporations? Or semi-autonomous machines? Only you and your synthetic assistants can decide.

* The answer, according to my smartphone, is 3572.7215116770576

]]>
Thu, 28 Aug 2014 01:42:31 -0700 http://www.grafik.net/category/screenshot/synthetic-assistants
<![CDATA[Meet the Father of Digital Life]]> http://nautil.us/issue/14/mutation/meet-the-father-of-digital-life

In 1953, at the dawn of modern computing, Nils Aall Barricelli played God. Clutching a deck of playing cards in one hand and a stack of punched cards in the other, Barricelli hovered over one of the world’s earliest and most influential computers, the IAS machine, at the Institute for Advanced Study in Princeton, New Jersey. During the day the computer was used to make weather forecasting calculations; at night it was commandeered by the Los Alamos group to calculate ballistics for nuclear weaponry. Barricelli, a maverick mathematician, part Italian and part Norwegian, had finagled time on the computer to model the origins and evolution of life.

Inside a simple red brick building at the northern corner of the Institute’s wooded wilds, Barricelli ran models of evolution on a digital computer. His artificial universes, which he fed with numbers drawn from shuffled playing cards, teemed with creatures of code—morphing, mutating, melting, maintaining. He created laws that determined, independent of any foreknowledge on his part, which assemblages of binary digits lived, which died, and which adapted. As he put it in a 1961 paper, in which he speculated on the prospects and conditions for life on other planets, “The author has developed numerical organisms, with properties startlingly similar to living organisms, in the memory of a high speed computer.” For these coded critters, Barricelli became a maker of worlds.

Until his death in 1993, Barricelli floated between biological and mathematical sciences, questioning doctrine, not quite fitting in. “He was a brilliant, eccentric genius,” says George Dyson, the historian of technology and author of Darwin Among The Machines and Turing’s Cathedral, which feature Barricelli’s work. “And the thing about geniuses is that they just see things clearly that other people don’t see.”

Barricelli programmed some of the earliest computer algorithms that resemble real-life processes: a subdivision of what we now call “artificial life,” which seeks to simulate living systems—evolution, adaptation, ecology—in computers. Barricelli presented a bold challenge to the standard Darwinian model of evolution by competition by demonstrating that organisms evolved by symbiosis and cooperation.

Pixar cofounder Alvy Ray Smith says Barricelli influenced his earliest thinking about the possibilities for computer animation.

In fact, Barricelli’s projects anticipated many current avenues of research, including cellular automata, computer programs involving grids of numbers paired with local rules that can produce complicated, unpredictable behavior. His models bear striking resemblance to the one-dimensional cellular automata—life-like lattices of numerical patterns—championed by Stephen Wolfram, whose search tool Wolfram Alpha helps power the brain of Siri on the iPhone. Nonconformist biologist Craig Venter, in defending his creation of a cell with a synthetic genome—“the first self-replicating species we’ve had on the planet whose parent is a computer”—echoes Barricelli.

Barricelli’s experiments had an aesthetic side, too. Uncommonly for the time, he converted the digital 1s and 0s of the computer’s stored memory into pictorial images. Those images, and the ideas behind them, would influence computer animators in generations to come. Pixar cofounder Alvy Ray Smith, for instance, says Barricelli stirred his earliest thinking about the possibilities for computer animation, and beyond that, his philosophical muse. “What we’re really talking about here is the notion that living things are computations,” he says. “Look at how the planet works and it sure does look like a computation.”

Despite Barricelli’s pioneering experiments, barely anyone remembers him. “I have not heard of him to tell you the truth,” says Mark Bedau, professor of humanities and philosophy at Reed College and editor of the journal Artificial Life. “I probably know more about the history than most in the field and I’m not aware of him.”

Barricelli was an anomaly, a mutation in the intellectual zeitgeist, an unsung hero who has mostly languished in obscurity for the past half century. “People weren’t ready for him,” Dyson says. That a progenitor has not received much acknowledgment is a failing not unique to science. Visionaries often arrive before their time. Barricelli charted a course for the digital revolution, and history has been catching up ever since.

EVOLUTION BY THE NUMBERS: Barricelli converted his computer tallies of 1s and 0s into images. In this 1953 Barricelli print, explains NYU associate professor Alexander Galloway, the chaotic center represents mutation and disorganization. The more symmetrical fields toward the margins depict Barricelli’s evolved numerical organisms. From the Shelby White and Leon Levy Archives Center, Institute for Advanced Study, Princeton.

Barricelli was born in Rome on Jan. 24, 1912. According to Richard Goodman, a retired microbiologist who met and befriended the mathematician in the 1960s, Barricelli claimed to have invented calculus before his tenth birthday. When the young boy showed the math to his father, he learned that Newton and Leibniz had preempted him by centuries. While a student at the University of Rome, Barricelli studied mathematics and physics under Enrico Fermi, a pioneer of quantum theory and nuclear physics. A couple of years after graduating in 1936, he immigrated to Norway with his recently divorced mother and younger sister.

As World War II raged, Barricelli studied. An uncompromising oddball who teetered between madcap and mastermind, Barricelli had a habit of exclaiming “Absolut!” when he agreed with someone, or “Scandaloos!” when he found something disagreeable. His accent was infused with Scandinavian and Romantic pronunciations, making it occasionally challenging for colleagues to understand him. Goodman recalls one of his colleagues at the University of California, Los Angeles who just happened to be reading Barricelli’s papers “when the mathematician himself barged in and, without ceremony, began rattling off a stream of technical information about his work on phage genetics,” a science that studies gene mutation, replication, and expression through model viruses. Goodman’s colleague understood only fragments of the speech, but realized it pertained to what he had been reading.

“Are you familiar with the work of Nils Barricelli?” he asked.

“Barricelli! That’s me!” the mathematician cried.

Notwithstanding having submitted a 500-page dissertation on the statistical analysis of climate variation in 1946, Barricelli never completed his Ph.D. Recalling the scene in the movie Amadeus in which the Emperor of Austria commends Mozart’s performance, save for there being “too many notes,” Barricelli’s thesis committee directed him to slash the paper to a tenth of the size, or else it would not accept the work. Rather than capitulate, Barricelli forfeited the degree.

Barricelli began modeling biological phenomena on paper, but his calculations were slow and limited. He applied to study in the United States as a Fulbright fellow, where he could work with the IAS machine. As he wrote on his original travel grant submission in 1951, he sought “to perform numerical experiments by means of great calculating machines,” in order to clarify, through mathematics, “the first stages of evolution of a species.” He also wished to mingle with great minds—“to communicate with American statisticians and evolution-theorists.” By then he had published papers on statistics and genetics, and had taught Einstein’s theory of relativity. In his application photo, he sports a pyramidal moustache, hair brushed to the back of his elliptic head, and hooded, downturned eyes. At the time of his application, he was a 39-year-old assistant professor at the University of Oslo.

Although the program initially rejected him due to a visa issue, in early 1953 Barricelli arrived at the Institute for Advanced Study as a visiting member. “I hope that you will be finding Mr. Baricelli [sic] an interesting person to talk with,” wrote Ragnar Frisch, a colleague of Barricelli’s who would later win the first Nobel Prize in Economics, in a letter to John von Neumann, a mathematician at IAS, who helped devise the institute’s groundbreaking computer. “He is not very systematic always in his exposition,” Frisch continued, “but he does have interesting ideas.”

PSYCHEDELIC BARRICELLI: In this recreation of a Barricelli experiment, NYU associate professor Alexander Galloway has added color to show the gene groups more clearly. Each swatch of color signals a different organism. Borders between the color fields represent turbulence as genes bounce off and meld with others, symbolizing Barricelli’s symbiogenesis. Courtesy Alexander Galloway.

Centered above Barricelli’s first computer logbook entry at the Institute for Advanced Study, in handwritten pencil script dated March 3, 1953, is the title “Symbiogenesis problem.” This was his theory of proto-genes, virus-like organisms that teamed up to become complex organisms: first chromosomes, then cellular organs, onward to cellular organisms and, ultimately, other species. Like parasites seeking a host, these proto-genes joined together, according to Barricelli, and through their mutual aid and dependency, originated life as we know it.

Standard neo-Darwinian doctrine maintained that natural selection was the main means by which species formed. Slight variations and mutations in genes combined with competition led to gradual evolutionary change. But Barricelli disagreed. He pictured nimbler genes acting as a collective, cooperative society working together toward becoming species. Darwin’s theory, he concluded, was inadequate. “This theory does not answer our question,” he wrote in 1954, “it does not say why living organisms exist.”

Barricelli coded his numerical organisms on the IAS machine in order to prove his case. “It is very easy to fabricate or simply define entities with the ability to reproduce themselves, e.g., within the realm of arithmetic,” he wrote.

The early computer looked sort of like a mix between a loom and an internal combustion engine. Lining the middle region were 40 Williams cathode ray tubes, which served as the machine’s memory. Within each tube, a beam of electrons (the cathode ray) bombarded one end, creating a 32-by-32 grid of points, each consisting of a slight variation in electrical charge. There were five kilobytes of memory total stored in the machine. Not much by today’s standards, but back then it was an arsenal.

Barricelli saw his computer organisms as a blueprint of life—on this planet and any others.

Inside the device, Barricelli programmed steadily mutable worlds each with rows of 512 “genes,” represented by integers ranging from negative to positive 18. As the computer cycled through hundreds and thousands of generations, persistent groupings of genes would emerge, which Barricelli deemed organisms. The trick was to tweak his manmade laws of nature—“norms,” as he called them—which governed the universe and its entities just so. He had to maintain these ecosystems on the brink of pandemonium and stasis. Too much chaos and his beasts would unravel into a disorganized shamble; too little and they would homogenize. The sweet spot in the middle, however, sustained life-like processes.

Barricelli’s balancing act was not always easygoing. His first trials were riddled with pests: primitive, often single numeric genes invaded the space and gobbled their neighbors. Typically, he was only able to witness a couple of hereditary changes, or a handful at best, before the world unwound. To create lasting evolutionary processes, he needed to handicap these pests’ ability to rapidly reproduce. By the time he returned to the Institute in 1954 to begin a second round of experiments, Barricelli made some critical changes. First, he capped the proliferation of the pests to once per generation. That constraint allowed his numerical organisms enough leeway to outpace the pests. Second, he began employing different norms to different sections of his universes. That forced his numerical organisms always to adapt.
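The article gives only the outline of Barricelli's "norms", so the following is a speculative toy sketch, not a transcription of the 1953 code: the shift-and-collide rule below is one simplified reading reported by later commentators, and every detail is an assumption apart from the 512-cell rows and the gene range of -18 to 18 mentioned above.

    import random

    SIZE = 512   # cells per universe row, as in Barricelli's experiments

    def generation(row):
        """One toy generation: each nonzero gene n at cell i proposes a
        copy of itself at cell (i + n) mod SIZE; unequal genes colliding
        on the same cell annihilate, a crude stand-in for his collision norms."""
        nxt = [0] * SIZE
        for i, n in enumerate(row):
            if n == 0:
                continue
            j = (i + n) % SIZE
            if nxt[j] in (0, n):
                nxt[j] = n       # empty cell, or the same gene: the copy survives
            else:
                nxt[j] = 0       # conflict between different genes: both die
        return nxt

    # Seed a universe with random genes (0 acts as an empty cell).
    row = [random.randint(-18, 18) for _ in range(SIZE)]
    for _ in range(500):
        row = generation(row)    # persistent groupings that survive are the "organisms"

Stacking successive rows as strips of pixels, one generation per line, yields images of roughly the kind Galloway describes later in the article.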

Even in the earlier universes, Barricelli realized that mutation and natural selection alone were insufficient to account for the genesis of species. In fact, most single mutations were harmful. “The majority of the new varieties which have shown the ability to expand are a result of crossing-phenomena and not of mutations, although mutations (especially injurious mutations) have been much more frequent than hereditary changes by crossing in the experiments performed,” he wrote.

When an organism became maximally fit for an environment, the slightest variation would only weaken it. In such cases, it took at least two modifications, effected by a cross-fertilization, to give the numerical organism any chance of improvement. This indicated to Barricelli that symbioses, gene crossing, and “a primitive form of sexual reproduction,” were essential to the emergence of life.

“Barricelli immediately figured out that random mutation wasn’t the important thing; in his first experiment he figured out that the important thing was recombination and sex,” Dyson says. “He figured out right away what took other people much longer to figure out.” Indeed, Barricelli’s theory of symbiogenesis can be seen as anticipating the work of independent-thinking biologist Lynn Margulis, who in the 1960s showed that it was not necessarily genetic mutations over generations, but symbiosis, notably of bacteria, that produced new cell lineages.

Barricelli saw his computer organisms as a blueprint of life—on this planet and any others. “The question whether one type of symbio-organism is developed in the memory of a digital computer while another type is developed in a chemical laboratory or by a natural process on some planet or satellite does not add anything fundamental to this difference,” he wrote. A month after Barricelli began his experiments on the IAS machine, Crick and Watson announced the shape of DNA as a double helix. But learning about the shape of biological life didn’t put a dent in Barricelli’s conviction that he had captured the mechanics of life on a computer. Let Watson and Crick call DNA a double helix. Barricelli called it “molecule-shaped numbers.”


What buried Barricelli in obscurity is something of a mystery. “Being uncompromising in his opinions and not a team player,” says Dyson, no doubt led to Barricelli’s “isolation from the academic mainstream.” Dyson also suspects Barricelli and the indomitable Hungarian mathematician von Neumann, an influential leader at the Institute for Advanced Study, didn’t hit it off. Von Neumann appears to have ignored Barricelli. “That was sort of fatal because everybody looked to von Neumann as the grandfather of self-replicating machines.”

Ever so slowly, though, Barricelli is gaining recognition. That stems in part from another of Barricelli’s remarkable developments, certainly one of his most beautiful. He didn’t rest with creating a universe of numerical organisms; he converted his organisms into images. His computer tallies of 1s and 0s would then self-organize into visual grids of exquisite variety and texture. According to Alexander Galloway, associate professor in the department of media, culture, and communication at New York University, a finished Barricelli “image yielded a snapshot of evolutionary time.”

When Barricelli printed sections of his digitized universes, they were dazzling. To modern eyes they might look like satellite imagery of an alien geography: chaotic oceans, stratigraphic outcrops, and the contours of a single stream running down the center fold, fanning into a delta at the patchwork’s bottom. “Somebody needs to do a museum show and show this stuff because they’re outrageous,” Galloway says.

Barricelli was an uncompromising oddball who teetered between madcap and mastermind.

Today, Galloway, a member of Barricelli’s small but growing cadre of boosters, has recreated the images. Following methods described by Barricelli in one of his papers, Galloway has coded an applet using the computer language Processing to revive Barricelli’s numerical organisms—with slight variation. While Barricelli encoded his numbers as eight-unit-long proto-pixels, Galloway condensed each to a single color-coded cell. By collapsing each number into a single pixel, Galloway has been able to fit eight times as many generations in the frame. These revitalized mosaics look like psychedelic cross-sections of the fossil record. Each swatch of color represents an organism, and when one color field bumps up against another one, that’s where cross-fertilization takes place.

“You can see these kinds of points of turbulence where the one color meets another color,” Galloway says, showing off the images on a computer in his office. “That’s a point where a number would be—or a gene would be—sort of jumping from one organism to another.” Here, in other words, is artificial life—Barricelli’s symbiogenesis—frozen in amber. And cyan and lavender and teal and lime and fuchsia.

Galloway is not the only one to be struck by the beauty of Barricelli’s computer-generated digital images. As a doctoral student, Pixar cofounder Smith became familiar with Barricelli’s work while researching the history of cellular automata for his dissertation. When he came across Barricelli’s prints he was astonished. “It was remarkable to me that with such crude computing facilities in the early 50s, he was able to be making pictures,” Smith says. “I guess in a sense you can say that Barricelli got me thinking about computer animation before I thought about computer animation. I never thought about it that way, but that’s essentially what it was.”

Cyberspace now swells with Barricelli’s progeny. Self-replicating strings of arithmetic live out their days in the digital wilds, increasingly independent of our tampering. The fittest bits survive and propagate. Researchers continue to model reduced, pared-down versions of life artificially, while the real world bursts with Boolean beings. Scientists like Venter conjure synthetic organisms, assisted by computer design. Swarms of autonomous codes thrive, expire, evolve, and mutate underneath our fingertips daily. “All kinds of self-reproducing codes are out there doing things,” Dyson says. In our digital lives, we are immersed in Barricelli’s world.

]]>
Fri, 20 Jun 2014 06:08:03 -0700 http://nautil.us/issue/14/mutation/meet-the-father-of-digital-life
<![CDATA[Ecstatic Computation]]> http://vimeo.com/94023435

Ecstatic Computation is a VR ritual that explores the moment of ecstasy when thought becomes bit and electrons become ideas. I take the role of the technoshaman and manifest and guide the participant on this journey so they may experience the existential quantum relationship between humans and their computational tools. More information michaelpallison.com/projects/ecstaticcomputation 2014 ITP Thesis Documentation Special thanks to Anne-Marie Lavigne Filmed by Roy Rochlin royrochlin.com/Cast: Michael AllisonTags: Virtual Reality, oculus rift, ITP, interactive, ritual and technoshamanism

]]>
Sat, 14 Jun 2014 15:27:00 -0700 http://vimeo.com/94023435
<![CDATA[Rhizome | Prosthetic Knowledge Picks: Computational Photography]]> http://rhizome.org/editorial/2013/oct/3/prosthetic-knowledge-computational-photography/

The digital eye is a ubiquitous feature of current portable technology—webcams, DSLRs, mobile phones, tablets, even MP3 players.

]]>
Sun, 29 Dec 2013 09:42:27 -0800 http://rhizome.org/editorial/2013/oct/3/prosthetic-knowledge-computational-photography/
<![CDATA[Four Notes Towards Post-Digital Propaganda | post-digital-research]]> http://post-digital.projects.cavi.dk/?p=475

“Propaganda is called upon to solve problems created by technology, to play on maladjustments and to integrate the individual into a technological world” (Ellul xvii).

How might future research into digital culture approach a purported “post-digital” age? How might this be understood?

1.

A problem comes from the discourse of ‘the digital’ itself: a moniker which points towards units of Base-2 arbitrary configuration, impersonal architectures of code, massive extensions of modern communication and ruptures in post-modern identity. Terms are messy, and it has never been easy to establish a ‘post’ from something, when pre-discourse definitions continue to hang in the air. As Florian Cramer has articulated so well, ‘post-digital’ is something of a loose, ‘hedge your bets’ term, denoting a general tendency to criticise the digital revolution as a modern innovation (Cramer).

Perhaps it might be aligned with what some have dubbed “solutionism” (Morozov) or “computationalism” (Berry 129; Golumbia 8): the former critiques a Silicon Valley-led ideology oriented towards solving liberalised problems through efficient computerised means; the latter establishes the notion (and critique thereof) that the mind, and everything associated with it, is inherently computable. In both cases, digital technology is no longer just a business that privatises information, but the business of extending efficient, innovative logic to all corners of society and human knowledge, condemning everything else through a cultural logic of efficiency.

In fact, there is a good reason why ‘digital’ might as well be a synonym for ‘efficiency’. Before any consideration is assigned to digital media objects (i.e. platforms, operating systems, networks), consider the inception of ‘the digital’ as such: that is, information theory. Where information had been a loose, shabby, inefficient vagueness specific to each medium of communication, Claude Shannon compressed all forms of communication into a universal system with absolute mathematical precision (Shannon). Once information became digital, the conceptual leap of determined symbolic logic was set into motion, and with it, the ‘digital’ became synonymous with an ideology of effectivity. No longer would miscommunication be subject to human finitude, nor to matters of distance and time, but only to the limits of entropy and the matter of automating messages through the support of alternating ‘true’ or ‘false’ relay systems.
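Shannon's precision can be stated in a single formula: the entropy H = -Σ p(x) log2 p(x) fixes the minimum average number of binary digits needed to encode a source, whatever the medium. A minimal sketch of that calculation (an illustration of the standard definition, not anything specific to the texts cited here):

    from collections import Counter
    from math import log2

    def entropy(message):
        """Shannon entropy, in bits per symbol, of a message's empirical distribution."""
        counts = Counter(message)
        total = len(message)
        return sum((c / total) * log2(total / c) for c in counts.values())

    print(entropy("aaaaaaaa"))   # 0.0 -- perfectly predictable, nothing to transmit
    print(entropy("abababab"))   # 1.0 -- one binary digit per symbol suffices
    print(entropy("abcdabcd"))   # 2.0 -- four equiprobable symbols need two bits each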

However, it would be quite difficult to envisage any ‘post-computational’ break from such discourses – and with good reason: Shannon’s breakthrough was only systematically effective through the logic of computation. So the old missed encounter goes: Shannon presupposed Alan Turing’s mathematical idea of computation to transmit digital information, and Turing presupposed Shannon’s information theory to understand what his Universal Turing Machines were actually transmitting. The basic theories of both have not changed, but the materials affording greater processing power, extensive server infrastructure and larger storage space have simply increased the means for these ideas to proliferate, irrespective of what Turing and Shannon actually thought of them (some historians even speculate that Turing may have made the link between information and entropy two years before Bell Labs did) (Good).

Thus a ‘post-digital’ reference point might encompass the historical acknowledgment of Shannon’s digital efficiency and Turing’s logic but, by the same measure, open up a space for critical reflection on how such efficiencies have transformed not only work, life and culture but also artistic praxis and aesthetics. This is not to say that digital culture is reducibly predicated on efforts made in computer science, but instead fully acknowledges these structures and accounts for how ideologies propagate reactionary attitudes and beliefs within them, whilst restricting other alternatives which do not fit their ‘vision’. Hence, the post-digital ‘task’ set for us nowadays might consist in critiquing digital efficiency and how it has come to work against commonality, despite transforming the majority of Western infrastructure in its wake.

The purpose of these notes is to outline how computation has imparted an unwarranted effect of totalised efficiency, and to give this effect the label it deserves: propaganda. The fact that Shannon and Turing had multiple lunches together at Bell Labs in 1943, held conversations and exchanged ideas, but not detailed methods of cryptanalysis (Price & Shannon), provides a nice contextual allegory for how digital informatics strategies fail to be transparent.

But in saying this, I do not mean that companies only use digital networks for propagative means (although that happens), but that the very means of computing a real concrete function is constitutively propagative. In this sense, propaganda resembles a post-digital understanding of what it means to be integrated into an ecology of efficiency, and of how technical artefacts are literally enacted as propagative decisions. Digital information often deceives us into accepting its transparency, and into holding it to that account: yet in reality it does the complete opposite, with no given range of judgements available to distinguish manipulation from education, or persuasion from smear. It is the procedural act of interacting with someone else’s automated conceptual principles, embedding pre-determined decisions which not only generate but pre-determine one’s ability to make choices about such decisions, like propaganda.

This might mean moving from ideological definitions of false consciousness, as an epistemological limit to knowing alternatives within thought, to engaging with real programmable systems which embed such limits concretely, withholding the means to transform them. In other words, propaganda incorporates how ‘decisional structures’ structure other decisions, either conceptually or systematically.

2.

Two years before Shannon’s famous Master’s thesis, Turing published what would become the theoretical basis for computation in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” The focus of the paper was to establish the idea of computation within a formal system of logic, which when automated would solve particular mathematical problems put into function (Turing, An Application). What is not necessarily taken into account is the mathematical context of that idea: the foundations of mathematics were already precarious well before Turing outlined anything in 1936. Contra the efficiency of the digital, this is a precariousness built into computation from its very inception: the precariousness of solving all problems in mathematics.

The key word of that paper, its key focus, was the Entscheidungsproblem, or decision problem. Originating from David Hilbert’s mathematical school of formalism, ‘decision’ means something more rigorous than the sorts of decisions in daily life. It really means a ‘proof theory’: how analytic problems in number theory and geometry could be formalised, and thus efficiently solved (Hilbert 3). Solving a theorem is simply finding a provable ‘winning position’ in a game. As with Shannon, ‘decision’ is what happens when an automated system of function is constructed in such a sufficiently complex way that an algorithm can always ‘decide’ a binary, yes-or-no answer to a mathematical problem, when given an arbitrary input, in a sufficient amount of time. It does not require ingenuity, intuition or heuristic gambles, just a combination of simple consistent formal rules and a careful avoidance of contradiction.

The two key words there are ‘always’ and ‘decide’. This was the progressive end-game of twentieth-century mathematicians who, like Hilbert, sought a simple totalising conceptual system to decide every mathematical problem and work towards absolute knowledge. All Turing had to do was make explicit Hilbert’s implicit computational treatment of formal rules, manipulate symbol strings and automate them using an ’effective’ or “systematic method” (Turing, Solvable and Unsolvable Problems 584) encoded into a machine. This is what Turing’s thesis meant (discovered independently of Alonzo Church’s equivalent thesis (Church)): any function calculable by a systematic method can be computed by a Turing machine (Turing, An Application), or in Robin Gandy’s words, “[e]very effectively calculable function is a computable function” (Gandy).

Thus effective procedures decide problems, and they resolve puzzles providing winning positions (like theorems) in the game of functional rules and formal symbols. In Turing’s words, “a systematic procedure is just a puzzle in which there is never more than one possible move in any of the positions which arise and in which some significance is attached to the final result” (Turing, Solvable and Unsolvable Problems 590). The significance, or the winning position, becomes the crux of the matter for the decision: what puzzles or problems are to be decided? This is what formalism attempted to do: encode everything through the regime of formalised efficiency, so that all mathematically inefficient problems are, in principle, ready to be solved. Programs are simply proofs: if it could be demonstrated mathematically, it could be automated.
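What an ‘effective’ or ‘systematic method’ amounts to in practice can be shown with a minimal table-driven machine. The sketch below is an illustrative toy, not Turing's own formalism: a finite rule table plus an unbounded tape is all such a procedure needs. This particular (invented) table scans right across a block of 1s, appends one more, and halts.

    def run(rules, tape, state="start", head=0, limit=10_000):
        """Run a table-driven Turing machine.
        rules maps (state, symbol) -> (write, move, next_state); move is -1 or +1."""
        cells = dict(enumerate(tape))          # sparse tape; the blank symbol is 0
        for _ in range(limit):
            if state == "halt":
                return [cells[i] for i in sorted(cells)]
            symbol = cells.get(head, 0)
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += move
        raise RuntimeError("no halt within limit")

    # Scan rightward over the 1s, write one more 1 on the first blank, halt.
    rules = {
        ("start", 1): (1, +1, "start"),
        ("start", 0): (1, +1, "halt"),
    }
    print(run(rules, [1, 1, 1]))   # [1, 1, 1, 1]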

In 1936, Turing had shown that some complex mathematical concepts of effective procedures could simulate the functional decisions of all the other effective procedures (such as the Universal Turing Machine). Ten years later, Turing and John von Neumann would independently show how physical general-purpose computers offered the same thing, and from that moment on, efficient digital decisions manifested themselves in the cultural application of physical materials. Before Shannon’s information theory offered the precision of transmitting information, Hilbert and Turing developed the structure of its transmission in the underlying regime of formal decision.

Yet there was also a non-computational importance here, for Turing was equally fascinated by what couldn’t be decided. His thesis was quite precise, so as to elucidate that if a mathematical problem could not be proved, a computer was not of any use. In fact, the entire focus of his 1936 paper, often neglected by Silicon Valley cohorts, was to show that Hilbert’s particular decision problem could not be solved. Unlike Hilbert, Turing was not interested in using computation to solve every problem, but pursued it as a curious endeavour for surprising intuitive behaviour. Most important of all, Turing’s halting, or printing, problem was influential precisely because it was undecidable: a decision problem which couldn’t be decided.

We can all picture the halting problem, even obliquely. Picture the frustrated programmer or mathematician staring at their screen, waiting to know when an algorithm will either halt and spit out a result, or provide no answer. The computer itself has already determined the answer for us; the programmer just has to know when to give up. But this is a myth, inherited with a bias towards human knowledge, and a demented understanding of machines as infinite calculating engines rather than concrete entities of decision. For reasons that escape word space, Turing didn’t understand the halting problem in this way: instead he understood it as a contradictory example of computational decisions failing to decide on each other, on the grounds that there could never be one totalising decision or effective procedure. There is no guaranteed effective procedure to decide on all the others, and any attempt to build one (or invest in a view which might help build one) either has too much investment in absolute formal reason, or ends up with ineffective procedures.
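The contradiction Turing relied on compresses into a few lines. Suppose, for the sake of argument, that a total procedure halts(f, x) existed which always decided whether f(x) eventually halts; the standard diagonal construction below (a sketch of the textbook argument, deliberately not executable to completion, since no such halts can actually be written) turns the supposed decider against itself.

    def halts(f, x):
        """Hypothetical oracle: True iff f(x) eventually halts.
        Assumed to exist only for the sake of contradiction."""
        ...

    def diagonal(f):
        # Do the opposite of whatever the oracle predicts about f run on itself.
        if halts(f, f):
            while True:        # oracle says f(f) halts, so loop forever
                pass
        return None            # oracle says f(f) loops, so halt at once

    # Now consider diagonal(diagonal): if halts(diagonal, diagonal) returns True,
    # diagonal(diagonal) loops; if it returns False, it halts. Either answer is
    # wrong, so no one effective procedure can decide on all the others.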

Undecidable computation might be looked at as a dystopian counterpart to the efficiency of Shannon’s ‘digital information’ theory: a base-2 binary system of information representing one of two possible states, whereby a system can communicate with one digit only in virtue of the fact that there is one other digit alternative to it. Yet the perfect transmission of that information is only available to a system which can ‘decide’ on the digits in question, and establish a proof to calculate a success rate. If there is no mathematical proof to decide a problem, then transmitting information becomes problematic for establishing a solution.

3.

What has become clear is that our world is no longer simply accountable to human decision alone. Decisions are no longer limited to the borders of human decisions and ‘culture’ is no longer simply guided by a collective whole of social human decisions. Nor is it reducible to one harmonious ‘natural’ collective decision which prompts and pre-empts everything else. Instead we seem to exist in an ecology of decisions: or better yet decisional ecologies. Before there was ever the networked protocol (Galloway), there was the computational decision. Decision ecologies are already set up before we enter the world, implicitly coterminous with our lives: explicitly determining a quantified or bureaucratic landscape upon which an individual has limited manoeuvrability.

Decisions are not just digital, they are continuous, as computers can be: yet decisions are at their most efficient when digitally transferred. Decisions are everywhere and in everything. Look around. We are constantly told by governments and states that they are making tough decisions in the face of austerity. CEOs and Directors make tough decisions for the future of their companies, and ‘great’ leaders are revered for being ‘great decisive leaders’: not just making decisions quickly and effectively, but also settling issues and producing definite results.

Even the word ‘decide’ comes from the Latin ‘decidere’, which means to determine something by ‘cutting off.’ Algorithms in financial trading know not of value, but of decision: whether something is marked by profit or loss. Drones know not of human ambiguity, but can only decide between kill and ignore, cutting off anything in-between. Constructing a system which decides between one of two digital values, even repeatedly, means cutting off and excluding all other possible variables, leaving a final result at the end of the encoded message. Making a decision, or building a system to decide a particular ideal or judgement, must force other alternatives outside of it. Decisions are always-already embedded into the framework of digital action, always already deciding what is to be done, how it can be done or what is threatening to be done. It would make little sense to suggest that these entities ‘make decisions’ or ‘have decisions’; it would be better to say that they are decisions, and that ecologies are constitutively constructed by them.

The importance of neo-liberal digital transmissions is not that they are innovative, or worthy of a zeitgeist break, but that they demonstrably decide problems whose predominant significance is beneficial for self-individual efficiency and the accumulation of capital. Digital efficiency is simply about the expansion of automated decisions, and about what sort of formalised significances must be propagated to solve social and economic problems, which creates new problems in a vicious circle.

The question can no longer simply be ‘who decides’, but now, ‘what decides?’ Is it the cafe menu board, the dinner party etiquette, the NASDAQ share price, Google PageRank, railway network delays, unmanned combat drones, the newspaper crossword, the javascript regular expression or the differential calculus? It’s not quite right to say that algorithms rule the world, whether in algo-trading or in data capture; rather, there is the uncomfortable realisation that real entities are built to determine provable outcomes time and time again: most notably ones for accumulating profit and extracting revenue from multiple resources.

One pertinent example is George Dantzig’s simplex algorithm: this effective procedure (whose origins lie in multidimensional geometry) can always decide solutions for the large-scale optimisation problems which continually affect multi-national corporations. The simplex algorithm’s proliferation and effectiveness have been critical since its first commercial application in 1952, when Abraham Charnes and William Cooper used it to decide how best to optimally blend four different petroleum products at the Gulf Oil Company (Elwes 35; Gass & Assad 79). Since then the simplex algorithm has had years of successful commercial use, deciding almost everything from bus timetables and work shift patterns to trade shares and Amazon warehouse configurations. According to the optimisation specialist Jacek Gondzio, the simplex algorithm runs at “tens, probably hundreds of thousands of calls every minute” (35), always deciding the most efficient method of extracting optimisation.
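To make the flavour of such decisions concrete, here is a toy blending problem in the Charnes and Cooper mould, with invented numbers, solved by an off-the-shelf linear programming routine (scipy's linprog, whose 'highs' backend includes a simplex-type solver of the lineage discussed above). The solver 'decides' the single provably optimal blend and cuts off every alternative.

    from scipy.optimize import linprog

    # Invented example: blend two petroleum stocks into one product.
    # Maximise profit 4*x1 + 3*x2 by minimising its negation.
    profit = [-4, -3]

    # Resource constraints, A_ub @ x <= b_ub.
    A_ub = [[2, 1],   # refinery hours consumed per unit of each stock
            [1, 3]]   # blending capacity consumed per unit of each stock
    b_ub = [100, 90]  # hours and capacity available

    result = linprog(profit, A_ub=A_ub, b_ub=b_ub,
                     bounds=[(0, None), (0, None)], method="highs")
    print(result.x, -result.fun)   # optimal volumes [42, 16] and the decided profit, 216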

In contemporary times, nearly all decision ecologies work in this way, accompanying and facilitating neo-liberal methods of self-regulation and processing all resources through a standardised efficiency: from bureaucratic methods of formal standardisation, and banal forms ready to be analysed by one central system, to big-data initiatives and simple procedural methods of measurement and calculation. The technique of decision is a propagative method of embedding knowledge, optimisation and standardisation techniques in order to solve problems, together with an urge to solve the most unsolvable ones, including us.

Google do not build into their services an option to pay for the privilege of protecting privacy: the entire point of providing a free service which purports to improve daily life is that it primarily benefits the interests of shareholders and extends commercial agendas. James Grimmelmann gave a heavily detailed exposition of Google’s own ‘net neutrality’ algorithms and how biased they happen to be. In short, PageRank does not simply decide relevant results, it decides visitor numbers, and he concluded on this note:

With disturbing frequency, though, websites are not users’ friends. Sometimes they are, but often, the websites want visitors, and will be willing to do what it takes to grab them (Grimmelmann 458).

If the post-digital stands for the self-criticality of digitalisation already underpinning contemporary regimes of digital consumption and production, then its saliency lies in understanding the logic of decision inherent to such regimes. The reality of the post-digital shows that machines remain curiously efficient whether we revel in cynicism or not. Such regimes of standardisation and determined results were already ‘mistakenly built in’ to the theories which developed digital methods and means, irrespective of what computers can or cannot compute.

4.

Why then should such post-digital actors be understood as instantiations of propaganda? The familiarity of propaganda is manifestly evident in religious and political acts of ideological persuasion: brainwashing, war activity, political spin, mind control techniques, subliminal messages, political campaigns, cartoons, belief indoctrination, media bias, advertising or news reports. A definition of propaganda might follow from all of these examples: namely, the systematic social indoctrination of biased information that persuades the masses to take action on something which is neither beneficial to them nor in their best interests; or, as Peter Kenez writes, propaganda is “the attempt to transmit social and political values in the hope of affecting people’s thinking, emotions, and thereby behaviour” (Kenez 4). Following Stanley B. Cunningham’s watered-down definition, propaganda might also denote a helpful and pragmatic “shorthand statement about the quality of information transmitted and received in the twentieth century” (Cunningham 3).

But propaganda isn’t as clear-cut as this general definition makes out: in fact, what makes propaganda studies such a provoking topic is that nearly every scholar agrees that no stable definition exists. Propaganda moves beyond simple ‘manipulation’ and ‘lies’, or the derogatory, jingoistic representation of an unsubtle mood; it is as much about the paradox of constructing truth and the irrational spread of emotional pleas as it is about endorsing rational reason. As the master propagandist William J. Daugherty wrote:

It is a complete delusion to think of the brilliant propagandist as being a professional liar. The brilliant propagandist […] tells the truth, or that selection of the truth which is requisite for his purpose, and tells it in such a way that the recipient does not think that he is receiving any propaganda…. (Daugherty 39).

Propaganda, like ideology, works by being inherently implicit and social. In the same way that post-ideology apologists ignore their symptom, propaganda is also ignored. It isn’t to be taken as a shadowy fringe activity, blown apart by the democratising fairy-dust of ‘the Internet’. As many others have noted, the purported ‘decentralising’ power of online networks offers new methods for propagative techniques, or ‘spinternet’ strategies, evident in China (Brady). Iran’s recent investment in video game technology makes sense only when you discover that 70% of Iran’s population is under 30 years of age, underscoring a suitably contemporary method of dissemination. Similarly, in 2011 the New York City video game developer Kuma Games was mired in controversy when it was discovered that an alleged CIA agent, Amir Mirza Hekmati, had been recruited to make an episodic video game series intended to “change the public opinion’s mindset in the Middle East” (Tehran Times). The game in question, Kuma\War (2006–2011), was a free-to-play first-person shooter series delivered in episodic chunks, each of which attempted to simulate biased re-enactments of real-life conflicts shortly after they reached public consciousness.

Despite his unremarkable leanings towards Christian realism, Jacques Ellul famously redefined propaganda as the end product of what he had previously lamented as ‘technique’. Instead of viewing propaganda as a highly organised, systematic strategy for extending the ideologies of peaceful warfare, he understood it as a general social phenomenon in contemporary society.

Ellul outlined two types of propaganda: political and sociological. Political propaganda involves governmental and administrative techniques which intend directly to change the political beliefs of an intended audience. By contrast, sociological propaganda is the implicit unification of involuntary public behaviour which creates images, aesthetics, problems and stereotypes, the purposes of which are neither explicitly direct nor overtly militaristic. Ellul argues that sociological propaganda exists “in advertising, in the movies (commercial and non-political films), in technology in general, in education, in the Reader’s Digest; and in social service, case work, and settlement houses” (Ellul 64). It is linked to what Ellul called “pre” or “sub-propaganda”: that is, an imperceptible persuasion silently operating within one’s “style of life” or permissible attitude (63). Anticipating Louis Althusser’s Ideological State Apparatuses (Althusser 182) by nearly ten years, Ellul defines it as “the penetration of an ideology by means of its sociological context” (63). Sociological propaganda is inadequate for decisive action, paving the way for political propaganda, its strengthened and explicit cousin, once the former’s implicitness needs to be transformed into the latter’s explicitness.

In a post-digital world, such implicitness no longer gathers wartime spirits, but instead propagates a neo-liberal way of life that is individualistic, wealth-driven and opinionated. Ellul’s most powerful assertion is that ‘facts’ and ‘education’ are part and parcel of the sociological propagative effect: nearly everyone faces a compelling need to be opinionated, and we all judge for ourselves what decisions should be made, without first considering the implicit landscape from which these judgements take place. One need only think of the implicit digital landscape of Twitter: the archetype for self-promotion and snippets of opinions and arguments, all taking place within Ellul’s sub-propaganda of data collection and concealment. Such methods, he warns, will have “solved the problem of man” (xviii).

But information is of relevance here, and propaganda is only effective within a social community when it offers the means to solve problems using the communicative purview of information:

Thus, information not only provides the basis for propaganda but gives propaganda the means to operate; for information actually generates the problems that propaganda exploits and for which it pretends to offer solutions. In fact, no propaganda can work until the moment when a set of facts has become a problem in the eyes of those who constitute public opinion (114).

]]>
Wed, 11 Dec 2013 15:42:45 -0800 http://post-digital.projects.cavi.dk/?p=475