MachineMachine /stream - search for election
https://machinemachine.net/stream/feed

<![CDATA[AI Will Upend Election Season - The Atlantic]]> https://www.theatlantic.com/technology/archive/2023/04/ai-generated-political-ads-election-candidate-voter-interaction-transparency/673893/

Artificial intelligence is already showing up in political ads. Soon, it will completely change the nature of campaigning. Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.”

]]>
Mon, 10 Jul 2023 03:51:29 -0700 https://www.theatlantic.com/technology/archive/2023/04/ai-generated-political-ads-election-candidate-voter-interaction-transparency/673893/
<![CDATA[How Do You Spot a Deepfake? It Might Not Matter]]> https://nymag.com/intelligencer/2019/06/how-do-you-spot-a-deepfake-it-might-not-matter.html

A shadow looms over the 2020 election: Deepfakes! The newish video-editing technology (or really, a host of technologies) used to seamlessly paste one person’s face onto another’s body has activated a panic among pundits and politicians.

]]>
Tue, 06 Sep 2022 11:51:50 -0700 https://nymag.com/intelligencer/2019/06/how-do-you-spot-a-deepfake-it-might-not-matter.html
<![CDATA[Where Do Men Go When They Get Lost? | by Zoetica Ebb | Mar, 2021 | Medium]]> https://zoetica.medium.com/where-do-men-go-when-they-get-lost-996651359ade

I just lost a friend. In addition to the immense loss of life to Covid-19, we have lost people in other ways over the past year, when the confluence of quarantine, fear, and US-election frenzy tossed our friends and family at the cliffs of radical beliefs.

]]>
Wed, 31 Mar 2021 07:55:30 -0700 https://zoetica.medium.com/where-do-men-go-when-they-get-lost-996651359ade
<![CDATA[Survivorship bias - Wikipedia]]> https://en.wikipedia.org/wiki/Survivorship_bias

Survivorship bias or survival bias is the logical error of concentrating on the people or things that made it past some selection process and overlooking those that did not, typically because of their lack of visibility. This can lead to false conclusions in several different ways.

]]>
Sun, 23 Feb 2020 12:37:24 -0800 https://en.wikipedia.org/wiki/Survivorship_bias
<![CDATA[How Do You Spot a Deepfake? It Might Not Matter]]> http://nymag.com/intelligencer/2019/06/how-do-you-spot-a-deepfake-it-might-not-matter.html

A shadow looms over the 2020 election: Deepfakes! The newish video-editing technology (or really, a host of technologies) used to seamlessly paste one person’s face onto another’s body has activated a panic among pundits and politicians.

]]>
Mon, 01 Jul 2019 07:34:35 -0700 http://nymag.com/intelligencer/2019/06/how-do-you-spot-a-deepfake-it-might-not-matter.html
<![CDATA[Seduced & Abandoned: The Body in the Virtual World - The Feminine Cyberspace]]> https://www.youtube.com/watch?v=doL9mRMEUGw

Seduced & Abandoned was one of a series of ICA conferences (spanning 12-13 March, 1994) held under the umbrella title Towards the Aesthetics of the Future that explored the connections between culture, society, politics and the impact upon them of new digital processes and technologies.

In this selection, Sadie Plant argues that cyberspace is a potentially radical space which uses modes of thinking and operating that have traditionally been seen as female. She also considers the relationship between cyberspace and immaterial space and speculates on what this could mean for the future. Christine Tamblyn and Pat Cadigan contribute to the discussion/Q&A but do not make individual presentations.

Digitisation supported by Virtual Futures http://virtualfutures.co.uk

Extra title music by Vapor Lanes https://vaporlanes.bandcamp.com/

Review of the proceedings http://www.independent.co.uk/extras/indybest/gadgets-tech/computers-more-theorists-than-you-could-shake-a-stick-at-rupert-goodwins-floats-in-organic-creme-de-1429850.html

]]>
Fri, 04 Aug 2017 04:48:09 -0700 https://www.youtube.com/watch?v=doL9mRMEUGw
<![CDATA[The Dark Side Of The Singularity | Answers With Joe]]> https://www.youtube.com/watch?v=bJ6QmZ48jY4

Or... How To Not Be A Horse. Automation and AI promise to usher in an era of amazing productivity and innovation. But they also threaten our very way of life.

Support me on Patreon! http://www.patreon.com/answerswithjoe

Follow me at all my places! Instagram: https://instagram.com/answerswithjoe Snapchat: https://www.snapchat.com/add/answerswithjoe Facebook: http://www.facebook.com/answerswithjoe Twitter: https://www.twitter.com/answerswithjoe

LINKS LINKS LINKS:

Tony Seba's talk about why transportation and energy will be obsolete by 2030: https://www.youtube.com/watch?v=Kxryv2XrnqM

http://www.chicagotribune.com/classified/automotive/ct-self-driving-cars-now-20160818-story.html

Okuma Automation: https://www.youtube.com/watch?v=3d-kPBbxb0Q

CNet News on the automated Amazon fulfillment centers: https://www.youtube.com/watch?v=UtBa9yVZBJM

Fully Charged - Self-Driving Nissan Leaf: https://www.youtube.com/watch?v=cfRqNAhAe6c

Partial Transcript:

For hundreds, even thousands of years, the horse was humanity’s go-to form of transportation. And in 13 years, that all changed.

Right now, we are on the cusp of a technological disruption that will make the switch from horses to cars look like switching from Coke to Pepsi.

So we talk a lot on this channel about exponential growth, artificial intelligence, the singularity, and that’s a lot of fun, but there is a dark side to all this change, one that really needs to be talked about because the way we respond to it is going to significantly alter our future as a species.

The BBC released a report just a few weeks ago that said that 30% of jobs are going to go away in the next 10 years because of automation.

In the U.S., we’ve heard a lot over the last election about the proverbial coal miners and our current president specifically campaigning to bring back coal jobs.

But coal is just one of hundreds of industries that are taking advantage of employees that can work 24/7, never need a bathroom break, never sleep, never make a mistake and work twice as fast. Oh, and you don’t have to pay them.

Factories already decimated by outsourcing are now losing even more jobs to automation. And as automation becomes more sophisticated, more industries are at risk.

The transportation sector actually makes up 25% of the jobs in the United States, if you can believe that. A full quarter of the population. And autonomous cars… They’re pretty much here, guys.

Famously, the Tesla Model 3, going into production this year, will have autonomous capability: though it may not have the software available at launch, it will have the hardware ready for it.

But less famously, there are a lot of other car companies trying to beat Tesla to market with this. Nissan has a fully self-driving prototype in development that they took a drive in on Fully Charged and it was spooky how good it was.

Cadillac is so bullish on self-driving technology, they spent millions of dollars to create a lidar map of every highway in the United States using their own proprietary system.

This way their cars won’t just rely on sensors and GPS to find their way, the Cadillac system will contain a 3D map of everything, including the roadsigns.

Google’s working on a car, Apple supposedly is working on a car, but the people who are really big on this technology are the service providers.

Uber made over 2 billion dollars last year. Imagine how much they could make if they didn’t have to pay their drivers...

Uber has been working for years on a transportation fleet of autonomous cars, and even Ford has made some intentions known of pivoting in a similar direction.

Many are predicting that cars will go from a retail industry to a service industry, with Peter Diamandis saying that in ten years, car ownership will be an outdated idea.

The fact of the matter is, you can be for automation or against it, you can agree with its use or not, but this is happening. And we need to be ready for it.

Some people are talking about a basic minimum income, a flat amount of money that everybody in a society receives, as a safety net to keep people above water. This is an interesting idea that’s even being tested in some places.

There is a coming change on a fundamental and massive level in this world. One that is filled with amazing advancements and technological wonders. The question is, will we be able to change with it?

]]>
Mon, 01 May 2017 05:30:01 -0700 https://www.youtube.com/watch?v=bJ6QmZ48jY4
<![CDATA[Fake news is a red herring | World | DW.COM | 25.01.2017]]> http://www.dw.com/en/fake-news-is-a-red-herring/a-37269377

Watching the 2016 US presidential election was already a surreal experience, as dozens of qualified candidates lost out to a failed businessman and reality television star.

]]>
Mon, 20 Feb 2017 23:01:06 -0800 http://www.dw.com/en/fake-news-is-a-red-herring/a-37269377
<![CDATA[Exhibist Magazine Issue 11]]> http://exhibist.com/index.php/magazine/print-magazine?id=277

A selection of works from The 3D Additivist Cookbook were printed in issue 11 of Exhibist Magazine, including my essay Becoming Horror in The Plasticene. The magazine, published in Turkey, features interviews with media theorist and curator Ebru Yetişkin and Kristoffer Gansing, artistic director of transmediale festival. The current issue includes an essay by Ceylan Önalp titled ‘A Journey Through Time in Turkey’s New Media Art Scene’, featuring Ayşe Gül Süter, Ebru Kurbak, Can Büyükberber and Nihat Karataşlı, and a selection of texts and projects from The 3D Additivist Cookbook, edited by Daniel Rourke and Morehshin Allahyari: Daniel Rourke’s ‘Becoming Horror in The Plasticene’; A Parede’s ‘Cheat Sheet for a Non- (or Less-) Colonialist Speculative Design’; Marija Bozinovska Jones + IYDES’ ‘Echoes of Earth: The Rocks of Us’; and Symrin Chawla’s ‘Blood Bath’, curated by Browntourage for the 3D Additivist Cookbook. The magazine introduces established artists working in the field of new media from Turkey such as Ali Miharbi, Erdal Inci, NOHlab, Pınar Yoldaş, Burak Arıkan and Refik Anadol, and the work of artists and collectives such as Memo Akten, Selçuk Artut, Büşra Tunç, Ouchhh, DECOL, Iskele47, Osman Koç, Bager Akbay, Zeynep Nal Sezer, Uğur Engin Deniz, Epitome and Ozan Türkkan.

Interviews
EVER ELUSIVE – A POST-DIGITAL INSTITUTION: Tuce Erel talks to Kristoffer Gansing
< Force Quit > + < Esc > = [ New Media Art ]: Mine Kaplangı talks to Ebru Yetişkin

Essays
A JOURNEY THROUGH TIME IN TURKEY’S NEW MEDIA ART SCENE by Ceylan Önalp

A SELECTION FROM THE 3D ADDITIVIST COOKBOOK
Daniel Rourke, ‘Becoming Horror in The Plasticene’
A Parede, ‘Cheat Sheet for a Non- (or Less-) Colonialist Speculative Design’
Marija Bozinovska Jones + IYDES, ‘Echoes of Earth: The Rocks of Us’
Symrin Chawla, ‘Blood Bath’, curated by Browntourage for the 3D Additivist Cookbook

]]>
Tue, 31 Jan 2017 03:51:36 -0800 http://exhibist.com/index.php/magazine/print-magazine?id=277
<![CDATA[Your Echo Chamber is Destroying Democracy | WIRED]]> https://www.wired.com/2016/11/filter-bubble-destroying-democracy/

On November 7, 2016, the day before the US election, I compared the number of social media followers, website performance, and Google search statistics of Hillary Clinton and Donald Trump.  I was shocked when the data revealed the extent of Trump’s popularity.

]]>
Fri, 09 Dec 2016 11:21:30 -0800 https://www.wired.com/2016/11/filter-bubble-destroying-democracy/
<![CDATA[Spandrel (biology) - Wikipedia, the free encyclopedia]]> https://en.wikipedia.org/wiki/Spandrel_(biology)

In evolutionary biology, a spandrel is a phenotypic characteristic that is a byproduct of the evolution of some other characteristic, rather than a direct product of adaptive selection.

]]>
Mon, 01 Feb 2016 04:27:28 -0800 https://en.wikipedia.org/wiki/Spandrel_(biology)
<![CDATA[VIA Festival Announces Visual Artists, Exhibitions This...]]> http://additivism.org/post/129518079011

VIA Festival Announces Visual Artists, Exhibitions. This selection of local and international emerging artists, who work fluidly between a variety of digital media (video, animation, computer-generated imagery, augmented reality), has been paired with the festival’s headlining acts.

]]>
Sun, 20 Sep 2015 13:51:00 -0700 http://additivism.org/post/129518079011
<![CDATA[Algorithmic Narratives and Synthetic Subjects (paper)]]> http://machinemachine.net/portfolio/paper-at-theorizing-the-web-synthetic-subjects/

This was the paper I delivered at The Theorizing the Web Conference, New York, 18th April 2015. This video of the paper begins part way in, and misses out some important stuff. I urge you to watch the other, superb, papers on my panel by Natalie Kane, Solon Barocas, and Nick Seaver. A better video is forthcoming. I posted this up partly in response to this post at Wired about the UK election, Facebook’s echo-chamber effect, and other implications well worth reading into.

Data churning algorithms are integral to our social and economic networks. Rather than replace humans these programs are built to work with us, allowing the distinct strengths of human and computational intelligences to coalesce. As we are submerged into the era of ‘big data’, these systems have become more and more common, concentrating every terabyte of raw data into meaningful arrangements more easily digestible by high-level human reasoning. A company calling themselves ‘Narrative Science’, based in Chicago, have established a profitable business model based on this relationship. Their slogan, ‘Tell the Stories Hidden in Your Data’, [1] is aimed at companies drowning in spreadsheets of cold information: a promise that Narrative Science can ‘humanise’ their databases with very little human input. Kristian Hammond, Chief Technology Officer of the company, claims that within 15 years over 90% of all news stories will also be written by algorithms. [2] But rather than replacing the jobs that human journalists now undertake, Hammond claims the vast majority of their ‘robonews’ output will report on data currently not covered by traditional news outlets. One family-friendly example of this is the coverage of little-league baseball games. Very few news organisations have the resources, or desire, to hire a swathe of human journalists to write up every little-league game. Instead, Narrative Science offer leagues, parents and their children a miniature summary of each game, gleaned from match statistics uploaded by diligent little-league attendees and then written up by Narrative Science in a variety of journalistic styles.

In their book ‘Big Data’, from 2013, Oxford University Professor of internet governance Viktor Mayer-Schönberger and ‘data editor’ of The Economist Kenneth Cukier tell us excitedly about another data aggregation company, Prismatic, who:

…rank content from the web on the basis of text analysis, user preferences, social network-popularity, and big-data analysis. [3]

According to Mayer-Schönberger and Cukier this makes Prismatic able ‘to tell the world what it ought to pay attention to better than the editors of the New York Times’. [4] A situation, Steven Poole reminds us, we can little argue with so long as we agree that popularity underlies everything that is culturally valuable.

Data is now the lifeblood of technocapitalism: a vast, endless influx of information flowing in from the growing universe of networked and internet-connected devices. As many of the papers at Theorizing the Web attest, our environment is more and more founded by systems whose job it is to mediate our relationship with this data. Technocapitalism still appears to respond to Jean-François Lyotard’s formulation of Postmodernity: that whether something is true has less relevance than whether it is useful. In 1979 Jean-François Lyotard described the Postmodern Condition as a change in “the status of knowledge” brought about by new forms of techno-scientific and techno-economic organisation. If a student could be taught effectively by a machine, rather than by another human, then the most important thing we could give the next generation was what he called “elementary training in informatics and telematics.” In other words, as long as our students are computer literate, “pedagogy would not necessarily suffer”. [5] The next passage – where Lyotard marks the Postmodern turn from the true to the useful – became one of the book’s most widely quoted, and it is worth repeating here at some length:

It is only in the context of the grand narratives of legitimation – the life of the spirit and/or the emancipation of humanity – that the partial replacement of teachers by machines may seem inadequate or even intolerable. But it is probable that these narratives are already no longer the principal driving force behind interest in acquiring knowledge. [6]

Here, I want to pause to set in play at least three elements from Lyotard’s text that colour this paper. Firstly, the historical confluence between technocapitalism and the era now considered ‘postmodern’. Secondly, the association of ‘the grand narrative’ with modern, and pre-modern, conditions of knowledge. And thirdly, the idea that the relationship between the human and the machine – or computer, or software – is generally one-sided: i.e. we may shy away from the idea of leaving the responsibility of our children’s education to a machine, but Lyotard’s position presumes that since the machine was created and programmed by humans, it will therefore necessarily be understandable, and thus controllable, by humans.

Today, Lyotard’s vision of an informatically literate populace has more or less come true. Of course we do not completely understand the intimate workings of all our devices or the software that runs them, but the majority of the world population has some form of regular relationship with systems simulated on silicon. And as Lyotard himself made clear, the uptake of technocapitalism, and therefore the devices and systems it propagates, is piecemeal and difficult to predict or trace. At the same time as Google’s fleet of self-driving motor vehicles are let loose on Californian state highways, in parts of sub-Saharan Africa models of mobile phones designed 10 or more years ago are allowing farming communities to aggregate their produce into quantities with greater potential to make profit on a world market. As Brian Massumi remarks, network technology allows us the possibility of “bringing to full expression a prehistory of the human”, a “worlding of the human” that marks the “becoming-planetary” of the body itself. [7] This “worlding of the human” represents what Edmund Berger argues is the death of the Postmodern condition itself:

[T]he largest bankruptcy of Postmodernism is that the grand narrative of human mastery over the cosmos was never unmoored and knocked from its pulpit. Instead of making the locus of this mastery large aggregates of individuals and institutions – class formations, the state, religion, etc. – it simply has shifted the discourse towards the individual his or herself, promising them a modular dreamworld for their participation… [8]

Algorithmic narratives appear to continue this trend. They are piecemeal, tending to feed back users’ dreams, wants and desires through carefully aggregated, designed, packaged narratives for individual ‘use’: a world not of increasing connectivity and understanding between entities, but a network worlded to each individual’s data-shadow. This situation is reminiscent of the problem pointed out by Eli Pariser of the ‘filter bubble’, or the ‘you loop’: a prevalent outcome of social media platforms tweaked and personalised by algorithms to echo back at the user exactly the kind of thing they want to hear. As algorithms develop in complexity, the stories they tell us about the vast sea of data will tend to become more and more enamoring, more and more palatable.

Like some vast synthetic evolutionary experiment, those algorithms that devise narratives users dislike will tend to be killed off in the feedback loop, in favour of other algorithms whose turn of phrase, or ability to stoke our egos, is more pronounced. For instance, Narrative Science’s early algorithms for creating little-league narratives tended to focus on the victors of each game. What Narrative Science found is that parents were more interested in hearing about their own children, the tiny ups and downs that made the game significant to them. So the algorithms were tweaked in response. Again, to quote chief scientist Kris Hammond of Narrative Science:

These are narratives generated by systems that understand data, that give us information to support the decisions we need to make about tomorrow. [9]

Whilst we can program software to translate the informational nuances of a baseball game, or internet social trends, into human-palatable narratives, larger social, economic and environmental events also tend to get pushed through an algorithmic meatgrinder to make them more palatable. The ‘tomorrow’ that Hammond claims his company can help us prepare for is one that, presumably, companies like Narrative Science and Prismatic will play an ever larger part in realising.

In her recently published essay on Crisis and the Temporality of Networks, Wendy Chun reminds us of the difference between the user and the agent in the machinic assemblage:

Celebrations of an all powerful user/agent – ‘you’ as the network, ‘you’ as the producer – counteract concerns over code as law as police by positing ‘you’ as the sovereign subject, ‘you’ as the decider. An agent, however, is one who does the actual labor, hence agent is one who acts on behalf of another. On networks, the agent would seem to be technology, rather than the users or programmers who authorize actions through their commands and clicks. [10]

In order to unpack Wendy Chun’s proposition here we need only look at two of the most powerful, and most impactful, algorithms from the last ten years of the web: firstly, Amazon’s recommendation system, which I assume you have all interacted with at some point; and secondly, Facebook’s news feed algorithm, which ranks and sorts posts on your personalised stream. Both these algorithms rely on a community of user interactions to establish a hierarchy of products, or posts, based on popularity. Both these algorithms also function in response to users’ past activity, and both, of course, have been tweaked and altered over time by the design and programming teams of the respective companies. As we are all no doubt aware, one of the most significant driving principles behind these extraordinarily successful pieces of code is capitalism itself: the drive for profit, and the bearing that has on distinguishing between a successful or failing company, service or product. Wendy Chun’s reminder that those who carry out an action, who program and click, are not the agents here should give us solace. We are positioned as sovereign subjects over our data because that idea is beneficial to the propagation of the ‘product’. Whether we are told how well our child has done at baseball, or what particular kinds of news stories we might like, personally, to read right now, it is to the benefit of technocapitalism that those narratives are positive, palatable and uncompromising.

However the aggregation and dissemination of big data affects our lives over the coming years, the likelihood is that at the surface – on our screens and ubiquitous handheld devices – everything will seem rosy, comfortable, and suited to the ‘needs’ and ‘use’ of each sovereign subject.
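The evolutionary pressure described here is easy to caricature in a few lines of code. What follows is a toy sketch only, not any real Narrative Science system: the template pool, the sample data, and the scoring function standing in for ‘user feedback’ are all invented for illustration.

    import random

    # Hypothetical pool of phrasings for one and the same data point.
    TEMPLATES = [
        "{team} lost {score_a}-{score_b}; {player} struck out twice.",
        "{player} battled hard as {team} fell {score_a}-{score_b}.",
        "A tough {score_a}-{score_b} loss, but {player} never gave up.",
    ]

    def user_rating(story):
        """Toy stand-in for engagement metrics: flattery scores higher."""
        score = sum(1.0 for phrase in ("battled hard", "never gave up")
                    if phrase in story)
        return score + random.random() * 0.1  # a little noise

    def select_templates(templates, data, rounds=50):
        """Each round, cull the worst-rated phrasing and clone the best:
        a crude evolutionary feedback loop over ways to tell the story."""
        pool = list(templates)
        for _ in range(rounds):
            pool.sort(key=lambda t: user_rating(t.format(**data)))
            pool = pool[1:] + [pool[-1]]  # drop least liked, duplicate favourite
        return pool

    data = {"team": "the Cubs", "player": "Sam", "score_a": 3, "score_b": 7}
    print(select_templates(TEMPLATES, data)[-1].format(**data))

Run for enough rounds, the pool converges on whichever phrasing flatters the reader most: the underlying statistics never change, only which story about them survives.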

TtW15 #A7 @npseaver @nd_kane @s010n @smwat pic.twitter.com/BjJndzaLz1

— Daniel Rourke (@therourke) April 17, 2015

So to finish I just want to gesture towards a much, much bigger debate that I think we need to have about big data, technocapitalism and its algorithmic agents. To do this I just want to read a short paragraph which, as far as I know, was not written by an algorithm:

Surface temperature is projected to rise over the 21st century under all assessed emission scenarios. It is very likely that heat waves will occur more often and last longer, and that extreme precipitation events will become more intense and frequent in many regions. The ocean will continue to warm and acidify, and global mean sea level to rise. [11]

This is from a document entitled ‘Synthesis Report for Policy Makers’ drafted by the Intergovernmental Panel on Climate Change – another organisation who rely on a transnational network of computers, sensors, and programs capable of modeling atmospheric, chemical and wider environmental processes to collate data on human environmental impact. Ironically then, perhaps the most significant tool we have to understand the world, at present, is big data. Never before has humankind had so much information to help us make decisions, and help us enact changes on our world, our society, and our selves. But the problem is that some of the stories big data has to tell us are too big to be narrated; they are just too big to be palatable. To quote Edmund Berger again:

For these reasons we can say that the proper end of postmodernism comes in the gradual realization of the Anthropocene: it promises the death of the narrative of human mastery, while erecting an even grander narrative. If modernism was about victory of human history, and postmodernism was the end of history, the Anthropocene means that we are no longer in a “historical age but also a geological one. Or better: we are no longer to think history as exclusively human…” [12]

I would argue that the ‘grand narratives of legitimation’ Lyotard claimed we left behind in the move to Postmodernity will need to return in some way if we are to manage big data in a meaningful way. Crises such as catastrophic climate change will never be made palatable in the feedback between users, programmers and technocapitalism. Instead, we need to revisit Lyotard’s distinction between the true and the useful. Rather than ask how we can make big data useful for us, we need to ask what grand story we want that data to tell us.

References
[1] Source: www.narrativescience.com, accessed 15/10/14.
[2] Steven Levy, “Can an Algorithm Write a Better News Story Than a Human Reporter?,” WIRED, April 24, 2012, http://www.wired.com/2012/04/can-an-algorithm-write-a-better-news-story-than-a-human-reporter/.
[3] “Steven Poole – On Algorithms,” Aeon Magazine, accessed May 8, 2015, http://aeon.co/magazine/technology/steven-poole-can-algorithms-ever-take-over-from-humans/.
[4] Ibid.
[5] Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge, Repr., Theory and History of Literature 10 (Manchester: Univ. Pr., 1992), 50.
[6] Ibid., 51.
[7] Brian Massumi, Parables for the Virtual: Movement, Affect, Sensation (Duke University Press, 2002), 128.
[8] Edmund Berger, “The Anthropocene and the End of Postmodernism,” Synthetic Zero, n.d., http://syntheticzero.net/2015/04/01/the-anthropocene-and-the-end-of-postmodernism/.
[9] Source: www.narrativescience.com, accessed 15/10/14.
[10] Wendy Chun, “Crisis and the Temporality of Networks,” in The Nonhuman Turn, ed. Richard Grusin (Minneapolis: University of Minnesota Press, 2015), 154.
[11] Rajendra K. Pachauri et al., “Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change,” 2014, http://epic.awi.de/37530/.
[12] Berger, “The Anthropocene and the End of Postmodernism.”

]]>
Fri, 08 May 2015 04:02:51 -0700 http://machinemachine.net/portfolio/paper-at-theorizing-the-web-synthetic-subjects/
<![CDATA[Meet the Father of Digital Life]]> http://nautil.us/issue/14/mutation/meet-the-father-of-digital-life

In 1953, at the dawn of modern computing, Nils Aall Barricelli played God. Clutching a deck of playing cards in one hand and a stack of punched cards in the other, Barricelli hovered over one of the world’s earliest and most influential computers, the IAS machine, at the Institute for Advanced Study in Princeton, New Jersey. During the day the computer was used to make weather forecasting calculations; at night it was commandeered by the Los Alamos group to calculate ballistics for nuclear weaponry. Barricelli, a maverick mathematician, part Italian and part Norwegian, had finagled time on the computer to model the origins and evolution of life.

Inside a simple red brick building at the northern corner of the Institute’s wooded wilds, Barricelli ran models of evolution on a digital computer. His artificial universes, which he fed with numbers drawn from shuffled playing cards, teemed with creatures of code—morphing, mutating, melting, maintaining. He created laws that determined, independent of any foreknowledge on his part, which assemblages of binary digits lived, which died, and which adapted. As he put it in a 1961 paper, in which he speculated on the prospects and conditions for life on other planets, “The author has developed numerical organisms, with properties startlingly similar to living organisms, in the memory of a high speed computer.” For these coded critters, Barricelli became a maker of worlds.

Until his death in 1993, Barricelli floated between biological and mathematical sciences, questioning doctrine, not quite fitting in. “He was a brilliant, eccentric genius,” says George Dyson, the historian of technology and author of Darwin Among The Machines and Turing’s Cathedral, which feature Barricelli’s work. “And the thing about geniuses is that they just see things clearly that other people don’t see.”

Barricelli programmed some of the earliest computer algorithms that resemble real-life processes: a subdivision of what we now call “artificial life,” which seeks to simulate living systems—evolution, adaptation, ecology—in computers. Barricelli presented a bold challenge to the standard Darwinian model of evolution by competition by demonstrating that organisms evolved by symbiosis and cooperation.

Pixar cofounder Alvy Ray Smith says Barricelli influenced his earliest thinking about the possibilities for computer animation.

In fact, Barricelli’s projects anticipated many current avenues of research, including cellular automata, computer programs involving grids of numbers paired with local rules that can produce complicated, unpredictable behavior. His models bear striking resemblance to the one-dimensional cellular automata—life-like lattices of numerical patterns—championed by Stephen Wolfram, whose search tool Wolfram Alpha helps power the brain of Siri on the iPhone. Nonconformist biologist Craig Venter, in defending his creation of a cell with a synthetic genome—“the first self-replicating species we’ve had on the planet whose parent is a computer”—echoes Barricelli.
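For readers who have never seen one, the one-dimensional cellular automata mentioned above are small enough to sketch in full. The following is a minimal example using Wolfram’s Rule 110, a standard textbook rule rather than one of Barricelli’s own norms: each cell’s next state is a pure function of itself and its two neighbours.

    def step(cells, rule=110):
        """Advance a one-dimensional binary cellular automaton one
        generation. The neighbourhood (left, self, right) forms a
        3-bit index into the 8-bit rule table."""
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    cells = [0] * 63 + [1] + [0] * 63  # one live cell in the middle
    for _ in range(30):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

Thirty printed generations are enough to watch complicated, unpredictable structure emerge from a rule table only eight bits long, which is the sense in which these lattices are “life-like”.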

Barricelli’s experiments had an aesthetic side, too. Uncommonly for the time, he converted the digital 1s and 0s of the computer’s stored memory into pictorial images. Those images, and the ideas behind them, would influence computer animators in generations to come. Pixar cofounder Alvy Ray Smith, for instance, says Barricelli stirred his earliest thinking about the possibilities for computer animation, and beyond that, his philosophical muse. “What we’re really talking about here is the notion that living things are computations,” he says. “Look at how the planet works and it sure does look like a computation.”

Despite Barricelli’s pioneering experiments, barely anyone remembers him. “I have not heard of him to tell you the truth,” says Mark Bedau, professor of humanities and philosophy at Reed College and editor of the journal Artificial Life. “I probably know more about the history than most in the field and I’m not aware of him.”

Barricelli was an anomaly, a mutation in the intellectual zeitgeist, an unsung hero who has mostly languished in obscurity for the past half century. “People weren’t ready for him,” Dyson says. That a progenitor has not received much acknowledgment is a failing not unique to science. Visionaries often arrive before their time. Barricelli charted a course for the digital revolution, and history has been catching up ever since.

[Image] EVOLUTION BY THE NUMBERS: Barricelli converted his computer tallies of 1s and 0s into images. In this 1953 Barricelli print, explains NYU associate professor Alexander Galloway, the chaotic center represents mutation and disorganization. The more symmetrical fields toward the margins depict Barricelli’s evolved numerical organisms. (From the Shelby White and Leon Levy Archives Center, Institute for Advanced Study, Princeton.)

Barricelli was born in Rome on Jan. 24, 1912. According to Richard Goodman, a retired microbiologist who met and befriended the mathematician in the 1960s, Barricelli claimed to have invented calculus before his tenth birthday. When the young boy showed the math to his father, he learned that Newton and Leibniz had preempted him by centuries. While a student at the University of Rome, Barricelli studied mathematics and physics under Enrico Fermi, a pioneer of quantum theory and nuclear physics. A couple of years after graduating in 1936, he immigrated to Norway with his recently divorced mother and younger sister.

As World War II raged, Barricelli studied. An uncompromising oddball who teetered between madcap and mastermind, Barricelli had a habit of exclaiming “Absolut!” when he agreed with someone, or “Scandaloos!” when he found something disagreeable. His accent was infused with Scandinavian and Romantic pronunciations, making it occasionally challenging for colleagues to understand him. Goodman recalls one of his colleagues at the University of California, Los Angeles who just happened to be reading Barricelli’s papers “when the mathematician himself barged in and, without ceremony, began rattling off a stream of technical information about his work on phage genetics,” a science that studies gene mutation, replication, and expression through model viruses. Goodman’s colleague understood only fragments of the speech, but realized it pertained to what he had been reading.

“Are you familiar with the work of Nils Barricelli?” he asked.

“Barricelli! That’s me!” the mathematician cried.

Notwithstanding having submitted a 500-page dissertation on the statistical analysis of climate variation in 1946, Barricelli never completed his Ph.D. Recalling the scene in the movie Amadeus in which the Emperor of Austria commends Mozart’s performance, save for there being “too many notes,” Barricelli’s thesis committee directed him to slash the paper to a tenth of the size, or else it would not accept the work. Rather than capitulate, Barricelli forfeited the degree.

Barricelli began modeling biological phenomena on paper, but his calculations were slow and limited. He applied to study in the United States as a Fulbright fellow, where he could work with the IAS machine. As he wrote on his original travel grant submission in 1951, he sought “to perform numerical experiments by means of great calculating machines,” in order to clarify, through mathematics, “the first stages of evolution of a species.” He also wished to mingle with great minds—“to communicate with American statisticians and evolution-theorists.” By then he had published papers on statistics and genetics, and had taught Einstein’s theory of relativity. In his application photo, he sports a pyramidal moustache, hair brushed to the back of his elliptic head, and hooded, downturned eyes. At the time of his application, he was a 39-year-old assistant professor at the University of Oslo.

Although the program initially rejected him due to a visa issue, in early 1953 Barricelli arrived at the Institute for Advanced Study as a visiting member. “I hope that you will be finding Mr. Baricelli [sic] an interesting person to talk with,” wrote Ragnar Frisch, a colleague of Barricelli’s who would later win the first Nobel Prize in Economics, in a letter to John von Neumann, a mathematician at IAS, who helped devise the institute’s groundbreaking computer. “He is not very systematic always in his exposition,” Frisch continued, “but he does have interesting ideas.”

[Image] PSYCHEDELIC BARRICELLI: In this recreation of a Barricelli experiment, NYU associate professor Alexander Galloway has added color to show the gene groups more clearly. Each swatch of color signals a different organism. Borders between the color fields represent turbulence as genes bounce off and meld with others, symbolizing Barricelli’s symbiogenesis. (Courtesy Alexander Galloway.)

Centered above Barricelli’s first computer logbook entry at the Institute for Advanced Study, in handwritten pencil script dated March 3, 1953, is the title “Symbiogenesis problem.” This was his theory of proto-genes, virus-like organisms that teamed up to become complex organisms: first chromosomes, then cellular organs, onward to cellular organisms and, ultimately, other species. Like parasites seeking a host, these proto-genes joined together, according to Barricelli, and through their mutual aid and dependency, originated life as we know it.

Standard neo-Darwinian doctrine maintained that natural selection was the main means by which species formed. Slight variations and mutations in genes combined with competition led to gradual evolutionary change. But Barricelli disagreed. He pictured nimbler genes acting as a collective, cooperative society working together toward becoming species. Darwin’s theory, he concluded, was inadequate. “This theory does not answer our question,” he wrote in 1954, “it does not say why living organisms exist.”

Barricelli coded his numerical organisms on the IAS machine in order to prove his case. “It is very easy to fabricate or simply define entities with the ability to reproduce themselves, e.g., within the realm of arithmetic,” he wrote.

The early computer looked sort of like a mix between a loom and an internal combustion engine. Lining the middle region were 40 Williams cathode ray tubes, which served as the machine’s memory. Within each tube, a beam of electrons (the cathode ray) bombarded one end, creating a 32-by-32 grid of points, each consisting of a slight variation in electrical charge. There were five kilobytes of memory total stored in the machine. Not much by today’s standards, but back then it was an arsenal.
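(The arithmetic behind that figure: each tube’s 32-by-32 grid stores 1,024 bits, so 40 tubes hold 40 × 1,024 = 40,960 bits, or 5,120 bytes: the five kilobytes quoted above.)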

Barricelli saw his computer organisms as a blueprint of life—on this planet and any others.

Inside the device, Barricelli programmed steadily mutable worlds each with rows of 512 “genes,” represented by integers ranging from negative to positive 18. As the computer cycled through hundreds and thousands of generations, persistent groupings of genes would emerge, which Barricelli deemed organisms. The trick was to tweak his manmade laws of nature—“norms,” as he called them—which governed the universe and its entities just so. He had to maintain these ecosystems on the brink of pandemonium and stasis. Too much chaos and his beasts would unravel into a disorganized shamble; too little and they would homogenize. The sweet spot in the middle, however, sustained life-like processes.
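Barricelli’s actual norms varied from experiment to experiment and were more intricate than anything that fits here. Still, the shape of the setup can be sketched under one simplified, assumed rule (a gene of value n tries to copy itself n cells away, and collisions trigger mutation), with the constants and the mutation rule invented for illustration:

    import random

    SIZE = 512               # cells per universe, echoing Barricelli's rows of 512
    GENES = range(-18, 19)   # integers from -18 to +18; 0 marks an empty cell

    def generation(world):
        """One pass of the assumed norm: every nonzero gene n tries to
        reproduce n cells away (negative n shifts left). An empty target
        is filled; a conflicting occupant mutates at random."""
        nxt = list(world)
        for i, n in enumerate(world):
            if n == 0:
                continue
            j = (i + n) % SIZE
            if nxt[j] == 0:
                nxt[j] = n                     # unobstructed reproduction
            elif nxt[j] != n:
                nxt[j] = random.choice(GENES)  # collision triggers mutation
        return nxt

    world = [random.choice(GENES) for _ in range(SIZE)]
    for _ in range(100):
        world = generation(world)
    # Persistent runs of identical or co-operating numbers are the
    # "organisms"; whether any survive depends entirely on the norm.
    print(world[:40])

Tuning that collision rule is precisely the balancing act described next: make it too destructive and the world unravels into disorganised shamble; too gentle and it homogenises.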

Barricelli’s balancing act was not always easygoing. His first trials were riddled with pests: primitive, often single numeric genes invaded the space and gobbled their neighbors. Typically, he was only able to witness a couple of hereditary changes, or a handful at best, before the world unwound. To create lasting evolutionary processes, he needed to handicap these pests’ ability to rapidly reproduce. By the time he returned to the Institute in 1954 to begin a second round of experiments, Barricelli made some critical changes. First, he capped the proliferation of the pests to once per generation. That constraint allowed his numerical organisms enough leeway to outpace the pests. Second, he began employing different norms to different sections of his universes. That forced his numerical organisms always to adapt.

Even in the earlier universes, Barricelli realized that mutation and natural selection alone were insufficient to account for the genesis of species. In fact, most single mutations were harmful. “The majority of the new varieties which have shown the ability to expand are a result of crossing-phenomena and not of mutations, although mutations (especially injurious mutations) have been much more frequent than hereditary changes by crossing in the experiments performed,” he wrote.

When an organism became maximally fit for an environment, the slightest variation would only weaken it. In such cases, it took at least two modifications, effected by a cross-fertilization, to give the numerical organism any chance of improvement. This indicated to Barricelli that symbioses, gene crossing, and “a primitive form of sexual reproduction,” were essential to the emergence of life.

“Barricelli immediately figured out that random mutation wasn’t the important thing; in his first experiment he figured out that the important thing was recombination and sex,” Dyson says. “He figured out right away what took other people much longer to figure out.” Indeed, Barricelli’s theory of symbiogenesis can be seen as anticipating the work of independent-thinking biologist Lynn Margulis, who in the 1960s showed that it was not necessarily genetic mutations over generations, but symbiosis, notably of bacteria, that produced new cell lineages.

Barricelli saw his computer organisms as a blueprint of life—on this planet and any others. “The question whether one type of symbio-organism is developed in the memory of a digital computer while another type is developed in a chemical laboratory or by a natural process on some planet or satellite does not add anything fundamental to this difference,” he wrote. A month after Barricelli began his experiments on the IAS machine, Crick and Watson announced the shape of DNA as a double helix. But learning about the shape of biological life didn’t put a dent in Barricelli’s conviction that he had captured the mechanics of life on a computer. Let Watson and Crick call DNA a double helix. Barricelli called it “molecule-shaped numbers.”


What buried Barricelli in obscurity is something of a mystery. “Being uncompromising in his opinions and not a team player,” says Dyson, no doubt led to Barricelli’s “isolation from the academic mainstream.” Dyson also suspects Barricelli and the indomitable Hungarian mathematician von Neumann, an influential leader at the Institute of Advanced Study, didn’t hit it off. Von Neumann appears to have ignored Barricelli. “That was sort of fatal because everybody looked to von Neumann as the grandfather of self-replicating machines.”

Ever so slowly, though, Barricelli is gaining recognition. That stems in part from another of Barricelli’s remarkable developments; certainly one of his most beautiful. He didn’t rest with creating a universe of numerical organisms; he converted his organisms into images. His computer tallies of 1s and 0s would then self-organize into visual grids of exquisite variety and texture. According to Alexander Galloway, associate professor in the department of media, culture, and communication at New York University, a finished Barricelli “image yielded a snapshot of evolutionary time.”

When Barricelli printed sections of his digitized universes, they were dazzling. To modern eyes they might look like satellite imagery of an alien geography: chaotic oceans, stratigraphic outcrops, and the contours of a single stream running down the center fold, fanning into a delta at the patchwork’s bottom. “Somebody needs to do a museum show and show this stuff because they’re outrageous,” Galloway says.

Barricelli was an uncompromising oddball who teetered between madcap and mastermind.

Today, Galloway, a member of Barricelli’s small but growing cadre of boosters, has recreated the images. Following methods described by Barricelli in one of his papers, Galloway has coded an applet using the computer language Processing to revive Barricelli’s numerical organisms—with slight variation. While Barricelli encoded his numbers as eight-unit-long proto-pixels, Galloway condensed each to a single color-coded cell. By collapsing each number into a single pixel, Galloway has been able to fit eight times as many generations in the frame. These revitalized mosaics look like psychedelic cross-sections of the fossil record. Each swatch of color represents an organism, and when one color field bumps up against another one, that’s where cross-fertilization takes place.

“You can see these kinds of points of turbulence where the one color meets another color,” Galloway says, showing off the images on a computer in his office. “That’s a point where a number would be—or a gene would be—sort of jumping from one organism to another.” Here, in other words, is artificial life—Barricelli’s symbiogenesis—frozen in amber. And cyan and lavender and teal and lime and fuchsia.
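Galloway’s applet was written in Processing; the condensation he describes (one colour-coded pixel per gene, one row per generation) can be roughed out in Python with the Pillow imaging library. The colour mapping below is invented for illustration, and the sketch assumes the generation() and world values from the toy universe sketched earlier:

    from PIL import Image  # Pillow imaging library

    def render(history, path="universe.png"):
        """One pixel per gene, one row per generation. Persistent colour
        fields read as organisms; turbulent borders mark gene-crossing."""
        img = Image.new("RGB", (len(history[0]), len(history)))
        for y, row in enumerate(history):
            for x, n in enumerate(row):
                if n == 0:
                    colour = (0, 0, 0)   # empty cell stays black
                else:
                    v = (n + 18) * 7     # map -18..18 onto 0..252
                    colour = (v, 252 - v, 128)
                img.putpixel((x, y), colour)
        img.save(path)

    # Usage, reusing the toy universe from the earlier sketch:
    # history = []
    # for _ in range(400):
    #     history.append(world)
    #     world = generation(world)
    # render(history)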

Galloway is not the only one to be struck by the beauty of Barricelli’s computer-generated digital images. As a doctoral student, Pixar cofounder Smith became familiar with Barricelli’s work while researching the history of cellular automata for his dissertation. When he came across Barricelli’s prints he was astonished. “It was remarkable to me that with such crude computing facilities in the early 50s, he was able to be making pictures,” Smith says. “I guess in a sense you can say that Barricelli got me thinking about computer animation before I thought about computer animation. I never thought about it that way, but that’s essentially what it was.”

Cyberspace now swells with Barricelli’s progeny. Self-replicating strings of arithmetic live out their days in the digital wilds, increasingly independent of our tampering. The fittest bits survive and propagate. Researchers continue to model reduced, pared-down versions of life artificially, while the real world bursts with Boolean beings. Scientists like Venter conjure synthetic organisms, assisted by computer design. Swarms of autonomous codes thrive, expire, evolve, and mutate underneath our fingertips daily. “All kinds of self-reproducing codes are out there doing things,” Dyson says. In our digital lives, we are immersed in Barricelli’s world.

]]>
Fri, 20 Jun 2014 06:08:03 -0700 http://nautil.us/issue/14/mutation/meet-the-father-of-digital-life
<![CDATA[How the Movies of Tomorrow Will Play With Your Mind - Pacific Standard: The Science of Society]]> http://www.psmag.com/navigation/books-and-culture/movies-tomorrow-will-play-mind-79245/

Since the dawn of cinema, the cut has been one of the most powerful tools in a director’s kit. If we see a man walk through a door and turn his head to the right, and the scene immediately cuts to an image of an apple on a side table, our brain fills in the gap, and we understand that this man is looking at the apple. That’s because the brain has a natural propensity for smoothing over interruptions of stimuli. Whenever we blink, our eyes close for up to half a second, but we don’t notice the breaks. We also make rapid eye movements called saccades several times a second as we adjust to a constantly shifting environment, and we lose access to visual information until the eye movement settles down. This may be why we generally don’t notice cuts in movies—they work like saccades.

But neuroscientist Sergei Gepshtein dreams of a new visual vocabulary for cinema—one that relies much less on the cut, or perhaps even eliminates the cut altogether. “The film industry rests on a narrow selection of possibilities that got discovered early on and then got canonized by the force of inertia and entrenched by filmmaking technology and habit,” he says.

Gepshtein sees some of the most disagreeable traits of entrenched movie technology in today’s blockbuster action movies. In these films, shots last only seconds, and there are regular barrages of rapid-fire cuts. Think Transformers, Battleship, the Bourne trilogy, or Pacific Rim. As Scott Derrickson, director of recent thrillers like Sinister and The Day the Earth Stood Still, laments, “The story is happening to you, but you are not interacting with the story.”

But Gepshtein thinks he can offer an alternative to this trend—and it doesn’t necessarily involve long takes in the style of directors like Alfonso Cuarón, who recently snagged a directing Oscar for Gravity. Instead, it involves harnessing the modern science of vision.

In December, I paid a visit to Gepshtein at his workplace, the Systems Neurobiology Laboratories of the Salk Institute in La Jolla, California, its sleek whi

]]>
Wed, 07 May 2014 13:33:58 -0700 http://www.psmag.com/navigation/books-and-culture/movies-tomorrow-will-play-mind-79245/
<![CDATA[Four Notes Towards Post-Digital Propaganda | post-digital-research]]> http://post-digital.projects.cavi.dk/?p=475

“Propaganda is called upon to solve problems created by technology, to play on maladjustments and to integrate the individual into a technological world” (Ellul xvii).

How might future research into digital culture approach a purported “post-digital” age? How might this be understood?

1.

A problem comes from the discourse of ‘the digital’ itself: a moniker which points towards units of Base-2 arbitrary configuration, impersonal architectures of code, massive extensions of modern communication and ruptures in post-modern identity. Terms are messy, and it has never been easy to establish a ‘post’ from something, when pre-discourse definitions continue to hang in the air. As Florian Cramer has articulated so well, ‘post-digital’ is something of a loose, ‘hedge your bets’ term, denoting a general tendency to criticise the digital revolution as a modern innovation (Cramer).

Perhaps it might be aligned with what some have dubbed “solutionism” (Morozov) or “computationalism” (Berry 129; Golumbia 8): the former critiquing a Silicon Valley-led ideology oriented towards solving liberalised problems through efficient computerised means; the latter establishing the notion (and critique thereof) that the mind, and everything associated with it, is inherently computable. In both cases, digital technology is no longer just a business that privatises information, but the business of extending efficient, innovative logic to all corners of society and human knowledge, condemning everything else through a cultural logic of efficiency.

In fact, there is a good reason why ‘digital’ might as well be a synonym for ‘efficiency’. Before any consideration is assigned to digital media objects (i.e. platforms, operating systems, networks), consider the inception of ‘the digital’ as such: that is, information theory. Where information had been a loose, shabby, inefficient method of vagueness specific to various mediums of communication, Claude Shannon compressed all forms of communication into a universal system with absolute mathematical precision (Shannon). Once information became digital, the conceptual leap of determined symbolic logic was set into motion, and with it the ‘digital’ became synonymous with an ideology of effectivity. No longer would miscommunication be subject to human finitude, nor to matters of distance and time, but only to the limits of entropy and the matter of automating messages through the support of alternating ‘true’ or ‘false’ relay systems.
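Shannon’s ‘absolute mathematical precision’ reduces to a single measure, entropy, which fits in a few lines. A minimal sketch (standard library only; the sample messages are arbitrary) of the average number of bits per symbol that any encoding of a message must pay:

    from collections import Counter
    from math import log2

    def entropy(message):
        """Shannon entropy in bits per symbol: H = -sum(p * log2(p))
        over the frequency p of each distinct symbol."""
        counts = Counter(message)
        total = len(message)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    print(entropy("aaaaaaaa"))  # 0.0 bits: pure redundancy, nothing to transmit
    print(entropy("abababab"))  # 1.0 bit per symbol
    print(entropy("abcdefgh"))  # 3.0 bits per symbol

The efficiency is literal: entropy is the hard floor on compression, the exact point past which no cleverness can squeeze a message any further.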

However, it would be quite difficult to envisage any ‘post-computational’ break from such discourses – and with good reason: Shannon’s breakthrough was only systematically effective through the logic of computation. So the old missed encounter goes: Shannon presupposed Alan Turing’s mathematical idea of computation to transmit digital information, and Turing presupposed Shannon’s information theory to understand what his Universal Turing Machines were actually transmitting. The basic theories of both have not changed, but the materials affording greater processing power, extensive server infrastructure and larger storage space have simply increased the means for these ideas to proliferate, irrespective of what Turing and Shannon actually thought of them (some historians even speculate that Turing may have made the link between information and entropy two years before Bell Labs did) (Good).

Thus a ‘post-digital’ reference point might encompass the historical acknowledgment of Shannon’s digital efficiency and Turing’s logic, but, by the same measure, open up a space for critical reflection on how such efficiencies have transformed not only work, life and culture but also artistic praxis and aesthetics. This is not to say that digital culture is reducibly predicated on efforts made in computer science, but instead fully acknowledges these structures and accounts for how ideologies propagate reactionary attitudes and beliefs within them, whilst restricting other alternatives which do not fit their ‘vision’. Hence, the post-digital ‘task’ set for us nowadays might consist in critiquing digital efficiency and how it has come to work against commonality, despite transforming the majority of Western infrastructure in its wake.

The purpose of these notes is to outline how computation has imparted an unwarranted effect of totalised efficiency, and to label this effect with the type of description it deserves: propaganda. The fact that Shannon and Turing had multiple lunches together at Bell Labs in 1943, held conversations and exchanged ideas, but did not exchange detailed methods of cryptanalysis (Price & Shannon), provides a nice contextual allegory for how digital informatics strategies fail to be transparent.

But in saying this, I do not mean that companies only use digital networks for propagative means (although that happens), but that the very means of computing a real concrete function is constitutively propagative. In this sense, propaganda resembles a post-digital understanding of what it means to be integrated into an ecology of efficiency, and of how technical artefacts are literally enacted as propagative decisions. Digital information often deceives us into accepting its transparency, and into holding it to that account: yet in reality it does the complete opposite, with no given range of judgements available to distinguish manipulation from education, or persuasion from smear. It is the procedural act of interacting with someone else’s automated conceptual principles, embedding pre-determined decisions which not only generate but pre-determine one’s ability to make choices about such decisions, like propaganda.

This might consist in moving from ideological definitions of false consciousness, as an epistemological limit to knowing alternatives within thought, to engaging with real programmable systems which embed such limits concretely, withholding the means to transform them. In other words, propaganda incorporates how ‘decisional structures’ structure other decisions, either conceptually or systematically.

2.

Two years before Shannon’s famous Masters thesis, Turing published what would be a theoretical basis for computation in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” The focus of the paper was to establish the idea of computation within a formal system of logic, which when automated would solve particular mathematical problems put into function (Turing, An Application). What is not necessarily taken into account is the mathematical context to that idea: for the foundations of mathematics were already precarious, way before Turing outlined anything in 1936. Contra the efficiency of the digital, this is a precariousness built-in to computation from its very inception: the precariousness of solving all problems in mathematics.

The key word of that paper, its key focus, was on the Entscheidungsproblem, or decision problem. Originating from David Hilbert’s mathematical school of formalism, ‘decision’ means something more rigorous than the sorts of decisions in daily life. It really means a ‘proof theory’, or how analytic problems in number theory and geometry could be formalised, and thus efficiently solved (Hilbert 3). Solving a theorem is simply finding a provable ‘winning position’ in a game. Similar to Shannon, ‘decision’ is what happens when an automated system of function is constructed in such a sufficiently complex way, that an algorithm can always ‘decide’ a binary, yes or no answer to a mathematical problem, when given an arbitrary input, in a sufficient amount of time. It does not require ingenuity, intuition or heuristic gambles, just a combination of simple consistent formal rules and a careful avoidance of contradiction.
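‘Decide’ can be made concrete with a genuinely decidable case. Propositional logic admits exactly the kind of effective procedure Hilbert wanted: brute-force truth tables always terminate with a yes or no. The sketch below is illustrative only, and its encoding of formulas as Python functions is an invented convenience:

    from itertools import product

    def is_tautology(formula, variables):
        """Decide whether a propositional formula is a theorem by
        checking every truth assignment: an effective procedure that
        always halts with a yes/no answer."""
        return all(
            formula(dict(zip(variables, values)))
            for values in product([False, True], repeat=len(variables))
        )

    # (p -> q) or (q -> p) is a classical theorem:
    print(is_tautology(
        lambda v: (not v["p"] or v["q"]) or (not v["q"] or v["p"]),
        ["p", "q"],
    ))  # True

Hilbert’s hope was that arithmetic and predicate logic would admit a procedure of the same kind; Turing’s 1936 result, discussed below, is that they cannot.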

The two key words in that formulation are ‘always’ and ‘decide’. This was the progressive end-game of twentieth-century mathematicians who, like Hilbert, sought a simple totalising conceptual system to decide every mathematical problem and work towards absolute knowledge. All Turing had to do was make explicit Hilbert’s implicit computational treatment of formal rules, manipulate symbol strings and automate them using an ‘effective’ or “systematic method” (Turing, Solvable and Unsolvable Problems 584) encoded into a machine. This is what Turing’s thesis meant (discovered independently of Alonzo Church’s equivalent thesis (Church)): any problem solvable by a systematic algorithm can be computed by a Turing machine (Turing, An Application), or in Robin Gandy’s words, “[e]very effectively calculable function is a computable function” (Gandy).
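What a ‘systematic method’ amounts to can be sketched in a few lines: a finite table of (state, symbol) instructions applied mechanically to a tape, with no step left to judgement. The toy machine below, which flips the bits of its input, is an illustrative assumption of mine rather than one of Turing’s own examples:

    def run_turing_machine(tape, table, state="start", blank="_"):
        cells = dict(enumerate(tape))  # sparse tape, unbounded in both directions
        head = 0
        while state != "halt":
            symbol = cells.get(head, blank)
            write, move, state = table[(state, symbol)]  # the whole 'decision'
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Instruction table: scan right, flipping 0 <-> 1, halt at the first blank.
    flip_bits = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine("10110", flip_bits))  # prints 01001

Everything the machine ‘decides’ is fixed in advance by the instruction table; nothing is left open to interpretation.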

Thus effective procedures decide problems, and they resolve puzzles by providing winning positions (like theorems) in the game of functional rules and formal symbols. In Turing’s words, “a systematic procedure is just a puzzle in which there is never more than one possible move in any of the positions which arise and in which some significance is attached to the final result” (Turing, Solvable and Unsolvable Problems 590). The significance, or the winning position, becomes the crux of the matter for the decision: what puzzles or problems are to be decided? This is what formalism attempted to do: encode everything through the regime of formalised efficiency, so that all mathematically inefficient problems are, in principle, ready to be solved. Programs are simply proofs: if it could be demonstrated mathematically, it could be automated.

In 1936, Turing had shown that certain effective procedures could simulate the functional decisions of all other effective procedures: the Universal Turing Machine. Ten years later, Turing and John von Neumann would independently show how physical general-purpose computers offered the same thing, and from that moment on, efficient digital decisions manifested themselves in the cultural application of physical materials. Before Shannon’s information theory offered the precision of transmitting information, Hilbert and Turing developed the structure of its transmission in the underlying regime of formal decision.

Yet there was also a non-computational importance here, for Turing was equally fascinated by what decisions couldn’t compute. His thesis was quite precise, so as to elucidate that if a mathematical problem could not be proved, a computer was not of any use. In fact, the entire focus of his 1936 paper, often neglected by Silicon Valley cohorts, was to show that Hilbert’s particular decision problem could not be solved. Unlike Hilbert, Turing was not interested in using computation to solve every problem, but in computation as a curious endeavour for surprising intuitive behaviour. Most important of all, Turing’s halting, or printing, problem was influential precisely because it was undecidable; a decision problem which couldn’t be decided.

We can all picture the halting problem, even obliquely. Picture the frustrated programmer or mathematician staring at their screen, waiting to know when an algorithm will either halt and spit out a result, or provide no answer. The computer itself has already determined the answer for us; the programmer just has to know when to give up. But this is a myth, inherited with a bias towards human knowledge, and a demented understanding of machines as infinite calculating engines rather than concrete entities of decision. For reasons that escape word space, Turing didn’t understand the halting problem in this way: instead he understood it as a contradictory example of computational decisions failing to decide on each other, on the account that there could never be one totalising decision or effective procedure. There is no guaranteed effective procedure to decide on all the others, and any attempt to build one (or invest in a view which might help build one) either has too much investment in absolute formal reason, or ends up with ineffective procedures.
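The contradiction Turing exploited can be sketched directly, on the explicit assumption that the decider named ‘halts’ below is hypothetical and can never actually be implemented:

    def halts(program, argument):
        """Hypothetical total decider: True iff program(argument) would halt."""
        raise NotImplementedError("Turing's point: no such effective procedure exists.")

    def contrary(program):
        # Do the opposite of whatever the supposed decider predicts
        # about a program applied to itself.
        if halts(program, program):
            while True:        # predicted to halt, so loop forever
                pass
        return "halted"        # predicted to loop forever, so halt at once

    # Applying contrary to itself is the undecidable case: if contrary(contrary)
    # halts, it loops; if it loops, it halts. One decision cannot decide them all.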

Undecidable computation might be looked at as a dystopian counterpart to the efficiency of Shannon’s ‘digital information’ theory. A base-2 binary system of information resembles one of two possible states, whereby a system can communicate with one digit only in virtue of the fact that there is one other digit alternative to it. Yet the perfect transmission of that information is only available to a system which can ‘decide’ on the digits in question, and establish a proof to calculate a success rate. If there is no mathematical proof to decide a problem, then transmitting information becomes problematic for establishing a solution.
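The dependence of the binary digit on its alternative can be put in Shannon’s own base-2 terms: a two-state source carries information only insofar as both states remain live possibilities. A minimal sketch, where the probabilities are illustrative:

    from math import log2

    def binary_entropy(p: float) -> float:
        """Bits per symbol of a two-state source emitting one digit with probability p."""
        if p in (0.0, 1.0):
            return 0.0         # no alternative left, hence no information
        return -p * log2(p) - (1 - p) * log2(1 - p)

    print(binary_entropy(0.5))   # 1.0   -- both states equally live: one full bit
    print(binary_entropy(0.99))  # ~0.08 -- the outcome all but decided in advance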

3.

What has become clear is that our world is no longer accountable to human decision alone. Decisions are no longer confined within human borders, and ‘culture’ is no longer simply guided by a collective whole of social human decisions. Nor is it reducible to one harmonious ‘natural’ collective decision which prompts and pre-empts everything else. Instead we seem to exist in an ecology of decisions: or better yet, decisional ecologies. Before there was ever the networked protocol (Galloway), there was the computational decision. Decision ecologies are already set up before we enter the world, implicitly coterminous with our lives: explicitly determining a quantified or bureaucratic landscape upon which an individual has limited manoeuvrability.

Decisions are not just digital; they can be as continuous as computers can be: yet decisions are at their most efficient when digitally transferred. Decisions are everywhere and in everything. Look around. We are constantly told by governments and states that they are making tough decisions in the face of austerity. CEOs and directors make tough decisions for the future of their companies, and ‘great’ leaders are revered for being ‘great decisive leaders’: not just making decisions quickly and effectively, but also settling issues and producing definite results.

Even the word ‘decide’ comes from the Latin ‘decidere’, meaning to determine something and ‘to cut off’. Algorithms in financial trading know not of value, but of decision: whether something is marked by profit or loss. Drones know not of human ambiguity, but can only decide between kill and ignore, cutting off anything in-between. Constructing a system which decides between one of two digital values, even repeatedly, means cutting off and excluding all other possible variables, leaving a final result at the end of the encoded message. Making a decision, or building a system to decide a particular ideal or judgement, must force other alternatives outside of it. Decisions are always-already embedded into the framework of digital action, always already deciding what is to be done, how it can be done or what is threatening to be done. It would make little sense to suggest that these entities ‘make decisions’ or ‘have decisions’; it would be better to say that they are decisions, and that ecologies are constitutively constructed by them.
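The ‘cutting off’ is visible in any thresholded decision: a continuous, ambiguous reading is mapped onto one of two values, and everything in between is discarded. A minimal sketch, where the threshold of 0.5 and the two labels are my own illustrative assumptions:

    def decide(score: float, threshold: float = 0.5) -> str:
        # Whatever nuance the score carried is cut away at this line.
        return "act" if score >= threshold else "ignore"

    readings = [0.49, 0.50, 0.51]         # nearly indistinguishable inputs
    print([decide(r) for r in readings])  # ['ignore', 'act', 'act']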

The importance of neo-liberal digital transmissions is not that they are innovative, or worthy of a zeitgeist break: but that they demonstrably decide problems whose predominant significance is beneficial to individual efficiency and the accumulation of capital. Digital efficiency is simply about the expansion of automated decisions, and about what sort of formalised significances must be propagated to solve social and economic problems: which creates new problems in a vicious circle.

The question can no longer simply be ‘who decides?’, but now, ‘what decides?’ Is it the cafe menu board, the dinner party etiquette, the NASDAQ share price, Google PageRank, railway network delays, unmanned combat drones, the newspaper crossword, the JavaScript regular expression or the differential calculus? It’s not quite right to say that algorithms rule the world, whether in algo-trading or in data capture; rather, there is the uncomfortable realisation that real entities are built to determine provable outcomes time and time again: most notably ones for accumulating profit and extracting revenue from multiple resources.

One pertinent example: consider George Dantzig’s simplex algorithm. This effective procedure (whose origins lie in multidimensional geometry) can always decide solutions for the large-scale optimisation problems which continually affect multi-national corporations. The simplex algorithm’s proliferation and effectiveness have been critical since its first commercial application in 1952, when Abraham Charnes and William Cooper used it to decide how best to blend four different petroleum products at the Gulf Oil Company (Elwes 35; Gass & Assad 79). Since then the simplex algorithm has had years of successful commercial use, deciding almost everything from bus timetables and work shift patterns to trade shares and Amazon warehouse configurations. According to the optimisation specialist Jacek Gondzio, the simplex algorithm runs at “tens, probably hundreds of thousands of calls every minute” (35), always deciding the most efficient method of extracting optimisation.
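The shape of the problems the simplex method decides can be sketched with SciPy’s linear-programming interface. The blending data below is invented for illustration; it is not the 1952 Gulf Oil formulation, and SciPy’s default solver stands in here for Dantzig’s original simplex:

    from scipy.optimize import linprog

    # Minimise the cost of blending two stocks, at 3.0 and 2.0 per barrel.
    c = [3.0, 2.0]
    # Constraints (rewritten as <=): at least 10 barrels in total,
    # and at least 4 barrels of stock A.
    A_ub = [[-1.0, -1.0],   # -(xA + xB) <= -10  i.e.  xA + xB >= 10
            [-1.0,  0.0]]   # -xA        <= -4   i.e.  xA >= 4
    b_ub = [-10.0, -4.0]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(result.x, result.fun)  # optimal blend [4. 6.] at cost 24.0: always decided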

In contemporary times, nearly all decision ecologies work in this way, accompanying and facilitating neo-liberal methods of self-regulation and processing all resources through a standardised efficiency: from bureaucratic methods of formal standardisation, and banal forms ready to be analysed by one central system, to big-data initiatives and simple procedural methods of measurement and calculation. The technique of decision is a propagative method of embedding knowledge, optimisation and standardisation techniques in order to solve problems, and an urge to solve the most unsolvable ones, including us.

Google do not build into their services an option to pay for the privilege of protecting privacy: the entire point of providing a free service which purports to improve daily life is that it primarily benefits the interests of shareholders and extends commercial agendas. James Grimmelmann gave a heavily detailed exposition of Google’s own ‘net neutrality’ algorithms and how biased they happen to be. In short, PageRank does not simply decide relevant results, it decides visitor numbers; he concluded on this note:

With disturbing frequency, though, websites are not users’ friends. Sometimes they are, but often, the websites want visitors, and will be willing to do what it takes to grab them (Grimmelmann 458).
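The decision Grimmelmann describes can be seen in miniature: PageRank does not weigh ‘relevance’ as a judgement, it iterates a link structure towards a stationary distribution and takes the fixed point as the verdict. The four-page web and the damping factor of 0.85 below are illustrative assumptions:

    # A toy web: each page lists the pages it links to.
    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}

    damping = 0.85
    for _ in range(50):  # power iteration towards the stationary distribution
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank

    print(sorted(rank.items(), key=lambda kv: -kv[1]))  # C 'wins' the visitors

Whatever a page ‘deserves’, the iteration has already decided where the visitors go.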

If the post-digital stands for the self-criticality of digitalisation already underpinning contemporary regimes of digital consumption and production, then its saliency lies in understanding the logic of decision inherent to such regimes. The reality of the post-digital shows that machines remain curiously efficient whether we relish in cynicism or not. Such regimes of standardisation and determined results were already ‘mistakenly built in’ to the theories which developed digital methods and means, irrespective of what computers can or cannot compute.

4.

Why then should such post-digital actors be understood as instantiations of propaganda? The familiarity of propaganda is manifestly evident in religious and political acts of ideological persuasion: brainwashing, war activity, political spin, mind control techniques, subliminal messages, political campaigns, cartoons, belief indoctrination, media bias, advertising or news reports. A definition of propaganda might follow from all of these examples: namely, the systematic social indoctrination of biased information that persuades the masses to take action on something which is neither beneficial to them nor in their best interests. Or, as Peter Kenez writes, propaganda is “the attempt to transmit social and political values in the hope of affecting people’s thinking, emotions, and thereby behaviour” (Kenez 4). Following Stanley B. Cunningham’s watered-down definition, propaganda might also denote a helpful and pragmatic “shorthand statement about the quality of information transmitted and received in the twentieth century” (Cunningham 3).

But propaganda isn’t as clear-cut as this general definition makes out: in fact, what makes propaganda studies such a provoking topic is that nearly every scholar agrees that no stable definition exists. Propaganda moves beyond simple ‘manipulation’ and ‘lies’, or the derogatory, jingoistic representation of an unsubtle mood. Propaganda is as much about the paradox of constructing truth, and the irrational spread of emotional pleas, as it is about endorsing rational reason. As the master propagandist William J. Daugherty wrote:

It is a complete delusion to think of the brilliant propagandist as being a professional liar. The brilliant propagandist […] tells the truth, or that selection of the truth which is requisite for his purpose, and tells it in such a way that the recipient does not think that he is receiving any propaganda…. (Daugherty 39).

Propaganda, like ideology, works by being inherently implicit and social. In the same way that post-ideology apologists ignore their symptom, propaganda is also ignored. It isn’t to be taken as a shadowy fringe activity, blown apart by the democratising fairy-dust of ‘the Internet’. As many others have noted, the purported ‘decentralising’ power of online networks offers new methods for propagative techniques, or ‘spinternet’ strategies, evident in China (Brady). Iran’s recent investment in video game technology makes sense only when you discover that 70% of Iran’s population are under 30 years of age, underscoring a suitably contemporary method of dissemination. Similarly, in 2011 the New York City video game developer Kuma Games was mired in controversy when it was discovered that an alleged CIA agent, Amir Mirza Hekmati, had been recruited to make an episodic video game series intending to “change the public opinion’s mindset in the Middle East” (Tehran Times). The game in question, Kuma\War (2006–2011), was a free-to-play first-person shooter series, delivered in episodic chunks, the format of which attempted to simulate biased re-enactments of real-life conflicts shortly after they reached public consciousness.

Despite his unremarkable leanings towards Christian realism, Jacques Ellul famously updated propaganda’s definition as the end product of what he previously lamented as ‘technique’. Instead of viewing propaganda as a highly organised systematic strategy for extending the ideologies of peaceful warfare, he understood it as a general social phenomenon in contemporary society.

Ellul outlined two types: political and sociological propaganda. Political propaganda involves governmental and administrative techniques which intend to directly change the political beliefs of an intended audience. By contrast, sociological propaganda is the implicit unification of involuntary public behaviour which creates images, aesthetics, problems and stereotypes, the purpose of which is neither explicitly direct nor overtly militaristic. Ellul argues that sociological propaganda exists “in advertising, in the movies (commercial and non-political films), in technology in general, in education, in the Reader’s Digest; and in social service, case work, and settlement houses” (Ellul 64). It is linked to what Ellul called “pre” or “sub-propaganda”: that is, an imperceptible persuasion, silently operating within one’s “style of life” or permissible attitude (63). Faintly anticipating Louis Althusser’s Ideological State Apparatuses (Althusser 182) by nearly ten years, Ellul defines it as “the penetration of an ideology by means of its sociological context” (63). Sociological propaganda is inadequate for decisive action, paving the way for political propaganda – its strengthened explicit cousin – once the former’s implicitness needs to be transformed into the latter’s explicitness.

In a post-digital world, such implicitness no longer gathers wartime spirits, but instead propagates a neo-liberal way of life that is individualistic, wealth-driven and opinionated. Ellul’s most powerful assertion is that ‘facts’ and ‘education’ are part and parcel of the sociological propagative effect: nearly everyone feels a compelling need to be opinionated, and we all deem ourselves capable of judging what decisions should be made, without first considering the implicit landscape from which these judgements take place. One need only think of the implicit digital landscape of Twitter: the archetype for self-promotion and snippets of opinions and arguments – all taking place within Ellul’s sub-propaganda of data collection and concealment. Such methods, he warns, will have “solved the problem of man” (xviii).

But information is of relevance here, and propaganda is only effective within a social community when it offers the means to solve problems using the communicative purview of information:

Thus, information not only provides the basis for propaganda but gives propaganda the means to operate; for information actually generates the problems that propaganda exploits and for which it pretends to offer solutions. In fact, no propaganda can work until the moment when a set of facts has become a problem in the eyes of those who constitute public opinion (114).

]]>
Wed, 11 Dec 2013 15:42:45 -0800 http://post-digital.projects.cavi.dk/?p=475
<![CDATA[What Is a JPEG? The Invisible Object You See Every Day - Paul Caplan - The Atlantic]]> http://www.theatlantic.com/technology/archive/2013/09/what-is-a-jpeg-the-invisible-object-you-see-every-day/279954/

You're looking at dozens of JPEGs right now. In 2012, the photograph of Barack and Michelle Obama embracing after his re-election was 'liked' over 4 million times. That photo, like the 250 million other images uploaded to Facebook every day is standardized; it is a JPEG-encoded image.

]]>
Thu, 26 Sep 2013 00:49:45 -0700 http://www.theatlantic.com/technology/archive/2013/09/what-is-a-jpeg-the-invisible-object-you-see-every-day/279954/
<![CDATA[Referencing a Tweet in an Academic Paper? Here's an Automatic Citation Generator - Rebecca J. Rosen - The Atlantic]]> http://www.theatlantic.com/technology/archive/2013/09/referencing-a-tweet-in-an-academic-paper-heres-an-automatic-citation-generator/280005/

Say you're writing a paper on Twitter during the 2012 U.S. presidential election. How do you cite all those tweets you'll be referencing? The Modern Language Association (MLA) has an answer to that: a straightforward little formula that ends with "Tweet," which is lovely.

]]>
Thu, 26 Sep 2013 00:49:41 -0700 http://www.theatlantic.com/technology/archive/2013/09/referencing-a-tweet-in-an-academic-paper-heres-an-automatic-citation-generator/280005/
<![CDATA[Digital Curation Resource Guide]]> http://digital-scholarship.org/dcrg/dcrg.htm

Digital curation involves selection and appraisal by creators and archivists; evolving provision of intellectual access; redundant storage; data transformations; and, for some materials, a commitment to long-term preservation. Digital curation is stewards

]]>
Sat, 25 Aug 2012 02:53:00 -0700 http://digital-scholarship.org/dcrg/dcrg.htm