MachineMachine /stream - search for binary
https://machinemachine.net/stream/feed

<![CDATA[All new MiSTer Shareware DOS Pack with new MyMenu Front end!!!!!!]]> https://www.youtube.com/watch?v=ZNWNHwluRzk

Today I am releasing the new AO486 DOS Shareware pack to the public. It includes an all-new DOS frontend developed by BBond007 (https://www.youtube.com/user/binarybond007). The pack is 100% shareware and open source, so it can be shared freely, provides a great base for future packs, and shows off all the new features. There are no commercial products in this release: over 100 games, 30 DOS shareware screensavers, MOD and MIDI music, music players, and shareware DOS applications, all built on the FreeDOS OS.

The Shareware Pack includes over 100 DOS shareware titles that have been tested and configured for the MiSTer AO486 PC core, with ANSI art and gamecards built for each game.

MyMenu features: MyMenu is a DOS frontend designed to let you quickly launch DOS games, applications, scripts, and music.

Launch scripts, exe, bat, or any custom extension that you configure in the MyMenu.ini configuration file. Add any game to C:\Games\My Cool Game Name\ and it will now show in MyMenu automatically. We have tested up to 10,000 games in the list!
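The drop-a-folder convention described above can be illustrated with a short sketch. This is a hypothetical re-implementation for illustration only, not MyMenu's actual source (MyMenu itself is a DOS program); `C:\Games` is simply the documented convention:

```python
import os

GAMES_DIR = r"C:\Games"   # hypothetical path, per the convention described above

def find_games(root):
    """Treat every subdirectory of root as a launchable entry, noting Autorun.bat."""
    entries = []
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.isdir(path):
            autorun = os.path.join(path, "Autorun.bat")
            entries.append((name, autorun if os.path.exists(autorun) else None))
    return entries

if os.path.isdir(GAMES_DIR):  # only runs on a system that actually has C:\Games
    for title, launcher in find_games(GAMES_DIR):
        print(title, "->", launcher or "(no Autorun.bat)")
```

Dropping a new folder into the games directory is enough; no list file has to be edited by hand.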

Other features:
DOS long file name support
Autorun.bat -- autorun any game
Readme.ans -- ANSI readme and gamecard for each game!
ANSI and ASCII art support for browsing ANSI and creating custom gamecards for the interface
ANSI terminal (COM) and (console) support
Quickly set MiSTer core speed and cache options
Screensavers with CGA/VGA support
Music player integration for MOD, MIDI, XM, A2M, and MP3
Terminal support for MidiLink, serial, and BBS connections

Github scripts integration and updates coming!

bbond007's MidiLink: https://github.com/bbond007/MiSTer_MidiLink

The latest release is located at https://github.com/flynnsbit/DOS_Shareware_MyMenu under Releases.

Introduction and History lesson: 00:00
Pack Demonstration: 05:00
MyMenu DOS Interface: 06:00
Autorun.bat and README.ANS Demo: 10:05
Doom Demo: 11:23
Edit Autorun.bat: 12:30
Broken games moved: 12:55
MyMenu ANSi: 13:11
MyMenu Apps/Games/Music/Ansi: 14:00
MyMenu Music and MIDI Demo: 14:23
MyMenu ANSI Art examples: 15:51
MyMenu Quick feature list and readme: 16:21
MyMenu F1 Menu: 17:20
MyMenu MT32-Pi Integration Menu: 17:58
MyMenu Screensavers: 18:21
MyMenu.ini configuration options: 18:45
MyMenu Screensavers config and demo: 19:40
MyMenu Utilities and Memory Management: 22:37
Explosiv! Screensaver Setup: 24:00
MP3s and Internet Radio on MiSTer: 25:14
Download Midilink: 25:41
MP3 Music Tracks and Internet Radio in AO486: 26:00
WHAT IS THIS SONG!!?: 28:06
MP3 songs as music track in DOS games: 28:25
MyMenu MP3 Quicklinks: 29:15
Internet Radio Playlists as Music Track in DOS: 29:31
Internet Radio in DOS - Classic Rock: 30:38
Internet Radio in DOS - Dance: 31:48
MyMenu Color Templates and Themes: 32:20
MiSTer console control of MP3s from batch scripts in DOS: 33:56
DOS Doom w/ Doom Eternal MP3 Soundtrack in DOS Demo scripted: 35:57
DOS Earthworm Jim w/ MP3 Music Playlist: 38:02
DOS SimCity 2000 w/ MP3 Music Playlist: 39:23
Conclusion and Download: 40:00

]]>
Mon, 01 Nov 2021 10:57:07 -0700 https://www.youtube.com/watch?v=ZNWNHwluRzk
<![CDATA[Gen Z Are “Puriteens,” But Not For The Reasons You Think | GQ]]> https://www.gq.com/story/gen-z-puriteens

Young adults generally have less sex these days, but it’s not about morality. Jojo has no moral qualms about casual sex. “The equation of sex and morality is bullshit, as long as everyone’s consenting,” the 23-year-old non-binary, bisexual theatre graduate in New York says.

]]>
Tue, 17 Aug 2021 07:51:38 -0700 https://www.gq.com/story/gen-z-puriteens
<![CDATA[Otherkin Are the Internet’s Punchline. They’re Also Our Future]]> https://www.dailydot.com/irl/otherkin/

Rhia is queer, trans, and nonbinary. They are also otherkin, or an individual who identifies as nonhuman on a non-physical level, according to the Otherkin Wiki.

]]>
Wed, 23 Dec 2020 01:19:44 -0800 https://www.dailydot.com/irl/otherkin/
<![CDATA[Counting the Countless — Real Life]]> https://reallifemag.com/counting-the-countless/

This piece originated as a talk at Seattle University. Well, really it originated as a question from the Seattle Non-Binary Collective, a group of Seattle residents and visitors with genders beyond the binary of “man” and “woman.”

]]>
Tue, 16 Apr 2019 11:46:28 -0700 https://reallifemag.com/counting-the-countless/
<![CDATA[Why this Two Pixel Gap is Among the Most Complicated Things in Super Mario Maker.]]> https://www.youtube.com/watch?v=oQmfbfRWiKU

Since the release of Super Mario Maker the community has found many crazy ways to build levels. We found ways to activate pipes when Mario takes damage, and ways to forbid Mario to jump, to run, or to slow down. We found ways to build binary storage and turn-based combat systems, but there is one super weird, incredibly powerful, and unimaginably complicated Super Mario Maker technique we never discussed in detail before: the giant gap, and POW block memory. So with Super Mario Maker 2 around the corner, it's time for us to tie up some loose ends and finally take a look at what are probably the most complex and weirdest techniques currently possible in Super Mario Maker.


A couple of fantastic giant levels:

[3YMM] Life Without Mystery A2E5-0000-03C2-C05B https://supermariomakerbookmark.nintendo.net/courses/A2E5-0000-03C2-C05B

Rubik’s Stiffest Pocket Cube A09B-0000-036E-41BE https://supermariomakerbookmark.nintendo.net/courses/A09B-0000-036E-41BE

The Tower of Hanoi for n=4 7A55-0000-0354-526E https://supermariomakerbookmark.nintendo.net/courses/7A55-0000-0354-526E
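The Tower of Hanoi level above encodes a classic recursive puzzle: moving n discs takes exactly 2^n − 1 moves, so the n=4 level needs 15. A minimal Python sketch of the recursion (peg names are arbitrary):

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the list of moves that solves the n-disc Tower of Hanoi."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)   # park n-1 discs on the spare peg
        moves.append((src, dst))             # move the largest disc to the goal
        hanoi(n - 1, aux, src, dst, moves)   # stack the n-1 discs back on top
    return moves

print(len(hanoi(4)))  # 15, the provable minimum for four discs
```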

--------------------Credits for the Music--------------------------
------HolFix https://www.youtube.com/holfix
HolFix - Beyond the Kingdom https://www.youtube.com/watch?v=2CiGpsBLBX8

------ Mario and Luigi: Superstar Saga OST Teehee Valley

------Kevin MacLeod: “Adventure Meme”, “Amazing Plan”, “The Show Must Be Go”
incompetech.com
Licensed under Creative Commons: By Attribution 3.0 http://creativecommons.org/licenses/by/3.0/

]]>
Sun, 17 Mar 2019 09:00:05 -0700 https://www.youtube.com/watch?v=oQmfbfRWiKU
<![CDATA[Survival of the Richest – Future Human – Medium]]> https://medium.com/s/futurehuman/survival-of-the-richest-9ef6cddd0cc1

Last year, I got invited to a super-deluxe private resort to deliver a keynote speech to what I assumed would be a hundred or so investment bankers. It was by far the largest fee I had ever been offered for a talk — about half my annual professor’s salary — all to deliver some insight on the subject of “the future of technology.”

I’ve never liked talking about the future. The Q&A sessions always end up more like parlor games, where I’m asked to opine on the latest technology buzzwords as if they were ticker symbols for potential investments: blockchain, 3D printing, CRISPR. The audiences are rarely interested in learning about these technologies or their potential impacts beyond the binary choice of whether or not to invest in them. But money talks, so I took the gig.

]]>
Sat, 07 Jul 2018 08:32:30 -0700 https://medium.com/s/futurehuman/survival-of-the-richest-9ef6cddd0cc1
<![CDATA[WHAT IS #ADDITIVISM? Critical Perspectives on 3D Printing]]> https://vimeo.com/142438943

studioforcreativeinquiry.org/events/lecture-workshop-what-is-additivism-critical-perspectives-on-3d-printing-with-morehshin-allahyari-daniel-rourke
twitter.com/morehshin twitter.com/therourke via-2015.com/

The 3D Additivist Manifesto calls creators and thinkers to action around a technology filled with hope and promise: the 3D printer. By considering this technology as a potential force for good, bad, and otherwise, visiting artists Morehshin Allahyari and Daniel Rourke aim to disrupt binary thinking entirely, drawing together makers and thinkers invested in the idea of real, radical change. In March 2015 Allahyari and Rourke invited submissions to an open-source ‘Cookbook’ of radical ideas that cut across the arts, engineering, and sciences. Inspired, in part, by William Powell’s The Anarchist Cookbook (1969), The 3D Additivist Cookbook will contain speculative texts, templates, recipes and (im)practical designs for living in this most contradictory of times. A talk and Q&A session by Morehshin Allahyari and Daniel Rourke about The 3D Additivist Manifesto and The 3D Additivist Cookbook, in addition to a screening of The 3D Additivist Manifesto video. The artists will talk about their own research and practice in relation to Additivism and 3D printing.

Cast: STUDIO for Creative Inquiry

]]>
Tue, 10 Nov 2015 07:20:54 -0800 https://vimeo.com/142438943
<![CDATA[Data as Culture]]> https://furtherfield.org/features/reviews/data-culture#new_tab

For my latest Furtherfield review I wallowed in curator Shiri Shalmy’s ongoing project Data as Culture, examining works by Paolo Cirio and James Bridle that deal explicitly with the concatenation of data. What happens when society is governed by a regime of data about data, increasingly divorced from the symbolic?

In a work commissioned by curator Shiri Shalmy for her ongoing project Data as Culture, artist Paolo Cirio confronts the prerequisites of art in the era of the user. Your Fingerprints on the Artwork are the Artwork Itself [YFOTAATAI] hijacks loopholes, glitches and security flaws in the infrastructure of the world wide web in order to render every passive website user as pure material. In an essay published on a backdrop of recombined RAW tracking data, Cirio states:

Data is the raw material of a new industrial, cultural and artistic revolution. It is a powerful substance, yet when displayed as a raw stream of digital material, represented and organised for computational interpretation only, it is mostly inaccessible and incomprehensible. In fact, there isn’t any meaning or value in data per se. It is human activity that gives sense to it. It can be useful, aesthetic or informative, yet it will always be subject to our perception, interpretation and use. It is the duty of the contemporary artist to explore what it really looks like and how it can be altered beyond the common conception.

Even the nondescript use patterns of the Data as Culture website can be figured as an artwork, Cirio seems to be saying, but the art of the work requires an engagement that contradicts the passivity of a mere ‘user’. YFOTAATAI is a perfect accompaniment to Shiri Shalmy’s curatorial project, generating questions around security, value and production before any link has been clicked or artwork entertained.

Feeling particularly receptive I click on James Bridle’s artwork/website A Quiet Disposition and ponder the first hyperlink that surfaces. The link reads “Keanu Reeves”:

“Keanu Reeves” is the name of a person known to the system. Keanu Reeves has been encountered once by the system and is closely associated with Toronto, Enter The Dragon, The Matrix, Surfer and Spacey Dentist.

In 1999 viewers were offered a visual metaphor of ‘The Matrix’: a stream of flickering green signifiers ebbing, like some half-living fungus of binary digits, beneath our apparently solid, Technicolor world. James Bridle’s expansive work A Quiet Disposition [AQD] could be considered an antidote to this millennial cliché, founded on the principle that we are in fact ruled by a third, much more slippery realm of information superior to both the Technicolor and the digital fungus. Our socio-political, geo-economic, rubber bullet, blood and guts world, as Bridle envisages it, relies on data about data. Read the rest of this review at Furtherfield.org

]]>
Wed, 01 Oct 2014 07:37:48 -0700 https://furtherfield.org/features/reviews/data-culture#new_tab
<![CDATA[Meet the Father of Digital Life]]> http://nautil.us/issue/14/mutation/meet-the-father-of-digital-life

In 1953, at the dawn of modern computing, Nils Aall Barricelli played God. Clutching a deck of playing cards in one hand and a stack of punched cards in the other, Barricelli hovered over one of the world’s earliest and most influential computers, the IAS machine, at the Institute for Advanced Study in Princeton, New Jersey. During the day the computer was used to make weather forecasting calculations; at night it was commandeered by the Los Alamos group to calculate ballistics for nuclear weaponry. Barricelli, a maverick mathematician, part Italian and part Norwegian, had finagled time on the computer to model the origins and evolution of life.

Inside a simple red brick building at the northern corner of the Institute’s wooded wilds, Barricelli ran models of evolution on a digital computer. His artificial universes, which he fed with numbers drawn from shuffled playing cards, teemed with creatures of code—morphing, mutating, melting, maintaining. He created laws that determined, independent of any foreknowledge on his part, which assemblages of binary digits lived, which died, and which adapted. As he put it in a 1961 paper, in which he speculated on the prospects and conditions for life on other planets, “The author has developed numerical organisms, with properties startlingly similar to living organisms, in the memory of a high speed computer.” For these coded critters, Barricelli became a maker of worlds.

Until his death in 1993, Barricelli floated between biological and mathematical sciences, questioning doctrine, not quite fitting in. “He was a brilliant, eccentric genius,” says George Dyson, the historian of technology and author of Darwin Among The Machines and Turing’s Cathedral, which feature Barricelli’s work. “And the thing about geniuses is that they just see things clearly that other people don’t see.”

Barricelli programmed some of the earliest computer algorithms that resemble real-life processes: a subdivision of what we now call “artificial life,” which seeks to simulate living systems—evolution, adaptation, ecology—in computers. Barricelli presented a bold challenge to the standard Darwinian model of evolution by competition by demonstrating that organisms evolved by symbiosis and cooperation.

Pixar cofounder Alvy Ray Smith says Barricelli influenced his earliest thinking about the possibilities for computer animation.

In fact, Barricelli’s projects anticipated many current avenues of research, including cellular automata, computer programs involving grids of numbers paired with local rules that can produce complicated, unpredictable behavior. His models bear striking resemblance to the one-dimensional cellular automata—life-like lattices of numerical patterns—championed by Stephen Wolfram, whose search tool Wolfram Alpha helps power the brain of Siri on the iPhone. Nonconformist biologist Craig Venter, in defending his creation of a cell with a synthetic genome—“the first self-replicating species we’ve had on the planet whose parent is a computer”—echoes Barricelli.
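The one-dimensional cellular automata mentioned here are simple enough to sketch. As an illustration in the Wolfram style (not Barricelli's own scheme), here is an elementary automaton where the rule number's binary digits decide each cell's next state from its three-cell neighbourhood:

```python
def step(row, rule=30):
    """Apply an elementary cellular-automaton rule to one row (dead borders)."""
    out = []
    for i in range(len(row)):
        left  = row[i - 1] if i > 0 else 0
        here  = row[i]
        right = row[i + 1] if i < len(row) - 1 else 0
        # The neighbourhood (left, here, right) selects one bit of the rule number.
        out.append((rule >> (left * 4 + here * 2 + right)) & 1)
    return out

row = [0] * 7 + [1] + [0] * 7          # start from a single live cell
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Even Rule 30's first few rows show the mix of order and chaos that made these lattices so striking.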

Barricelli’s experiments had an aesthetic side, too. Uncommonly for the time, he converted the digital 1s and 0s of the computer’s stored memory into pictorial images. Those images, and the ideas behind them, would influence computer animators in generations to come. Pixar cofounder Alvy Ray Smith, for instance, says Barricelli stirred his earliest thinking about the possibilities for computer animation, and beyond that, his philosophical muse. “What we’re really talking about here is the notion that living things are computations,” he says. “Look at how the planet works and it sure does look like a computation.”

Despite Barricelli’s pioneering experiments, barely anyone remembers him. “I have not heard of him to tell you the truth,” says Mark Bedau, professor of humanities and philosophy at Reed College and editor of the journal Artificial Life. “I probably know more about the history than most in the field and I’m not aware of him.”

Barricelli was an anomaly, a mutation in the intellectual zeitgeist, an unsung hero who has mostly languished in obscurity for the past half century. “People weren’t ready for him,” Dyson says. That a progenitor has not received much acknowledgment is a failing not unique to science. Visionaries often arrive before their time. Barricelli charted a course for the digital revolution, and history has been catching up ever since.

EVOLUTION BY THE NUMBERS: Barricelli converted his computer tallies of 1s and 0s into images. In this 1953 Barricelli print, explains NYU associate professor Alexander Galloway, the chaotic center represents mutation and disorganization. The more symmetrical fields toward the margins depict Barricelli’s evolved numerical organisms. From the Shelby White and Leon Levy Archives Center, Institute for Advanced Study, Princeton.

Barricelli was born in Rome on Jan. 24, 1912. According to Richard Goodman, a retired microbiologist who met and befriended the mathematician in the 1960s, Barricelli claimed to have invented calculus before his tenth birthday. When the young boy showed the math to his father, he learned that Newton and Leibniz had preempted him by centuries. While a student at the University of Rome, Barricelli studied mathematics and physics under Enrico Fermi, a pioneer of quantum theory and nuclear physics. A couple of years after graduating in 1936, he immigrated to Norway with his recently divorced mother and younger sister.

As World War II raged, Barricelli studied. An uncompromising oddball who teetered between madcap and mastermind, Barricelli had a habit of exclaiming “Absolut!” when he agreed with someone, or “Scandaloos!” when he found something disagreeable. His accent was infused with Scandinavian and Romantic pronunciations, making it occasionally challenging for colleagues to understand him. Goodman recalls one of his colleagues at the University of California, Los Angeles who just happened to be reading Barricelli’s papers “when the mathematician himself barged in and, without ceremony, began rattling off a stream of technical information about his work on phage genetics,” a science that studies gene mutation, replication, and expression through model viruses. Goodman’s colleague understood only fragments of the speech, but realized it pertained to what he had been reading.

“Are you familiar with the work of Nils Barricelli?” he asked.

“Barricelli! That’s me!” the mathematician cried.

Notwithstanding having submitted a 500-page dissertation on the statistical analysis of climate variation in 1946, Barricelli never completed his Ph.D. Recalling the scene in the movie Amadeus in which the Emperor of Austria commends Mozart’s performance, save for there being “too many notes,” Barricelli’s thesis committee directed him to slash the paper to a tenth of the size, or else it would not accept the work. Rather than capitulate, Barricelli forfeited the degree.

Barricelli began modeling biological phenomena on paper, but his calculations were slow and limited. He applied to study in the United States as a Fulbright fellow, where he could work with the IAS machine. As he wrote on his original travel grant submission in 1951, he sought “to perform numerical experiments by means of great calculating machines,” in order to clarify, through mathematics, “the first stages of evolution of a species.” He also wished to mingle with great minds—“to communicate with American statisticians and evolution-theorists.” By then he had published papers on statistics and genetics, and had taught Einstein’s theory of relativity. In his application photo, he sports a pyramidal moustache, hair brushed to the back of his elliptic head, and hooded, downturned eyes. At the time of his application, he was a 39-year-old assistant professor at the University of Oslo.

Although the program initially rejected him due to a visa issue, in early 1953 Barricelli arrived at the Institute for Advanced Study as a visiting member. “I hope that you will be finding Mr. Baricelli [sic] an interesting person to talk with,” wrote Ragnar Frisch, a colleague of Barricelli’s who would later win the first Nobel Prize in Economics, in a letter to John von Neumann, a mathematician at IAS, who helped devise the institute’s groundbreaking computer. “He is not very systematic always in his exposition,” Frisch continued, “but he does have interesting ideas.”

PSYCHEDELIC BARRICELLI: In this recreation of a Barricelli experiment, NYU associate professor Alexander Galloway has added color to show the gene groups more clearly. Each swatch of color signals a different organism. Borders between the color fields represent turbulence as genes bounce off and meld with others, symbolizing Barricelli’s symbiogenesis. Courtesy Alexander Galloway.

Centered above Barricelli’s first computer logbook entry at the Institute for Advanced Study, in handwritten pencil script dated March 3, 1953, is the title “Symbiogenesis problem.” This was his theory of proto-genes, virus-like organisms that teamed up to become complex organisms: first chromosomes, then cellular organs, onward to cellular organisms and, ultimately, other species. Like parasites seeking a host, these proto-genes joined together, according to Barricelli, and through their mutual aid and dependency, originated life as we know it.

Standard neo-Darwinian doctrine maintained that natural selection was the main means by which species formed. Slight variations and mutations in genes combined with competition led to gradual evolutionary change. But Barricelli disagreed. He pictured nimbler genes acting as a collective, cooperative society working together toward becoming species. Darwin’s theory, he concluded, was inadequate. “This theory does not answer our question,” he wrote in 1954, “it does not say why living organisms exist.”

Barricelli coded his numerical organisms on the IAS machine in order to prove his case. “It is very easy to fabricate or simply define entities with the ability to reproduce themselves, e.g., within the realm of arithmetic,” he wrote.

The early computer looked sort of like a mix between a loom and an internal combustion engine. Lining the middle region were 40 Williams cathode ray tubes, which served as the machine’s memory. Within each tube, a beam of electrons (the cathode ray) bombarded one end, creating a 32-by-32 grid of points, each consisting of a slight variation in electrical charge. There were five kilobytes of memory total stored in the machine. Not much by today’s standards, but back then it was an arsenal.
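The quoted capacity checks out: 40 tubes, each holding a 32-by-32 grid of one-bit charge spots, comes to exactly five kilobytes.

```python
tubes = 40
bits_per_tube = 32 * 32            # one bit per charge spot on the tube face
total_bits = tubes * bits_per_tube
print(total_bits // 8)             # 5120 bytes, i.e. 5 kilobytes
```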

Barricelli saw his computer organisms as a blueprint of life—on this planet and any others.

Inside the device, Barricelli programmed steadily mutable worlds each with rows of 512 “genes,” represented by integers ranging from negative to positive 18. As the computer cycled through hundreds and thousands of generations, persistent groupings of genes would emerge, which Barricelli deemed organisms. The trick was to tweak his manmade laws of nature—“norms,” as he called them—which governed the universe and its entities just so. He had to maintain these ecosystems on the brink of pandemonium and stasis. Too much chaos and his beasts would unravel into a disorganized shamble; too little and they would homogenize. The sweet spot in the middle, however, sustained life-like processes.
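Barricelli's norms varied from experiment to experiment, and his exact rules are not given here, so the following Python sketch is only a loose illustration of the scheme described above, not a reconstruction of his code. As an assumed toy norm, let each gene g copy itself g cells along a wrapping 512-cell universe, with unequal genes annihilating when they collide:

```python
import random

SIZE = 512        # cells per universe row, as in Barricelli's experiments
GENE_RANGE = 18   # genes are integers from negative to positive 18

def generation(world):
    """One toy 'norm' (an assumption, not Barricelli's actual rule): each gene g
    at cell i shifts a copy to cell (i + g) % SIZE; unequal genes that land on
    the same cell annihilate each other."""
    nxt = [0] * SIZE
    for i, g in enumerate(world):
        if g == 0:
            continue
        j = (i + g) % SIZE
        if nxt[j] in (0, g):
            nxt[j] = g       # empty cell, or a like gene reinforcing itself
        else:
            nxt[j] = 0       # collision of unequal genes kills both
    return nxt

random.seed(1953)  # Barricelli seeded his universes with shuffled playing cards
world = [random.choice([0, 0, random.randint(-GENE_RANGE, GENE_RANGE)])
         for _ in range(SIZE)]
for _ in range(100):
    world = generation(world)
print(sum(1 for g in world if g))  # surviving genes after 100 generations
```

Tuning such rules toward the edge between stasis and chaos is exactly the balancing act the next paragraphs describe.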

Barricelli’s balancing act was not always easygoing. His first trials were riddled with pests: primitive, often single numeric genes invaded the space and gobbled their neighbors. Typically, he was only able to witness a couple of hereditary changes, or a handful at best, before the world unwound. To create lasting evolutionary processes, he needed to handicap these pests’ ability to rapidly reproduce. By the time he returned to the Institute in 1954 to begin a second round of experiments, Barricelli made some critical changes. First, he capped the proliferation of the pests to once per generation. That constraint allowed his numerical organisms enough leeway to outpace the pests. Second, he began employing different norms to different sections of his universes. That forced his numerical organisms always to adapt.

Even in the earlier universes, Barricelli realized that mutation and natural selection alone were insufficient to account for the genesis of species. In fact, most single mutations were harmful. “The majority of the new varieties which have shown the ability to expand are a result of crossing-phenomena and not of mutations, although mutations (especially injurious mutations) have been much more frequent than hereditary changes by crossing in the experiments performed,” he wrote.

When an organism became maximally fit for an environment, the slightest variation would only weaken it. In such cases, it took at least two modifications, effected by a cross-fertilization, to give the numerical organism any chance of improvement. This indicated to Barricelli that symbioses, gene crossing, and “a primitive form of sexual reproduction,” were essential to the emergence of life.

“Barricelli immediately figured out that random mutation wasn’t the important thing; in his first experiment he figured out that the important thing was recombination and sex,” Dyson says. “He figured out right away what took other people much longer to figure out.” Indeed, Barricelli’s theory of symbiogenesis can be seen as anticipating the work of independent-thinking biologist Lynn Margulis, who in the 1960s showed that it was not necessarily genetic mutations over generations, but symbiosis, notably of bacteria, that produced new cell lineages.

Barricelli saw his computer organisms as a blueprint of life—on this planet and any others. “The question whether one type of symbio-organism is developed in the memory of a digital computer while another type is developed in a chemical laboratory or by a natural process on some planet or satellite does not add anything fundamental to this difference,” he wrote. A month after Barricelli began his experiments on the IAS machine, Crick and Watson announced the shape of DNA as a double helix. But learning about the shape of biological life didn’t put a dent in Barricelli’s conviction that he had captured the mechanics of life on a computer. Let Watson and Crick call DNA a double helix. Barricelli called it “molecule-shaped numbers.”


What buried Barricelli in obscurity is something of a mystery. “Being uncompromising in his opinions and not a team player,” says Dyson, no doubt led to Barricelli’s “isolation from the academic mainstream.” Dyson also suspects Barricelli and the indomitable Hungarian mathematician von Neumann, an influential leader at the Institute of Advanced Study, didn’t hit it off. Von Neumann appears to have ignored Barricelli. “That was sort of fatal because everybody looked to von Neumann as the grandfather of self-replicating machines.”

Ever so slowly, though, Barricelli is gaining recognition. That stems in part from another of Barricelli’s remarkable developments, certainly one of his most beautiful. He didn’t rest with creating a universe of numerical organisms; he converted his organisms into images. His computer tallies of 1s and 0s would then self-organize into visual grids of exquisite variety and texture. According to Alexander Galloway, associate professor in the department of media, culture, and communication at New York University, a finished Barricelli “image yielded a snapshot of evolutionary time.”

When Barricelli printed sections of his digitized universes, they were dazzling. To modern eyes they might look like satellite imagery of an alien geography: chaotic oceans, stratigraphic outcrops, and the contours of a single stream running down the center fold, fanning into a delta at the patchwork’s bottom. “Somebody needs to do a museum show and show this stuff because they’re outrageous,” Galloway says.

Barricelli was an uncompromising oddball who teetered between madcap and mastermind.

Today, Galloway, a member of Barricelli’s small but growing cadre of boosters, has recreated the images. Following methods described by Barricelli in one of his papers, Galloway has coded an applet using the computer language Processing to revive Barricelli’s numerical organisms—with slight variation. While Barricelli encoded his numbers as eight-unit-long proto-pixels, Galloway condensed each to a single color-coded cell. By collapsing each number into a single pixel, Galloway has been able to fit eight times as many generations in the frame. These revitalized mosaics look like psychedelic cross-sections of the fossil record. Each swatch of color represents an organism, and when one color field bumps up against another one, that’s where cross-fertilization takes place.

“You can see these kinds of points of turbulence where the one color meets another color,” Galloway says, showing off the images on a computer in his office. “That’s a point where a number would be—or a gene would be—sort of jumping from one organism to another.” Here, in other words, is artificial life—Barricelli’s symbiogenesis—frozen in amber. And cyan and lavender and teal and lime and fuchsia.

Galloway is not the only one to be struck by the beauty of Barricelli’s computer-generated digital images. As a doctoral student, Pixar cofounder Alvy Ray Smith became familiar with Barricelli’s work while researching the history of cellular automata for his dissertation. When he came across Barricelli’s prints he was astonished. “It was remarkable to me that with such crude computing facilities in the early 50s, he was able to be making pictures,” Smith says. “I guess in a sense you can say that Barricelli got me thinking about computer animation before I thought about computer animation. I never thought about it that way, but that’s essentially what it was.”

Cyberspace now swells with Barricelli’s progeny. Self-replicating strings of arithmetic live out their days in the digital wilds, increasingly independent of our tampering. The fittest bits survive and propagate. Researchers continue to model reduced, pared-down versions of life artificially, while the real world bursts with Boolean beings. Scientists like Venter conjure synthetic organisms, assisted by computer design. Swarms of autonomous codes thrive, expire, evolve, and mutate underneath our fingertips daily. “All kinds of self-reproducing codes are out there doing things,” Dyson says. In our digital lives, we are immersed in Barricelli’s world.

]]>
Fri, 20 Jun 2014 06:08:03 -0700 http://nautil.us/issue/14/mutation/meet-the-father-of-digital-life
<![CDATA[Four Notes Towards Post-Digital Propaganda | post-digital-research]]> http://post-digital.projects.cavi.dk/?p=475

“Propaganda is called upon to solve problems created by technology, to play on maladjustments and to integrate the individual into a technological world” (Ellul xvii).

How might future research into digital culture approach a purported “post-digital” age? How might this be understood?

1.

A problem comes from the discourse of ‘the digital’ itself: a moniker which points towards units of Base-2 arbitrary configuration, impersonal architectures of code, massive extensions of modern communication and ruptures in post-modern identity. Terms are messy, and it has never been easy to establish a ‘post’ from something, when pre-discourse definitions continue to hang in the air. As Florian Cramer has articulated so well, ‘post-digital’ is something of a loose, ‘hedge your bets’ term, denoting a general tendency to criticise the digital revolution as a modern innovation (Cramer).

Perhaps it might be aligned with what some have dubbed “solutionism” (Morozov) or “computationalism” (Berry 129; Golumbia 8): the former critiques a Silicon Valley-led ideology oriented towards solving liberalised problems through efficient computerised means; the latter establishes the notion (and critique thereof) that the mind, and everything associated with it, is inherently computable. In both cases, digital technology is no longer just a business that privatises information, but the business of extending efficient, innovative logic to all corners of society and human knowledge, condemning everything else through a cultural logic of efficiency.

In fact, there is a good reason why ‘digital’ might as well be a synonym for ‘efficiency’. Before any consideration is assigned to digital media objects (i.e. platforms, operating systems, networks), consider the inception of ‘the digital’ as such: that is, information theory. If information was once a loose, shabby, inefficient method of vagueness specific to various mediums of communication, Claude Shannon compressed all forms of communication into a universal system with absolute mathematical precision (Shannon). Once information became digital, the conceptual leap of determined symbolic logic was set into motion, and with it, the ‘digital’ became synonymous with an ideology of effectivity. No longer would communication be subject to human finitude, nor to matters of distance and time, but only to the limits of entropy and the matter of automating messages through the support of alternating ‘true’ or ‘false’ relay systems.
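Shannon's mathematical precision can be made concrete in a few lines: the entropy of a message, in bits per symbol, is the hard lower bound on any lossless encoding of it into 'true'/'false' relay states. A minimal sketch of the standard formula, not drawn from Shannon's paper directly:

```python
import math
from collections import Counter

def shannon_entropy(message):
    """H = -sum(p * log2(p)): the minimum number of binary digits
    per symbol needed to transmit the message without loss."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h_flat = shannon_entropy("aaaa")  # one symbol only: nothing to communicate
h_coin = shannon_entropy("abab")  # two equiprobable symbols: 1 bit each
```

The limit is entirely statistical: no cleverness of medium or messenger can beat it, which is precisely the sense in which 'digital' came to mean 'efficient'.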

However, it would be quite difficult to envisage any ‘post-computational’ break from such discourses – and with good reason: Shannon’s breakthrough was only systematically effective through the logic of computation. So the old missed encounter goes: Shannon presupposed Alan Turing’s mathematical idea of computation to transmit digital information, and Turing presupposed Shannon’s information theory to understand what his Universal Turing Machines were actually transmitting. The basic theories of both have not changed, but the materials affording greater processing power, extensive server infrastructure and larger storage space have simply increased the means for these ideas to proliferate, irrespective of what Turing and Shannon actually thought of them (some historians even speculate that Turing may have made the link between information and entropy two years before Bell Labs did) (Good).

Thus a ‘post-digital’ reference point might encompass the historical acknowledgment of Shannon’s digital efficiency and Turing’s logic but, by the same measure, open up a space for critical reflection on how such efficiencies have transformed not only work, life and culture but also artistic praxis and aesthetics. This is not to say that digital culture is reducibly predicated on efforts made in computer science, but instead fully acknowledges these structures and accounts for how ideologies propagate reactionary attitudes and beliefs within them, whilst restricting other alternatives which do not fit their ‘vision’. Hence, the post-digital ‘task’ set for us nowadays might consist in critiquing digital efficiency and how it has come to work against commonality, despite transforming the majority of Western infrastructure in its wake.

The purpose of these notes is to outline how computation has imparted an unwarranted effect of totalised efficiency, and to label this effect with the type of description it deserves: propaganda. The fact that Shannon and Turing had multiple lunches together at Bell Labs in 1943, held conversations and exchanged ideas, but did not share detailed methods of cryptanalysis (Price & Shannon), provides a nice contextual allegory for how digital informatics strategies fail to be transparent.

But in saying this, I do not mean that companies only use digital networks for propagative means (although that happens), but that the very means of computing a real concrete function is constitutively propagative. In this sense, propaganda resembles a post-digital understanding of what it means to be integrated into an ecology of efficiency, and of how technical artefacts are literally enacted as propagative decisions. Digital information often deceives us into accepting its transparency, and into holding it to that account: yet in reality it does the complete opposite, with no given range of judgements available to distinguish manipulation from education, or persuasion from smear. It is the procedural act of interacting with someone else’s automated conceptual principles, embedding pre-determined decisions which not only generate but pre-determine one’s ability to make choices about such decisions, like propaganda.

This might consist in shifting from ideological definitions of false consciousness as an epistemological limit to knowing alternatives within thought, to engaging with real programmable systems which embed such limits concretely, withholding the means to transform them. In other words, propaganda incorporates how ‘decisional structures’ structure other decisions, either conceptually or systematically.

2.

Two years before Shannon’s famous Masters thesis, Turing published what would be a theoretical basis for computation in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” The focus of the paper was to establish the idea of computation within a formal system of logic, which when automated would solve particular mathematical problems put into function (Turing, An Application). What is not necessarily taken into account is the mathematical context to that idea: for the foundations of mathematics were already precarious, way before Turing outlined anything in 1936. Contra the efficiency of the digital, this is a precariousness built-in to computation from its very inception: the precariousness of solving all problems in mathematics.

The key word of that paper, its key focus, was on the Entscheidungsproblem, or decision problem. Originating from David Hilbert’s mathematical school of formalism, ‘decision’ means something more rigorous than the sorts of decisions in daily life. It really means a ‘proof theory’, or how analytic problems in number theory and geometry could be formalised, and thus efficiently solved (Hilbert 3). Solving a theorem is simply finding a provable ‘winning position’ in a game. Similar to Shannon, ‘decision’ is what happens when an automated system of function is constructed in such a sufficiently complex way, that an algorithm can always ‘decide’ a binary, yes or no answer to a mathematical problem, when given an arbitrary input, in a sufficient amount of time. It does not require ingenuity, intuition or heuristic gambles, just a combination of simple consistent formal rules and a careful avoidance of contradiction.

The two key words there are ‘always’ and ‘decide’. This was the progressive end-game of twentieth-century mathematicians who, like Hilbert, sought a simple totalising conceptual system to decide every mathematical problem and work towards absolute knowledge. All Turing had to do was make explicit Hilbert’s implicit computational treatment of formal rules, manipulate symbol strings and automate them using an ‘effective’ or “systematic method” (Turing, Solvable and Unsolvable Problems 584) encoded into a machine. This is what Turing’s thesis meant (discovered independently of Alonzo Church’s equivalent thesis (Church)): any systematic algorithm solved by a mathematical theorem can be computed by a Turing machine (Turing, An Application), or in Robin Gandy’s words, “[e]very effectively calculable function is a computable function” (Gandy).

Thus effective procedures decide problems, and they resolve puzzles by providing winning positions (like theorems) in the game of functional rules and formal symbols. In Turing’s words, “a systematic procedure is just a puzzle in which there is never more than one possible move in any of the positions which arise and in which some significance is attached to the final result” (Turing, Solvable and Unsolvable Problems 590). The significance, or the winning position, becomes the crux of the matter for the decision: what puzzles or problems are to be decided? This is what formalism attempted to do: encode everything through the regime of formalised efficiency, so that all mathematically inefficient problems are, in principle, ready to be solved. Programs are simply proofs: if it could be demonstrated mathematically, it could be automated.

In 1936, Turing had shown that some complex mathematical concepts of effective procedures could simulate the functional decisions of all the other effective procedures (such as the Universal Turing Machine). Ten years later, Turing and John von Neumann would independently show how physical general purpose computers offered the same thing, and from that moment on, efficient digital decisions manifested themselves in the cultural application of physical materials. Before Shannon’s information theory offered the precision of transmitting information, Hilbert and Turing developed the structure of its transmission in the underlying regime of formal decision.

Yet there was also a non-computational importance here, for Turing was also fascinated by what decisions couldn’t compute. His thesis was quite precise, so as to elucidate that if no mathematical problem could be proved, a computer was not of any use. In fact, the entire focus of his 1936 paper, often neglected by Silicon Valley cohorts, was to show that Hilbert’s particular decision problem could not be solved. Unlike Hilbert, Turing was not interested in using computation to solve every problem, but in it as a curious endeavour for surprising intuitive behaviour. Most important of all, Turing’s halting, or printing, problem was influential precisely because it was undecidable: a decision problem which couldn’t be decided.

We can all picture the halting problem, even obliquely. Picture the frustrated programmer or mathematician staring at their screen, waiting to know when an algorithm will either halt and spit out a result, or provide no answer. The computer itself has already determined the answer for us; the programmer just has to know when to give up. But this is a myth, inherited with a bias towards human knowledge, and a demented understanding of machines as infinite calculating engines, rather than concrete entities of decision. For reasons that escape word space, Turing didn’t understand the halting problem in this way: instead he understood it as a contradictory example of computational decisions failing to decide on each other, on the account that there could never be one totalising decision or effective procedure. There is no guaranteed effective procedure to decide on all the others, and any attempt to build one (or invest in a view which might help build one) either has too much investment in absolute formal reason, or ends up with ineffective procedures.
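Turing's contradiction can be sketched without any appeal to infinite calculating engines: assume a total decider exists, and construct the one program it must be wrong about. This is a schematic illustration, not Turing's own construction (he worked with printing machines, not modern functions), and the names `halts` and `make_contrary` are invented for the sketch:

```python
def make_contrary(halts):
    """Given a purported halting decider, build its refutation.

    `halts(f)` is supposed to return True iff calling f() terminates.
    """
    def contrary():
        if halts(contrary):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops", so halt immediately
    return contrary

# Any candidate decider is refuted by its own contrary program.
def naive_decider(f):
    return True              # claims every program halts

c = make_contrary(naive_decider)
verdict = naive_decider(c)   # the decider says c halts, yet by
                             # construction c would then loop forever
```

Whatever a candidate `halts` answers about its `contrary`, the program does the opposite; no single effective procedure can decide all the others, which is exactly the failure of 'one totalising decision' described above.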

Undecidable computation might be looked at as a dystopian counterpart to the efficiency of Shannon’s ‘digital information’ theory. A base-2 binary system of information resembles one of two possible states, whereby a system can communicate with one digit only in virtue of the fact that there is one other digit alternative to it. Yet the perfect transmission of that information is only subject to a system which can ‘decide’ on the digits in question, and establish a proof to calculate a success rate. If there is no mathematical proof to decide a problem, then transmitting information becomes problematic for establishing a solution.

3.

What has become clear is that our world is no longer simply accountable to human decision alone. Decisions are no longer limited to the borders of human decisions and ‘culture’ is no longer simply guided by a collective whole of social human decisions. Nor is it reducible to one harmonious ‘natural’ collective decision which prompts and pre-empts everything else. Instead we seem to exist in an ecology of decisions: or better yet decisional ecologies. Before there was ever the networked protocol (Galloway), there was the computational decision. Decision ecologies are already set up before we enter the world, implicitly coterminous with our lives: explicitly determining a quantified or bureaucratic landscape upon which an individual has limited manoeuvrability.

Decisions are not just digital, they are as continuous as computers can be: yet decisions are at their most efficient when digitally transferred. Decisions are everywhere and in everything. Look around. We are constantly told by governments and states that they are making tough decisions in the face of austerity. CEOs and Directors make tough decisions for the future of their companies, and ‘great’ leaders are revered for being ‘great decisive leaders’: not just making decisions quickly and effectively, but also settling issues and producing definite results.

Even the word ‘decide’ comes from the Latin ‘decidere’, which means to determine something and ‘to cut off.’ Algorithms in financial trading know not of value, but of decision: whether something is marked by profit or loss. Drones know not of human ambiguity, but can only decide between kill and ignore, cutting off anything in-between. Constructing a system which decides between one of two digital values, even repeatedly, means cutting off and excluding all other possible variables, leaving a final result at the end of the encoded message. Making a decision, or building a system to decide a particular ideal or judgement, must force other alternatives outside of it. Decisions are always-already embedded into the framework of digital action, always already deciding what is to be done, how it can be done or what is threatening to be done. It would make little sense to suggest that these entities ‘make decisions’ or ‘have decisions’; it would be better to say that they are decisions, and that ecologies are constitutively constructed by them.

The importance of neo-liberal digital transmissions is not that they are innovative, or worthy of a zeitgeist break, but that they demonstrably decide problems whose predominant significance is beneficial for self-individual efficiency and the accumulation of capital. Digital efficiency is simply about the expansion of automating decisions and what sort of formalised significances must be propagated to solve social and economic problems, which creates new problems in a vicious circle.

The question can no longer simply be ‘who decides’, but now, ‘what decides?’ Is it the cafe menu board, the dinner party etiquette, the NASDAQ share price, Google PageRank, railway network delays, unmanned combat drones, the newspaper crossword, the JavaScript regular expression or the differential calculus? It’s not quite right to say that algorithms rule the world, whether in algo-trading or in data capture; rather, the uncomfortable realisation is that real entities are built to determine provable outcomes time and time again: most notably ones for accumulating profit and extracting revenue from multiple resources.

One pertinent example: consider George Dantzig’s simplex algorithm. This effective procedure (whose origins lie in multidimensional geometry) can always decide solutions for large-scale optimisation problems which continually affect multi-national corporations. The simplex algorithm’s proliferation and effectiveness has been critical since its first commercial application in 1952, when Abraham Charnes and William Cooper used it to decide how best to optimally blend four different petroleum products at the Gulf Oil Company (Elwes 35; Gass & Assad 79). Since then the simplex algorithm has had years of successful commercial use, deciding almost everything from bus timetables and work shift patterns to trade shares and Amazon warehouse configurations. According to the optimisation specialist Jacek Gondzio, the simplex algorithm runs at “tens, probably hundreds of thousands of calls every minute” (35), always deciding the most efficient method of extracting optimisation.
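The blending story can be restaged as a toy linear programme. The figures below are hypothetical stand-ins (the original Gulf Oil data is not reproduced here), and SciPy's `linprog` with its HiGHS backend stands in for Dantzig's original tableau arithmetic:

```python
from scipy.optimize import linprog

# Hypothetical blending problem: two components x, y with profits of
# 3 and 2 per barrel, a shared capacity of 10 barrels, and supply
# limits of 6 and 8 barrels. linprog minimises, so profits are negated.
res = linprog(
    c=[-3, -2],                   # maximise 3x + 2y
    A_ub=[[1, 1]], b_ub=[10],     # x + y <= 10  (blending capacity)
    bounds=[(0, 6), (0, 8)],      # per-component supply limits
    method="highs",               # HiGHS, a simplex-family solver
)
x, y = res.x
profit = -res.fun
```

The solver 'decides' the optimum at a vertex of the feasible region, committing fully to the more profitable component before filling the remaining capacity with the other; the same mechanics, scaled up, is what decides the timetables and shift patterns mentioned above.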

In contemporary times, nearly all decision ecologies work in this way, accompanying and facilitating neo-liberal methods of self-regulation and processing all resources through a standardised efficiency: from bureaucratic methods of formal standardisation, banal forms ready to be analysed by one central system, to big-data initiatives and simple procedural methods of measurement and calculation. The technique of decision is a propagative method of embedding knowledge, optimisation and standardisation techniques in order to solve problems, together with an urge to solve the most unsolvable ones, including us.

Google do not build into their services an option to pay for the privilege of protecting privacy: the entire point of providing a free service which purports to improve daily life is that it primarily benefits the interests of shareholders and extends commercial agendas. James Grimmelmann gave a heavily detailed exposition of Google’s own ‘net neutrality’ algorithms and how biased they happen to be. In short, PageRank does not simply decide relevant results, it decides visitor numbers, and he concluded on this note:

With disturbing frequency, though, websites are not users’ friends. Sometimes they are, but often, the websites want visitors, and will be willing to do what it takes to grab them (Grimmelmann 458).

If the post-digital stands for the self-criticality of digitalisation already underpinning contemporary regimes of digital consumption and production, then its saliency lies in understanding the logic of decision inherent to such regimes. The reality of the post-digital shows that machines remain curiously efficient whether we relish in cynicism or not. Such regimes of standardisation and determined results were already ‘mistakenly built in’ to the theories which developed digital methods and means, irrespective of what computers can or cannot compute.

4.

Why then should such post-digital actors be understood as instantiations of propaganda? The familiarity of propaganda is manifestly evident in religious and political acts of ideological persuasion: brainwashing, war activity, political spin, mind control techniques, subliminal messages, political campaigns, cartoons, belief indoctrination, media bias, advertising or news reports. A definition of propaganda might follow from all of these examples: namely, the systematic social indoctrination of biased information that persuades the masses to take action on something which is neither beneficial to them, nor in their best interests. As Peter Kenez writes, propaganda is “the attempt to transmit social and political values in the hope of affecting people’s thinking, emotions, and thereby behaviour” (Kenez 4). Following Stanley B. Cunningham’s watered-down definition, propaganda might also denote a helpful and pragmatic “shorthand statement about the quality of information transmitted and received in the twentieth century” (Cunningham 3).

But propaganda isn’t as clear as this general definition makes out: in fact, what makes propaganda studies such a provoking topic is that nearly every scholar agrees that no stable definition exists. Propaganda moves beyond simple ‘manipulation’ and ‘lies’, or the derogatory, jingoistic representation of an unsubtle mood – propaganda is as much about the paradox of constructing truth, and the irrational spread of emotional pleas, as it is about endorsing rational reason. As the master propagandist William J. Daugherty wrote:

It is a complete delusion to think of the brilliant propagandist as being a professional liar. The brilliant propagandist […] tells the truth, or that selection of the truth which is requisite for his purpose, and tells it in such a way that the recipient does not think that he is receiving any propaganda…. (Daugherty 39).

Propaganda, like ideology, works by being inherently implicit and social. In the same way that post-ideology apologists ignore their symptom, propaganda is also ignored. It isn’t to be taken as a shadowy fringe activity, blown apart by the democratising fairy-dust of ‘the Internet’. As many others have noted, the purported ‘decentralising’ power of online networks offers new methods for propagative techniques, or ‘spinternet’ strategies, evident in China (Brady). Iran’s recent investment in video game technology makes sense only when you discover that 70% of Iran’s population are under 30 years of age, underscoring a suitable contemporary method of dissemination. Similarly in 2011, the New York City video game developer Kuma Games was mired in controversy when it was discovered that an alleged CIA agent, Amir Mirza Hekmati, had been recruited to make an episodic video game series intending to “change the public opinion’s mindset in the Middle East” (Tehran Times). The game in question, Kuma\War (2006 – 2011), was a free-to-play first-person shooter series, delivered in episodic chunks, the format of which attempted to simulate biased re-enactments of real-life conflicts shortly after they reached public consciousness.

Despite his unremarkable leanings towards Christian realism, Jacques Ellul famously updated propaganda’s definition as the end product of what he previously lamented as ‘technique’. Instead of viewing propaganda as a highly organised systematic strategy for extending the ideologies of peaceful warfare, he understood it as a general social phenomenon in contemporary society.

Ellul outlined two types, political and sociological propaganda. Political propaganda involves governmental and administrative techniques which intend to directly change the political beliefs of an intended audience. By contrast, sociological propaganda is the implicit unification of involuntary public behaviour which creates images, aesthetics, problems and stereotypes, the purposes of which aren’t explicitly direct, nor overtly militaristic. Ellul argues that sociological propaganda exists “in advertising, in the movies (commercial and non-political films), in technology in general, in education, in the Reader’s Digest; and in social service, case work, and settlement houses” (Ellul 64). It is linked to what Ellul called “pre” or “sub-propaganda”: that is, an imperceptible persuasion, silently operating within one’s “style of life” or permissible attitude (63). Faintly echoing Louis Althusser’s Ideological State Apparatuses (Althusser 182) nearly ten years prior, Ellul defines it as “the penetration of an ideology by means of its sociological context” (63). Sociological propaganda is inadequate for decisive action, paving the way for political propaganda – its strengthened explicit cousin – once the former’s implicitness needs to be transformed into the latter’s explicitness.

In a post-digital world, such implicitness no longer gathers wartime spirits, but instead propagates a neo-liberal way of life that is individualistic, wealth driven and opinionated. Ellul’s most powerful assertion is that ‘facts’ and ‘education’ are part and parcel of the sociological propagative effect: nearly everyone faces a compelling need to be opinionated and we are all capable of judging for ourselves what decisions should be made, without at first considering the implicit landscape from which these judgements take place. One can only think of the implicit digital landscape of Twitter: the archetype for self-promotion and snippets of opinions and arguments – all taking place within Ellul’s sub-propaganda of data collection and concealment. Such methods, he warns, will have “solved the problem of man” (xviii).

But information is of relevance here, and propaganda is only effective within a social community when it offers the means to solve problems using the communicative purview of information:

Thus, information not only provides the basis for propaganda but gives propaganda the means to operate; for information actually generates the problems that propaganda exploits and for which it pretends to offer solutions. In fact, no propaganda can work until the moment when a set of facts has become a problem in the eyes of those who constitute public opinion (114).

]]>
Wed, 11 Dec 2013 15:42:45 -0800 http://post-digital.projects.cavi.dk/?p=475
<![CDATA[Digital Metaphors: Editor’s Introduction | Alluvium]]> http://www.alluvium-journal.org/2013/12/04/digital-metaphors-editors-introduction/

Metaphor wants to be…

‘[...] metaphors work to change people’s minds. Orators have known this since Demosthenes. [...] But there’s precious little evidence that they tell you what people think. [...] And in any case, words aren’t meanings. As any really good spy knows, a word is a code that stands for something else. If you take the code at face value then you’ve fallen for the trick.’ (Daniel Soar, “The Bourne Analogy”).

Tao Lin’s recent novel Taipei (2013) is a fictional document of life in our current digital culture. The protagonist, Paul — who is loosely based on the author — is numb from his always-on, digitally mediated life, and throughout the novel increases his recreational drug taking as a kind of compensation: the chemical highs and trips are the experiential counterpoint to the mundanity of what once seemed otherworldly — his online encounters. In the novel online interactions are not distinguished from real-life ones, they are all real, and so Paul’s digital malaise is also his embodied depressive mindset. The apotheosis of both these highs and lows is experienced by Paul, and his then girlfriend Erin, on a trip to visit Paul’s parents in Taipei. There the hyper-digital displays of the city — ‘lighted signs [...] animated and repeating like GIF files, attached to every building’ (166) — launch some of the more explicit meditations on digital culture in the novel:

Paul asked [Erin] if she could think of a newer word for “computer” than “computer,” which seemed outdated and, in still being used, suspicious in some way, like maybe the word itself was intelligent and had manipulated culture in its favor, perpetuating its usage (167).

Here Paul intimates a sense that language is elusive, that it is sentient, and that, in the words of Daniel Soar quoted above as an epigraph, it tricks us. It seems to matter that in this extract from Taipei the word ‘computer’ is conflated with a sense of the object ‘computer’. The word, in being ‘intelligent’, has somehow taken on the quality of the thing it denotes — a potentially malevolent agency. The history of computing is one of people and things: computers were first the women who calculated ballistics trajectories during the Second World War, whose actions became the template for modern automated programming. The computer, as an object, is also-always a metaphor of a human-machine relation. The name for the machine asserts a likeness between the automated mechanisms of computing and the physical and mental labour of the first human ‘computers’. Thinking of computing as a substantiated metaphor for a human-machine interaction pervades the way we talk about digital culture, most particularly in the way we think of computers as sentient, however casually. We often speak of computers as acting independently from our commands, and frequently we think of them ‘wanting’ things, ‘manipulating’ culture, or ourselves.

Pre-electronic binary code: the history of computing offers us metaphors for human-machine interaction which pervade the way we talk about digital culture today [Image by Erik Wilde under a CC BY-SA license]

Julie E. Cohen, in her 2012 book Configuring the Networked Self, describes the way the misplaced metaphor of human-computer and machine-computer has permeated utopian views of digitally mediated life: Advocates of information-as-freedom initially envisioned the Internet as a seamless and fundamentally democratic web of information [...]. That vision is encapsulated in Stewart Brand’s memorable aphorism “Information wants to be free.” [...] Information “wants” to be free in the same sense that objects with mass present in the earth’s gravitational field “want” to fall to the ground. (8) Cohen’s sharp undercutting of Brand’s aphorism points us toward the way the metaphor of computing is also an anthropomorphisation. The metaphor implicates a human desire in machine action. This linguistic slipperiness filters through discussion of computing at all levels. In particular the field of software studies — concerned with theorising code and programming as praxis and thing — contains at its core a debate on the complexity of considering code in a language which will always metaphorise, or allegorise. Responding to an article of Alexander R. Galloway’s titled “Language Wants to Be Overlooked: On Software and Ideology”, Wendy Hui Kyong Chun argues that Galloway’s stance against a kind of ‘anthropomorphization’ of code studies (his assertion that as an executable language code is ‘against interpretation’) is impossible within a discourse of critical theory. Chun argues, ‘to what extent, however, can source code be understood outside of anthropomorphization? [...] (The inevitability of this anthropomorphization is arguably evident in the title of Galloway’s article: “Language Wants to Be Overlooked” [emphasis added].)’ (Chun 305). In her critique of Galloway’s approach Wendy Chun asserts that it is not possible to extract the metaphor from the material, that they are importantly and intrinsically linked.[1] For Julie E. 
Cohen, the relationship between metaphor and digital culture-as-it-is-lived is a problematic tie that potentially damages legal and constitutional understanding of user rights. Cohen convincingly argues that a term such as ‘cyberspace’, which remains inextricable from its fictional and virtual connotations, does not transition into legal language successfully; in part because the word itself is a metaphor, premised on an imagined reality rather than ‘the situated, embodied beings who inhabit it’ (Cohen 3). And yet Cohen’s writing itself demonstrates the tenacious substance of metaphoric language, using extended exposition of metaphors as a means to think more materially about the effects of legal and digital protocol and action. In the following extract from Configuring the Networked Self, Cohen is winding down a discussion of the difficulty of forming actual policy out of freedom versus control debates surrounding digital culture. Throughout the discussion Cohen has emphasised the way that both sides of the debate are unable to substantiate their rhetoric with embodied user practice; instead Cohen identifies a language that defers specific policy aims.[2] Cohen’s own use of metaphor in this section — ‘objections to control fuel calls [...]’, ‘darknets’ (the latter in inverted commas) — is made to mean something grounded, through a kind of allegorical framework. I am not suggesting that allegory materialises metaphor — allegory functioning in part as itself an extended metaphor — but it does contextualise metaphor.

How tenacious is metaphoric language? The persistence of computational metaphors in understanding digital culture could harm legal and constitutional understandings of user rights [Image by Christian under a CC BY-NC-ND license]

This is exemplified in Cohen’s description of the ways US policy discussions regarding code, rights and privacy of the subject are bound to a kind of imaginary, and demonstrate great difficulty in becoming concrete: Policy debates have a circular, self-referential quality. Allegations of lawlessness bolster the perceived need for control, and objections to control fuel calls for increased openness. That is no accident; rigidity and license historically have maintained a curious symbiosis. In the 1920s, Prohibition fueled the rise of Al Capone; today, privately deputized copyright cops and draconian technical protection systems spur the emergence of uncontrolled “darknets.” In science fiction, technocratic, rule-bound civilizations spawn “edge cities” marked by their comparative heterogeneity and near imperviousness to externally imposed authority. These cities are patterned on the favelas and shantytowns that both sap and sustain the world’s emerging megacities. The pattern suggests an implicit acknowledgment that each half of the freedom/control binary contains and requires the other (9-10). I quote this passage at length in order to get at the way in which the ‘self-referential quality’ of policy discussion is here explained through a conceptual, and specifically literary, framing. Technology is always both imagined and built: this seems obvious, but it justifies reiteration because the material operations of technology are always metaphorically considered just as they are concretely manifest. The perilous circumstance this creates is played on in Cohen’s writing as she critiques constitutional policy that repeatedly cannot get at the embodied subject that uses digital technology; thwarted by the writing and rewriting of debate. In Cohen’s words this real situation is like the science fiction that is always-already seemingly like the real technology.
Whether William Gibson’s ‘cyberspace’, a programmer’s speculative coding, or a lawyer’s articulation of copyright, there is no easy way to break apart the relationship between the imaginary and the actual of technoculture. Perhaps then what is called for is an explosion of the metaphors that pervade contemporary digital culture. To, so to speak, push metaphors until they give way; to generate critical discourse that tests the limits of metaphors, in an effort to see what pretext they may yield for our daily digital interactions. The articles in this issue all engage with exactly this kind of discourse. In Sophie Jones’ “The Electronic Heart”, the history of computing as one of women’s labour is used to reconfigure the metaphor of a computer as an ‘electronic brain’; instead asking whether cultural anxieties about computer-simulated emotion are linked to the naturalization of women’s affective labour. In “An Ontology of Everything on the Face of the Earth”, Daniel Rourke also considers computers as a sentient metaphor: uncovering an uncanny symbiosis between what a computer wants and what a human can effect with computing, through a critical dissection of the biocybernetic leeching of John Carpenter’s 1982 film The Thing. Finally, in “The Metaphorics of Virtual Depth”, Rob Gallagher uses Marcel Proust’s treatment of novelistic spacetime to generate a critical discourse on spatial and perspectival metaphor in virtual game environments. All these articles put into play an academic approach to metaphors of computing that digs up and pulls out the stuff in between language and machine. In his introduction to Understanding Digital Humanities, David M. Berry has argued for such an approach: [what is needed is a] ‘critical understanding of the literature of the digital, and through that [to] develop a shared culture through a form of Bildung’ (8).

A wheel in the sky: Neill Blomkamp's futuristic L.A. plays on the territorial paranoia of the U.S. over alien invasion and dystopian metaphors of digitally-mediated environments [Image used under fair dealing provisions]

I am writing this article a day after seeing Neill Blomkamp’s film Elysium (2013). Reading Cohen’s assertion regarding the cyclical nature of US digital rights policy debates on control and freedom, her allegory with science fiction seems entirely pertinent. Elysium is set in 2154; the earth is overpopulated, under-resourced, and a global elite have escaped to a man-made (and machine-made) world on a spaceship, ‘Elysium’. Manufacturing for Elysium continues on earth where the population, ravaged by illness, dreams of escaping to Elysium to be cured in “Med-Pods”. The movie focuses on the slums of near future L.A. and — perhaps unsurprisingly given Blomkamp’s last film District 9 (2009) — plays on the real territorial paranoia of the U.S. over alien invasion: that the favelas of Central and South America, and the political structures they embody, are always threatening ascension. In Elysium the “edge city” is the whole world, and the technocratic power base is a spaceship garden, circling the earth’s orbit. ‘Elysium’ is a green and white paradise; a techno-civic environment in which humans and nature are equally managed, and manicured. ‘Elysium’, visually, looks a lot like Disney’s Epcot theme park — which brings me back to where I started. In Tao Lin’s Taipei, Paul’s disillusionment with technology stems in part from its failure to be as he imagined, and his imagination was informed by the Disney-fied future of Epcot. In Taipei: Paul stared at the lighted signs, some of which were animated and repeated like GIF files, attached to almost every building to face oncoming traffic [...] and sleepily thought how technology was no longer the source of wonderment and possibility it had been when, for example, he learned as a child at Epcot Center [...] that families of three, with one or two robot dogs and one maid, would live in self-sustaining, underwater, glass spheres by something like 2004 or 2008 (166).
Thinking through the metaphor of Elysium has me thinking toward the fiction of Epcot (via Tao Lin’s book). The metaphors-cum-allegories at work here are at a remove from my digitally mediated, embodied reality, but they seep through nonetheless. Rather than only look for the concrete reality that drives the metaphor, why not also engage with the messiness of the metaphor; its potential disjunction with technology as it is lived, and its persistent presence regardless.

CITATION: Zara Dinnen, "Digital Metaphors: Editor's Introduction," Alluvium, Vol. 2, No. 6 (2013): n. pag. Web. 4 December 2013, http://dx.doi.org/10.7766/alluvium.v2.6.04

]]>
Wed, 11 Dec 2013 15:42:41 -0800 http://www.alluvium-journal.org/2013/12/04/digital-metaphors-editors-introduction/
<![CDATA[Reinvention without End: Roland Barthes | Mute]]> http://www.metamute.org/editorial/articles/reinvention-without-end-roland-barthes

Peter Suchin reappraises the prismatic works of Roland Barthes – an author who defied his own pronouncement of the designation’s demise. From the Marxist of Mythologies to the ‘scientist’ of S/Z, Suchin discovers a writer who understood the pleasure of text

Roland Barthes par Roland Barthes, Seuil, 1975

In her obituary of Roland Barthes, Susan Sontag observed that Barthes never underlined passages in the books he read, instead transcribing noteworthy sections of text onto index cards for later consultation. In recounting this practice Sontag connected Barthes’ aversion to this sacrilegious act of annotation with ‘the fact that he drew, and that this drawing, which he pursued seriously, was a kind of writing.’[1] Sontag was making reference to the 700 or so drawings and paintings left by Barthes – usually regarded as a literary critic and social commentator – at his death as the result of a road accident in 1980.

Occasionally reproduced in his books, most visibly on the cover of Roland Barthes par Roland Barthes (1975), but never exhibited during his lifetime, these paintings were, as Barthes himself pointed out, the work of an amateur. ‘The Amateur’, he noted, ‘engages in painting, music, sport, science, without the spirit of mastery or competition [...] he establishes himself graciously (for nothing) in the signifier: in the immediately definitive substance of music, of painting [...] he is – he will be perhaps – the counter-bourgeois artist.’[2]

If Barthes was happy to be an amateur he nonetheless gave this word the weight of a serious critical designation. The practice of an amateur is ‘counter-bourgeois’ insofar as it manages to escape commodification, having been made for the pleasure implicit in production itself, rather than for monetary gain or cultural status. Barthes’ paintings relay an indulgence in the materiality of the brush or pen as it moved across the support, in the body’s engagement with the texture of paint, the physical trace of a shimmering track of ink or a riotous collision of colours. ‘I have an almost obsessive relation to writing instruments’, he reflected in 1973. ‘I often switch from one pen to the other just for the pleasure of it. I try out new ones. I have far too many pens – I don’t know what to do with all of them.’[3] For Barthes, who wrote all his texts by hand, this concern with the tools of writing was connected with his experience and recognition of the intimate materiality of artistic production. Each day he found time to sit at the piano, ‘fingering’ as he called it, and had taken singing lessons in his youth and acted in classical Greek theatre whilst a student at the Sorbonne in the 1930s. The ‘corporeal, sensual content of rock music...expresses a new relation to the body’, he told an interviewer in 1972: ‘it should be defended.’[4]

Barthes’ perceptive analyses of French culture, collected together in Mythologies (1957), were, like his other early writings, overtly Marxist. This approach was later superseded by one in which his prose mimicked the ostensible neutrality of scientific discourse. S/Z (1970), for example, mapped five cultural codes onto a Balzac short story which had been divided up by Barthes into 561 fragments or ‘lexias’, the text being taken to pieces as though it were being examined in a laboratory. His tour de force semiological study of The Fashion System (1967) had relied on a similarly ‘objective’ approach to the linguistic niceties of fashion writing. But the practice of the later Barthes – the Barthes of The Pleasure of the Text (1973), A Lover’s Discourse (1977), and Camera Lucida (1980) – revealed the earlier publications to be complicated machines for the generation of diverse forms of language, modes of writing, as opposed to ‘matter of fact’ commentaries or critiques. When considered together as a corpus or oeuvre, Barthes’ numerous books suggest an emphatically idiosyncratic individual and author whose ‘political’ and ‘scientific’ writings were but elements in a constantly shifting trajectory, stages in a literary career whose central motivation was the repeated reinvigoration of language. Like that of Proust, whose work he described as being for him ‘the reference work...the mandala of the entire literary cosmogony’[5], Barthes’ life might be said to be inseparable from this practice of writing. ‘The language I speak within myself is not of my time’, he mused in The Pleasure of the Text; ‘it is prey, by nature, to ideological suspicion; thus, it is with this language I must struggle. I write because I do not want the words I find...’ (p. 40). This act of writing was not so much a reflection of the ‘self’ Barthes happened to be at a given moment as a means of self-invention, of, in fact, reinvention without end.
To work on language was, for Barthes, to work upon the self, engaging with received ideas, cultural stereotypes, and cliches of every kind in order to overthrow or reposition them, moving around and through language into another order of action and effect. ‘All his writings are polemical,’ suggests Sontag, but a strong optimistic strand is clearly evident too: ‘He had little feeling for the tragic. He was always finding the advantage of a disadvantage.’[6]

But if one was, as a human being, condemned to relentlessly signify, to make, and be oneself made into ‘meanings’, Barthes seriously pursued in his watercolours and assiduous scribbles the impossible position of the exemption of meaning. If these paintings are ‘a kind of writing’, they are forgeries, fragments of false tongues and imaginary ciphers, closer to what Barthes himself termed ‘texts of bliss’, rather than ‘texts of pleasure’, though positioned somewhere between the two.

This opposition, which runs through The Pleasure of the Text, defines texts of pleasure as constituting an attractive but ultimately mundane aesthetic form, whilst those of bliss or, in the French, jouissance, comprise a radical break, not merely within language but within the very fabric of culture itself. Such a binary opposition can be found elsewhere in Barthes’ writings. The terms ‘studium’ and ‘punctum’ in Camera Lucida are a case in point, the former referring to the commonality of photographic representations with which we are today surrounded, whilst ‘punctum’ designates a puncture or disturbance in the viewer. ‘A detail overwhelms the entirety of my reading; it is an immense mutation of my interest...By the mark of something, the photograph is no longer “anything whatever”.’ (p. 49) With such an emphasis on the reader’s or viewer’s individual response Barthes moved closer and closer to autobiography and the subjective format of the jotting or journal. Though he is most famous for his 1968 essay ‘The Death of the Author’, the acutely particular tone of Barthes’ later writing appears to contradict the loss of authorial authority celebrated in this immensely influential work.[7] Rather than ‘critic’, ‘literary historian’ or ‘structuralist’, the appellation ‘writer’ looks to be the most succinct for all the different ‘Barthes’ we encounter in his writings. He is finally all these things and none, ‘a subject in process’, to use a term from his student Julia Kristeva.[8] Yet Barthes recognised that the artist or author can never control meaning, that the last word always belongs to someone else: ‘to write is to permit others to conclude one’s own discourse, and writing is only a proposition whose answer one never knows. One writes in order to be loved, one is read without being able to be loved, it is doubtless this distance which constitutes the writer.’[9]

]]>
Wed, 11 Dec 2013 15:42:37 -0800 http://www.metamute.org/editorial/articles/reinvention-without-end-roland-barthes
<![CDATA[Kipple and Things II: The Subject of Digital Detritus]]> http://machinemachine.net/text/ideas/kipple-and-things-ii-the-subject-of-digital-detritus

This text is a work in progress; a segment ripped from my thesis. To better ingest some of the ideas I throw around here, you might want to read these texts first:

- Kipple and Things: How to Hoard and Why Not To Mean
- Digital Autonomy

Captured in celluloid under the title Blade Runner, (Scott 1982) Philip K. Dick’s vision of kipple abounds in a world where mankind lives alongside shimmering, partly superior, artificial humans. The limited lifespan built into the Nexus 6 replicants [i] is echoed in the human character J.F. Sebastian, [ii] whose own degenerative disorder lends his body a kipple-like quality, even if the mind it enables sparkles so finely. This association with replication and its apparent failure chimes for both the commodity fetish and an appeal to digitisation. In Walter Benjamin’s The Work of Art in the Age of its Technological Reproducibility, mechanisation and mass production begin at the ‘original’, and work to distance the commodity from the form captured by each iteration. Not only does the aura of the original stay intact as copies of it are reproduced on the production line, but that aura is actually heightened in the system of commoditisation. As Fredric Jameson has noted, Dick’s work ‘renders our present historical by turning it into the past of a fantasized future’ (Jameson 2005, 345). Kipple piles up at the periphery of our culture, as if Dick is teasing us to look upon our own time from a future anterior in which commodity reification will have been: It hadn’t upset him that much, seeing the half-abandoned gardens and fully abandoned equipment, the great heaps of rotting supplies. He knew from the edu-tapes that the frontier was always like that, even on Earth. (Dick 2011, 143) Kipple figures the era of the commodity as an Empire, its borders slowly expanding away from the subjects yearning for Biltong replicas, seeded with mistakes. Kipple is a death of subjects, haunted by objects, but kipple is also a renewal, a rebirth. The future anterior is a frontier, one from which it might just be possible to look back upon the human without nostalgia.
Qualify the human subject with the android built in its image; the object with the entropic degradation that it must endure if its form is to be perpetuated, and you necessarily approach an ontology of garbage, junk and detritus: a glimmer of hope for the remnants of decay to assert their own identity. Commodities operate through the binary logic of fetishisation and obsolescence, in which the subject’s desire to obtain the shiny new object promotes the propagation of its form through an endless cycle of kippleisation. Kipple is an entropy of forms, ideals long since removed from their Platonic realm by the march of mimesis, and kippleisation an endless, unstoppable encounter between subjectness and thingness. Eschewing Martin Heidegger’s definition of a thing, in which objects are brought out of the background of existence through human use, (Bogost 2012, 24) Bill Brown marks the emergence of things through the encounter: As they circulate through our lives… we look through objects because there are codes by which our interpretive attention makes them meaningful, because there is a discourse of objectivity that allows us to use them as facts. A thing, in contrast, can hardly function as a window. We begin to confront the thingness of objects when they stop working for us… (Brown 2001, 4) This confrontation with the ‘being’ of the object occurs by chance when, as Brown describes, a patch of dirt on the surface of the window captures us for a moment, ‘when the drill breaks, when the car stalls… when their flow within the circuits of production and distribution, consumption and exhibition, has been arrested, however momentarily’. (Brown 2001, 4) We no longer see through the window-object (literally or metaphorically), but are brought into conflict with its own particular discrete being by the encounter with its filthy surface. 
A being previously submersed in the continuous background of world as experience, need not necessarily be untangled by an act of human-centric use. The encounter carries the effect of a mirror, for as experience stutters at the being of a thing, so the entity invested in that experience is made aware of their own quality as a thing – if only for a fleeting moment. Brown’s fascination with ‘how inanimate objects constitute human subjects’ (Brown 2001, 7) appears to instate the subject as the centre of worldly relations. But Bill Brown has spun a realist [iii] web in which to ensnare us. The object is not phenomenal, because its being exists independent of any culpability we may wish to claim. Instead a capture of object and human, of thing qua thing, occurs in mutual encounter, bringing us closer to a flat ontology ‘where humans are no longer monarchs of being but are instead among beings, entangled in beings, and implicated in other beings.’ (Bryant 2011, 40)

Brown’s appraisal of things flirts with the splendour of kipple. Think of the landfill, an engorged river of kipple, or the salvage yard, a veritable shrine to thingness. Tattered edges and featureless forms leak into one another in unsavoury shades of tea-stain brown and cobweb grey splashed from the horizon to your toes. Masses of broken, unremarkable remnants in plastic, glass and cardboard brimming over the edge of every shiny suburban enclave. The most astonishing thing about the turmoil of these places is how any order can be perceived in them at all. But thing aphasia does diminish, and it does so almost immediately. As the essential human instinct for order kicks in, things come to resemble objects. Classes of use, representation and resemblance neatly arise to cut through the pudding; to make the continuous universe discrete once again. You note a tricycle wheel there, underneath what looks like the shattered circumference of an Edwardian lamp. You almost trip over a bin bag full of carrot tops and potato peel before becoming transfixed by a pile of soap-opera magazines. Things, in Brown’s definition, are unreachable by human caprice. Things cannot be grasped, because their thingness slips back into recognition as soon as it is encountered: When such a being is named, then, it is also changed. It is assimilated into the terms of the human subject at the same time that it is opposed to it as object, an opposition that is indeed necessary for the subject’s separation and definition. (Schwenger 2004, 137) The city of Hull, the phrase ‘I will’, the surface of an ice cube and an image compression algorithm are entities each sustained by the same nominative disclosure: a paradox of things that seem to flow into one another with liquid potential, but things, nonetheless limited by their constant, necessary re-iteration in language.
There is no thing more contradictory in this regard than the human subject itself, a figure Roland Barthes tried to paradoxically side-step in his playful autobiography. Replenishing each worn-out piece of its glimmering hull, one by one, the day arrives when the entire ship of Argo has been displaced – each of its parts now distinct from those of the ‘original’ vessel. For Barthes, this myth exposes two modest activities:

- Substitution (one part replaces another, as in a paradigm)
- Nomination (the name is in no way linked to the stability of the parts) (Barthes 1994, 46)

Like the ship of Argo, human experience has exchangeable parts, but at its core, such was Barthes’ intention, ‘the subject, unreconciled, demands that language represent the continuity of desire.’ (Eakin 1992, 16) In order that the subject remain continuous, it is the messy world that we must isolate into classes and taxonomies. We collate, aggregate and collect not merely because we desire, but because without these nominative acts the pivot of desire – the illusionary subject – could not be sustained. If the powerful stance produced in Dick’s future anterior is to be sustained, the distinction between subjects aggregating objects, and objects coagulating the subject, needs flattening. [iv] Bill Brown’s appeal to the ‘flow within the circuits of production and distribution, consumption and exhibition’ (Brown 2001, 4) partially echoes Dick’s concern with the purity of the thing. Although Dick’s Biltong were probably more of a comment on the Xerox machine than the computer, the problem of the distribution of form, as it relates to commodity fetishism, enables ‘printing’ as a neat paradigm of the contemporary network-based economy. Digital things, seeming to proliferate independently of the sinuous optical cables and super-cooled server banks that disseminate them, are absolutely reliant on the process of copying.
Copying is a fundamental component of the digital network where, unlike the material commodity, things are not passed along. The digital thing is always a copy, is always copied, and is always copying: Copying the product (mechanical reproduction technologies of modernity) evolves into copying the instructions for manufacturing (computer programs as such recipes of production). In other words, not only copying copies, but more fundamentally copying copying itself. (Parikka 2008, 72) Abstracted from its material context, copying is ‘a universal principle’ (Parikka 2008, 72) of digital things, less flowing ‘within the circuits’ (Brown 2001, 4) as being that circuitry flow in and of itself. The entire network is a ship of Argo, capable, perhaps for the first time, [v] to Substitute and Nominate its own parts, or, as the character J.F. Isidore exclaims upon showing an android around his kippleised apartment: When nobody’s around, kipple reproduces itself. [my emphasis] (Dick 1968, 53) Kipple is not garbage, nor litter, for both these forms are decided upon by humans. In a recent pamphlet distributed to businesses throughout the UK, the Keep Britain Tidy Campaign made a useful distinction: Litter can be as small as a sweet wrapper, as large as a bag of rubbish, or it can mean lots of items scattered about. ENCAMS describes litter as “Waste in the wrong place caused by human agency”. In other words, it is only people that make litter. (Keep Britain Tidy Campaign, 3) Garbage is a decisive, collaborative form that humans choose to destroy or discard: a notion of detritus that enhances the autonomy, the supposed mastery, of the subject in its network. Digital networks feature their own litter in the form of copied data packets that have served their purpose, or been deemed erroneous by algorithms designed to weed out errors. These processes, according to W. Daniel Hillis, define ‘the essence of digital technology, which restores signal to near perfection at every stage’. (Hillis 1999, 18) Maintenance of the network and the routines of error management are of primary economic and ontological concern: control the networks and the immaterial products will manage themselves; control the tendency of errors to reproduce, and we maintain a vision of ourselves as masters over what Michel Serres has termed ‘the abundance of the Creation’. (Serres 2007, 47) Seeming to sever their dependency on the physical processes that underlie them, digital technologies ‘incorporate hyper-redundant error-checking routines that serve to sustain an illusion of immateriality by detecting error and correcting it’. (Kirschenbaum 2008, 12) The alleviation of error and noise is, then, an implicit feature of digital materiality. Expressed at the level of the digital image it is the visual glitch, the coding artifact, [vi] that signifies the potential of the digital object to loosen its shackles; to assert its own being. In a parody of Arthur C. Clarke’s infamous utopian appraisal of technology, another science fiction author, Bruce Sterling, delivers a neat sound bite for the digital civilisation, so that: Any sufficiently advanced technology is indistinguishable from magic (Clarke 1977, 36) …becomes… Any sufficiently advanced technology is indistinguishable from [its] garbage. (Sterling 2012)
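Hillis’s claim that digital technology ‘restores signal to near perfection at every stage’ names a concrete engineering practice: error-correcting codes. As an illustrative sketch of my own (not drawn from any of the texts cited here), a Hamming(7,4) code stores three parity bits alongside every four data bits, enough to locate and silently flip back any single corrupted bit:

```python
def hamming_encode(data):
    """Encode 4 data bits as a 7-bit Hamming(7,4) codeword."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4  # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over positions 5, 6, 7
    # positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(code):
    """Correct up to one flipped bit, then recover the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # re-check positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # re-check positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # re-check positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s3  # syndrome spells out the bad position
    if error_pos:
        c[error_pos - 1] ^= 1  # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

# A codeword 'kippleised' by noise is restored before anyone sees the decay:
word = hamming_encode([1, 0, 1, 1])
word[4] ^= 1  # a single bit rots in transit
assert hamming_decode(word) == [1, 0, 1, 1]
```

Whichever single bit decays, the syndrome locates it and the decoder erases the trace: this is the ‘hyper-redundant error-checking’ through which the network maintains its illusion of immateriality.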

Footnotes

[i] A label appropriated by Ridley Scott for the film Blade Runner, and not used by Philip K. Dick, who in the original novel, Do Androids Dream of Electric Sheep?, preferred the more archaic, general term, android. Throughout the novel characters refer to the artificial humans as ‘andys’, portraying a casual ease with which to shrug off these shimmering subjects as mere objects.

[ii] A translated version of the character, J.F. Isidore, from the original novel.

[iii] Recent attempts to disable appeals to the subject – attempts by writers such as Graham Harman, Levi R. Bryant, Bill Brown and Ian Bogost – have sought to devise, in line with Bruno Latour, an ontology in which ‘Nothing can be reduced to anything else, nothing can be deduced from anything else, everything may be allied to everything else;’ (Latour 1993, 163) one in which a discussion of the being of a chilli pepper or a wrist watch may rank alongside a similar debate about the being of a human or a dolphin. An object-oriented, flat ontology (Bryant 2011) premised on the niggling sentiment that ‘all things equally exist, yet they do not exist equally.’ (Bogost 2012, 19) Unlike Graham Harman, who uses the terms interchangeably, (Bogost 2012, 24) Bill Brown’s Thing Theory approaches the problem by strongly asserting a difference between objects and things.

[iv] I have carefully avoided using the term ‘posthuman’, but I hope its resonance remains.

[v] The resonance here with a biological imperative is intentional, although it is perhaps in this work alone that I wish to completely avoid such digital/biological metonyms. Boris Groys’ text From Image to Image File – And Back: Art in the Age of Digitisation functions neatly to bridge this work with previous ones when he states: The biological metaphor says it all: not only life, which is notorious in this respect, but also technology, which supposedly opposes nature, has become the medium of non-identical reproduction.

[vi] I have very consciously chosen to spell ‘artifact’ with an ‘i’, widely known as the American spelling of the term. This spelling of the word aligns it with computer/programming terminology (i.e.’compression artifact’), leaving the ‘e’ spelling free to echo its archaeological heritage. In any case, multiple meanings for the word can be read in each instance.

Bibliography

Barthes, Roland. 1994. Roland Barthes. University of California Press.
Bogost, Ian. 2012. Alien Phenomenology, Or What It’s Like to Be a Thing. University of Minnesota Press.
Brown, Bill. 2001. “Thing Theory.” Critical Inquiry 28 (1) (October 1): 1–22.
Bryant, Levi R. 2011. The Democracy of Objects. http://hdl.handle.net/2027/spo.9750134.0001.001.
Clarke, Arthur C. 1977. “Hazards of Prophecy: The Failure of Imagination.” In Profiles of the Future: An Inquiry into the Limits of the Possible. New York: Popular Library.
Dick, Philip K. 1968. Do Androids Dream of Electric Sheep? Random House Publishing Group, 2008.
———. 2011. The Three Stigmata of Palmer Eldritch. Houghton Mifflin Harcourt.
Eakin, Paul John. 1992. Touching the World: Reference in Autobiography. Princeton University Press.
Hillis, W. Daniel. 1999. The Pattern on the Stone: The Simple Ideas That Make Computers Work. 1st paperback ed. New York: Basic Books.
Jameson, Fredric. 2005. Archaeologies of the Future: The Desire Called Utopia and Other Science Fictions. Verso.
Keep Britain Tidy Campaign, Environmental Campaigns (ENCAMS). Your Rubbish and the Law: A Guide for Businesses. http://kb.keepbritaintidy.org/fotg/publications/rlaw.pdf.
Kirschenbaum, Matthew G. 2008. Mechanisms: New Media and the Forensic Imagination. MIT Press.
Latour, Bruno. 1993. The Pasteurization of France. Harvard University Press.
Parikka, Jussi. 2008. “Copy.” In Software Studies: A Lexicon, ed. Matthew Fuller, 70–78. Cambridge, Mass.: MIT Press.
Schwenger, Peter. 2004. “Words and the Murder of the Thing.” In Things, 135–150. University of Chicago Press Journals.
Scott, Ridley. 1982. Blade Runner. Drama, Sci-Fi, Thriller.
Serres, Michel. 2007. The Parasite. 1st University of Minnesota Press ed. Minneapolis: University of Minnesota Press.
Sterling, Bruce. 2012. “Design Fiction: Sascha Pohflepp & Daisy Ginsberg, ‘Growth Assembly’.” Wired Magazine: Beyond The Beyond.
http://www.wired.com/beyond_the_beyond/2012/01/design-fiction-sascha-pohflepp-daisy-ginsberg-growth-assembly/.

]]>
Sat, 25 Aug 2012 10:00:00 -0700 http://machinemachine.net/text/ideas/kipple-and-things-ii-the-subject-of-digital-detritus
<![CDATA[Binary Nomination]]> http://machinemachine.net/text/ideas/binary-nomination

‘An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour.’ Alan Turing, Computing Machinery and Intelligence (1950)

Replenishing each worn-out piece of its glimmering hull, one by one, the day arrives when the entire ship of Argo has been displaced – each of its parts now distinct from those of the ‘original’ vessel. For Roland Barthes, this myth exposes two modest activities:

Substitution (one part replaces another, as in a paradigm)

Nomination (the name is in no way linked to the stability of the parts) 1

The discrete breaches the continuous in the act of nomination. Take for instance the spectrum of colours, the extension of which ‘is verbally reduced to a series of discontinuous terms’ 2 such as red, green, lilac or puce. Each colour has no cause but its name. By being isolated in language the colour ‘blue’ is allowed to exist, but its existence is an act of linguistic and, some would argue, perceptual severance. The city of Hull, the phrase “I will”, the surface of an ice cube and an image compression algorithm are entities each sustained by the same nominative disclosure: a paradox of things that seem to flow into one another with liquid potential, but things, nonetheless, limited by their constant, necessary re-iteration in language. There is no thing more contradictory in this regard than the human subject, a figure Barthes paradoxically tried to side-step in his playful autobiography. Like the ship of Argo, human experience has exchangeable parts, but at its core, such was Barthes’ intention, ‘the subject, unreconciled, demands that language represent the continuity of desire.’ 3

In an esoteric paper published in 1930, Lewis Richardson teased out an analogy between flashes of human insight and the spark that leaps across a gap in an electrical circuit. The paper, entitled The Analogy Between Mental Images and Sparks, navigates around a provocative sketch stencilled into its pages: a simple indeterminate circuit whose future state is impossible to predict. Richardson’s playful label for the diagram hides a deep significance. For even at the simplest binary level, Richardson argued, computation need not necessarily be deterministic.

The discrete and the continuous are here again blurred by analogy. Electricity flowing and electricity not flowing: a binary imposition responsible for the entire history of information technology.

 

1 Roland Barthes, Roland Barthes (University of California Press, 1994), 46.

2 Roland Barthes, Elements of Semiology (Hill and Wang, 1977), 64.

3 Paul John Eakin, Touching the World: Reference in Autobiography (Princeton University Press, 1992), 16.

]]>
Thu, 19 Jul 2012 09:32:00 -0700 http://machinemachine.net/text/ideas/binary-nomination
<![CDATA[Rigid Implementation vs Flexible Materiality]]> http://machinemachine.net/text/research/rigid-implementation-vs-flexible-materiality

Wow. It’s been a while since I updated my blog. I intend to get active again here soon, with regular updates on my research. For now, I thought it might be worth posting a text I’ve been mulling over for a while (!) Yesterday I came across this old TED presentation by Daniel Hillis, and it set off a bunch of bells tolling in my head. His book The Pattern on the Stone was one I leafed through a few months back whilst hunting for some analogies about (digital) materiality. The resulting brainstorm is what follows. (This blog post, from even longer ago, acts as a natural introduction: On (Text and) Exaptation)

In the 1960s and 70s Roland Barthes named “The Text” as a network of production and exchange. Whereas “the work” was concrete, final – analogous to a material – “the text” was more like a flow, a field or event – open-ended. Perhaps even infinite. In From Work to Text, Barthes wrote: ‘The metaphor of the Text is that of the network…’ (Barthes 1979) This semiotic approach to discourse, by initiating the move from print culture to “text” culture, also helped lay the ground for a contemporary politics of content-driven media. Skipping backwards through From Work to Text, we find this statement: ‘The text must not be understood as a computable object. It would be futile to attempt a material separation of works from texts.’ I am struck here by Barthes’ use of the phrase “computable object”, as well as his attention to the “material”. Katherine Hayles, in her essay Print Is Flat, Code Is Deep (Hayles 2004), teases out the statement for us:

‘computable’ here mean[s] to be limited, finite, bound, able to be reckoned. Written twenty years before the advent of the microcomputer, his essay stands in the ironic position of anticipating what it cannot anticipate. It calls for a movement away from works to texts, a movement so successful that the ubiquitous ‘text’ has all but driven out the media-specific term book.
Hayles notes that the “ubiquity” of Barthes’ term “Text” allowed – in its wake – an erasure of media-specific terms, such as “book”. In moving from The Work to The Text, we move not just between different politics of exchange and dissemination; we also move between different forms and materialities of mediation (Manovich 2002). For Barthes the material work was computable, whereas the network of the text – its content – was not.

In 1936, the year that Alan Turing wrote his iconic paper ‘On Computable Numbers’, a German engineer by the name of Konrad Zuse began building the first working digital computer. Like its industrial predecessors, Zuse’s computer was designed to function via a series of holes encoding its program. Born as much out of convenience as financial necessity, Zuse punched his programs directly into discarded reels of 35mm film-stock. Fused together by the technologies of weaving and cinema, Zuse’s computer announced the birth of an entirely new mode of textuality. The Z3, the world’s first working programmable, fully automatic computer, arrived in 1941 (Manovich 2002). A few years earlier a young graduate by the name of Claude Shannon had published one of the most important master’s theses in history. In it he demonstrated that any logical expression of Boolean algebra could be programmed into a series of binary switches.

Today computers still function with a logic impossible to distinguish from their mid-20th-century ancestors. What has changed is the material environment within which Boolean expressions are implemented. Shannon’s work first found itself manifest in the fragile rows of vacuum tubes that drove much of the technical innovation of the 40s and 50s. In time, the very same Boolean expressions were firing, domino-like, through millions of transistors etched onto the surface of silicon chips. If we were to query the young Shannon today, he might well gawp in amazement at the material advances computer technology has gone through. But if Shannon were to examine either your digital wrist watch or the world’s most advanced supercomputer in detail, he would once again feel at home in the simple binary – on/off – switches lining those silicon highways. Here the difference between how computers are implemented and what computers are made of digs the first of many potholes along our journey.
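Shannon’s demonstration can be sketched in a few lines of modern code (a toy illustration of the idea, not anything from the thesis itself): model each switch as a boolean, wire switches in series for AND and in parallel for OR, and any Boolean expression follows.

```python
# A toy model of Shannon's insight: Boolean expressions as switch networks.
# Two switches in series pass current only if both are closed (AND);
# two in parallel pass current if either is closed (OR).

def series(a: bool, b: bool) -> bool:
    """Current flows only if both switches are closed: AND."""
    return a and b

def parallel(a: bool, b: bool) -> bool:
    """Current flows if either switch is closed: OR."""
    return a or b

def inverted(a: bool) -> bool:
    """A normally-closed relay contact: NOT."""
    return not a

# Any Boolean expression composes from these three. For example,
# XOR = (a OR b) AND NOT (a AND b):
def xor(a: bool, b: bool) -> bool:
    return series(parallel(a, b), inverted(series(a, b)))

for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))
```

The same composition works whether the “switches” are relays, vacuum tubes or transistors, which is exactly the point the paragraph above makes about implementation outliving its material.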
We live in an era not only practically driven by the computer, but an era increasingly determined by the metaphors computers have injected into our language. Let us not make the mistake of presupposing that brains (or perhaps minds) are “like” computers. Tempting though it is to reduce the baffling complexities of the human being to the functions of the silicon chip, the parallel processor or the Wide Area Network, this reduction occurs most usefully at the level of metaphor and metonym. Again the mantra must be repeated that computers function through the application of Boolean logic and binary switches, something that cannot be said of the human brain with any confidence a posteriori. Later I will explore the consequences of the processing paradigm for our understanding of ourselves, but for now, or at least for the next few paragraphs, computers are to be considered in terms of their rigid implementation and flexible materiality alone.

At the beginning of his popular science book, The Pattern on the Stone (Hillis 1999), W. Daniel Hillis narrates one of his many tales about the design and construction of a computer. Built from tinker-toys, the computer in question was functionally complex enough to “play” tic-tac-toe (noughts and crosses). The tinker-toy was chosen to indicate the apparent simplicity of computer design, but as Hillis argues himself, he may very well have used pipes and valves to create a hydraulic computer, driven by water pressure, or stripped the design back completely, using flowing sand, twigs and twine or any other recipe of switches and connectors. The important point is that the tinker-toy tic-tac-toe computer functions perfectly well for the task it is designed for; perfectly well, that is, until the tinker-toy material begins to fail. This failure is what Chapter 1 of this thesis is about: why it happens, why its happening is a material phenomenon, and how the very idea of “failure” is suspect.
Tinker-toys fail because the mechanical operation of the tic-tac-toe computer puts strain on the strings of the mechanism, eventually stretching them beyond practical use. In a perfect world, devoid of entropic behaviour, the tinker-toy computer might very well function forever, its users setting O or X conditions and the computer responding according to its program in perfect, logical order. The design of the machine, at the level of the program, is completely closed; finished; perfect. Only materially does the computer fail (or flail), noise leaking into the system until inevitable chaos ensues and the tinker-toys crumble back into jumbles of featureless matter. This apparent closure is important to note at this stage because in a computer as simple as the tic-tac-toe machine, every variable can be accounted for and thus programmed for. Were we to build a chess-playing computer from tinker-toys (pretending we could get our hands on the, no doubt, millions of tinker-toy sets we’d need), the closed condition of the computer may be less simple to qualify. Tinker-toys, hydraulic valves or whatever material you choose could be manipulated into any computer system you can imagine; even the most brain-numbingly complicated IBM supercomputer is technically possible to build from these fundamental materials. The reason we don’t do this, why we instead choose etched silicon as the material for our supercomputers, exposes another aspect of computers we need to understand before their failure becomes a useful paradigm. A chess-playing computer is probably impossible to build from tinker-toys, not because its program would be too complicated, but because tinker-toys are too prone to entropy to create a valid material environment.
The program of any chess-playing application could, theoretically, be translated into a tinker-toy equivalent, but after the 1,000th string had stretched, with millions more to go, no energy would be left in the system to trigger the next switch along the chain. Computer inputs and outputs are always at the mercy of this kind of entropy, whether in tinker-toys or miniature silicon highways. Noise and dissipation are inevitable at any material scale one cares to examine. The second law of thermodynamics ensures this. Claude Shannon and his ilk knew this, even back when the most advanced computers they had at their command couldn’t yet play tic-tac-toe. They knew that they couldn’t rely on materiality to delimit noise, interference or distortion; that no matter how well constructed a computer is, no matter how incredible it was at materially stemming entropy (perhaps with stronger string connectors, or a built-in de-stretching mechanism), entropy nonetheless was inevitable. But what Shannon and other computer innovators such as Alan Turing also knew is that their saviour lay in how computers were implemented. Again, the split here is incredibly important to note:

Flexible materiality: how and of what a computer is constructed, e.g. tinker-toys, silicon

Rigid implementation: Boolean logic enacted through binary on/off switches (usually with some kind of input → storage → feedback/program function → output); effectively, how a computer works
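One way to see why the rigid implementation survives a flexible, noisy materiality is that every digital stage restores its signal: however much the physical level drifts (within limits), the next stage snaps it back to a clean 0 or 1. A minimal sketch, with invented noise figures purely for illustration:

```python
import random

def noisy_wire(bit: int, noise: float = 0.2) -> float:
    """A 'material' stage: the clean 0/1 level drifts by up to ±noise."""
    return bit + random.uniform(-noise, noise)

def restore(level: float) -> int:
    """A 'digital' stage: threshold the degraded level back to 0 or 1."""
    return 1 if level >= 0.5 else 0

# Pass a bit through a thousand noisy stages, restoring it each time.
bit = 1
for _ in range(1000):
    bit = restore(noisy_wire(bit))
print(bit)  # still 1: restoration at every stage keeps the drift from accumulating
```

Remove the `restore()` call and the drift compounds stage by stage, which is the tinker-toy computer’s stretched strings in miniature.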

Boolean logic was not enough on its own. Computers, if they were to avoid entropy ruining their logical operations, needed to have built within them an error-management protocol. This protocol is still in existence in EVERY computer in the world. Effectively it takes the form of a collection of parity bits delivered alongside each packet of data that computers, networks and software deal with. The bulk of the data contains the binary bits encoding the intended quarry, but the receiving element in the system also checks the main bits against the parity bits to determine whether any noise has crept into the system. What is crucial to note here is that the error-checking of computers happens at the level of their rigid implementation. It is also worth noting that in many such schemes, for every eight 0s and 1s delivered by a computer system, at least one further bit serves an error-checking function. W. Daniel Hillis puts the stretched strings of his tinker-toy mechanism into clear distinction and, in doing so, re-introduces an umbrella term set to dominate this chapter:

I constructed a later version of the Tinker Toy computer which fixed the problem, but I never forgot the lesson of the first machine: the implementation technology must produce perfect outputs from imperfect inputs, nipping small errors in the bud. This is the essence of digital technology, which restores signals to near perfection at every stage. It is the only way we know – at least, so far – for keeping a complicated system under control. (Hillis 1999, 18)

Bibliography

Barthes, Roland. 1979. ‘From Work to Text.’ In Textual Strategies: Perspectives in Post-Structuralist Criticism, ed. Josue V. Harari, 73–81. Ithaca, NY: Cornell University Press.
Hayles, N. Katherine. 2004. ‘Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis.’ Poetics Today 25 (1): 67–90. doi:10.1215/03335372-25-1-67.
Hillis, W. 1999. The Pattern on the Stone: The Simple Ideas That Make Computers Work. 1st paperback ed. New York: Basic Books.
Manovich, Lev. 2002. The Language of New Media. 1st MIT Press pbk. ed. Cambridge, Mass.: MIT Press.

]]>
Thu, 07 Jun 2012 06:08:07 -0700 http://machinemachine.net/text/research/rigid-implementation-vs-flexible-materiality
<![CDATA[Sloppy MicroChips: Can a fair comparison be made between biological and silicon entropy?]]> http://ask.metafilter.com/mefi/217051

Was reading about microchips that are designed to allow a few mistakes (known as 'Sloppy Chips'), and pondering equivalent kinds of 'coding' errors and entropy in biological systems. Can a fair comparison be made between the two? OK, to set up my question I probably need to run through my (basic) understanding of biological vs silicon entropy...

In the transistor, error is a bad thing (it gets in the way of doing the required job as efficiently and cheaply as possible), and is metered by parity bits that come as standard with every packet of data transmitted. But in biological systems error is not necessarily bad. Most copying errors are filtered out, but some propagate, and some of those might turn out to be beneficial to the organism (in thermodynamics these are sometimes known as "autonomy producing equivocations").
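To make the silicon side of the comparison concrete, here is a toy sketch of how a single (even) parity bit works — my own illustration, not any particular chip's scheme:

```python
def add_parity(bits):
    """Append an even-parity bit so the total count of 1s becomes even."""
    return bits + [sum(bits) % 2]

def check_parity(bits_with_parity):
    """True if the count of 1s is still even (no single-bit error in transit)."""
    return sum(bits_with_parity) % 2 == 0

word = [1, 0, 1, 1, 0, 0, 1]       # 7 data bits
sent = add_parity(word)            # 8 bits on the wire
assert check_parity(sent)          # arrives intact

sent[2] ^= 1                       # a single bit flips in transit
assert not check_parity(sent)      # the error is detected (not corrected)
```

A lone parity bit detects any odd number of flips but can't locate or fix them, which is why real systems layer richer codes (Hamming, CRC, ECC) on top. Biology's "filter", by contrast, is leaky by design — some errors get through.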

Relating to the article about 'sloppy chips', how do entropy and energy efficiency factor into this? For the silicon chip, efficiency leads to heat (a problem); for the string of DNA, efficiency leads to fewer mutations, and thus less change within populations, and thus, inevitably, less capacity for organisms to diversify and react to their environments, leading to no evolution, no change, no good. Slightly less efficiency is good for biology and, it seems, good for some kinds of calculations and computer processes.

What work has been done on these connections I draw between the biological and the silicon?

I'm worried that my analogy is limited, based as it is on a paradigm for living systems that too closely mirrors the digital systems we have built. Can DNA and binary parity-bit transistors be understood on their own terms, without resorting to one as a metaphor for understanding the other?

Where do the boundaries lie in comparing the two?

]]>
Tue, 05 Jun 2012 10:05:10 -0700 http://ask.metafilter.com/mefi/217051
<![CDATA[What is the biological equivalent of discovering the Higgs Boson?]]> http://www.nature.com/news/life-changing-experiments-the-biological-higgs-1.10310#/

We put the question to experts in various fields. Biology is no stranger to large, international collaborations with lofty goals, they pointed out — the race to sequence the human genome around the turn of the century had scientists riveted. But most biological quests lack the mathematical precision, focus and binary satisfaction of a yes-or-no answer that characterize the pursuit of the Higgs. “Most of what is important is messy, and not given to a moment when you plant a flag and crack the champagne,” says Steven Hyman, a neuroscientist at the Broad Institute in Cambridge, Massachusetts.

Nevertheless, our informal survey shows that the field has no shortage of fundamental questions that could fill an anticipatory auditorium. These questions concern where and how life started — and why it ends.

]]>
Thu, 29 Mar 2012 08:44:00 -0700 http://www.nature.com/news/life-changing-experiments-the-biological-higgs-1.10310#/
<![CDATA["As you can probably imagine, this took some effort to make."]]> http://www.metafilter.com/114058/As-you-can-probably-imagine-this-took-some-effort-to-make

"The calculator itself is just over 250x200x100 blocks. It contains 2 6-digit BCD number selectors, 2 BCD-to-binary decoders, 3 binary-to-BCD decoders, 6 BCD adders and subtractors, a 20 bit (output) multiplier, 10 bit divider, a memory bank and additional circuitry for the graphing function." Yes, someone built a working scientific calculator, in Minecraft.

]]>
Wed, 21 Mar 2012 10:44:41 -0700 http://www.metafilter.com/114058/As-you-can-probably-imagine-this-took-some-effort-to-make
<![CDATA[Minecraft Scientific/Graphing calculator - Sin Cos Tan Log Square root]]> http://www.youtube.com/watch?v=wgJfVRhotlQ&feature=youtube_gdata

Hello there! (Reddit name: MaxSGB) Here is the project I've been working on ^.^ Specs: 6 digit addition and subtraction; 3 digit multiplication, division and trigonometric/scientific functions. (The reason these are only 3 digits is that multiplication and division would take a long time to decode/complete/encode. Also, the fraction display is hard enough to build for 3 digits, let alone 6 – 6 digit RAM would not only be massive, but a bit pointless since the curves follow the same pattern surrounding the peaks.) Graphing of y=mx+c functions, quadratic functions, and equation solving of the form mx+c=0.

The screen and keypad were always meant to be the main feature of this machine. The main display boasts 25 digits. Square root signs are displayed and can change to accommodate any number of digits. Square root signs, add, minus, multiply and divide signs are displayed at appropriate times, and there is a full fraction display. The 7-segments for the fractions are the smallest possible, being only 3 wide, and stackable vertically and horizontally.

I made a custom texture pack for the keypad, and made wooden pressure plates invisible in order to get the best effect.

The calculator itself is just over 250x200x100 blocks. It contains 2 6-digit BCD number selectors, 2 BCD-to-binary decoders, 3 binary-to-BCD decoders, 6 BCD adders and subtractors, a 20 bit (output) multiplier, 10 bit divider, a memory bank and additional circuitry for the graphing function.
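For anyone unfamiliar with the jargon: BCD (binary-coded decimal) stores each decimal digit in its own 4-bit group, and the decoders listed above shuttle numbers between that form and plain binary. A rough Python analogy of those two conversions (the redstone versions are, of course, enormously more involved):

```python
def bcd_to_binary(digits):
    """Decode a list of decimal digits (each held as a 4-bit BCD group) to an int."""
    value = 0
    for d in digits:
        assert 0 <= d <= 9, "each BCD group encodes exactly one decimal digit"
        value = value * 10 + d
    return value

def binary_to_bcd(value):
    """Encode a non-negative int as the list of digits a BCD display would show."""
    return [int(d) for d in str(value)]

print(bcd_to_binary([4, 0, 7]))   # 407
print(binary_to_bcd(407))         # [4, 0, 7]
```

The calculator needs both directions because the keypad and 7-segment displays speak in decimal digits while the adders and multiplier work in binary.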

Music: City of Innocence, Gem Droids, Rocketry - Dan O'Connor - Royalty-Free music at http://Danosongs.com

Killing Time - Kevin MacLeod - http://Incompetech.com

Thank you very much for watching. As you can probably imagine, this took some effort to make, and so a like would be very much appreciated. =]

]]>
Wed, 21 Mar 2012 09:51:21 -0700 http://www.youtube.com/watch?v=wgJfVRhotlQ&feature=youtube_gdata