MachineMachine /stream - search for genetics https://machinemachine.net/stream/feed en-us http://blogs.law.harvard.edu/tech/rss LifePress therourke@gmail.com <![CDATA[Can Progressives Be Convinced That Genetics Matters? | The New Yorker]]> https://www.newyorker.com/magazine/2021/09/13/can-progressives-be-convinced-that-genetics-matters

Until she was thirty-three, Kathryn Paige Harden, a professor of psychology at the University of Texas at Austin, had enjoyed a vocational ascent so steady that it seemed guided by the hand of predestination.

]]>
Tue, 14 Sep 2021 03:51:13 -0700 https://www.newyorker.com/magazine/2021/09/13/can-progressives-be-convinced-that-genetics-matters
<![CDATA[What does it mean to be human? - Digg]]> https://digg.com/2017/human-evolution-mosaic

Gaia Vince discovers that analyzing the genetics of ancient humans means changing ideas about our evolution. The Rock of Gibraltar appears out of the plane window as an immense limestone monolith sharply rearing up from the base of Spain into the Mediterranean.

]]>
Wed, 22 Jan 2020 18:57:10 -0800 https://digg.com/2017/human-evolution-mosaic
<![CDATA[Crispr Can Speed Up Nature—and Change How We Grow Food | WIRED]]> https://www.wired.com/story/crispr-tomato-mutant-future-of-food/

Although he worked on a farm as a teenager and has a romantic attachment to the soil, Lippman isn’t a farmer. He’s a plant biologist at Cold Spring Harbor Laboratory in New York with an expertise in genetics and development. And these greenhouse plants aren’t ordinary tomatoes.

]]>
Sat, 11 Aug 2018 03:42:16 -0700 https://www.wired.com/story/crispr-tomato-mutant-future-of-food/
<![CDATA[How Neanderthals influenced human genetics at the crossroads of Asia and Europe - HeritageDaily - Heritage & Archaeology News]]> https://www.heritagedaily.com/2017/10/neanderthals-influenced-human-genetics-crossroads-asia-europe/117147

When the ancestors of modern humans migrated out of Africa, they passed through the Middle East and Turkey before heading deeper into Asia and Europe.

]]>
Sun, 26 Nov 2017 07:30:40 -0800 https://www.heritagedaily.com/2017/10/neanderthals-influenced-human-genetics-crossroads-asia-europe/117147
<![CDATA[Benjamin Bratton. The Post-Anthropocene. 2015]]> http://www.youtube.com/watch?v=FrNEHCZm_Sc


The Post-Anthropocene: The Turing-incomplete Orchid Mantis Evolves Machine Vision. Public open lecture for the students and faculty of the European Graduate School (EGS) Media and Communication Studies department program, Saas-Fee, Switzerland, 2015.

Benjamin H. Bratton, (b. 1968), is an American theorist, sociologist, and professor of visual arts, contemporary social and political theory, philosophy, and design. His research deals with computational media and infrastructure, design research management & methodologies, classical and contemporary sociological theory, architecture and urban design issues, and the politics of synthetic ecologies and biologies.

Bratton completed his doctoral studies in the sociology of technology at the University of California, Santa Barbara, and was the Director of the Advanced Strategies Group at Yahoo! before expanding his cross-disciplinary research and practice in academia. He taught in the Department of Design/Media Art at UCLA from 2003 to 2008, and at SCI-Arc (the Southern California Institute of Architecture) for a decade, and continues to teach as a member of the Visiting Faculty. While at SCI-Arc, Benjamin Bratton and Hernan Diaz-Alonso co-founded the XLAB courses, which placed students in laboratory settings where they could work directly and comprehensively in robotics, scripting, biogenetics, genetic codification, and cellular systems. Currently, in addition to his professorship at EGS, Bratton is an associate professor of Visual Arts at the University of California, San Diego, where he also directs the Center for Design and Geopolitics, partnering with the California Institute for Telecommunications and Information Technology.

In addition to his formal positions, Benjamin H. Bratton is a regular visiting lecturer at numerous universities and institutions, including Columbia University, Yale University, Pratt Institute, the Bartlett School of Architecture, the University of Pennsylvania, the University of Southern California, the University of California, Art Center College of Design, Parsons The New School for Design, the University of Michigan, Brown University, the University of Applied Arts in Vienna, Bauhaus University, Moscow State University, the Moscow Institute for Higher Economics, and the Architectural Association School of Architecture in London.

Bratton's current projects focus on the political geography of cloud computing, massively granular universal addressing systems, and alternate models of ecological governance. In his most recent book, The Stack: On Software and Sovereignty (MIT Press, 2015), Bratton asks the question, "What has planetary-scale computation done to our geopolitical realities?" and in response offers the proposition "that smart grids, cloud computing, mobile software and smart cities, universal addressing systems, ubiquitous computing, and other types of apparently unrelated planetary-scale computation can be viewed as forming a coherent whole—an accidental megastructure called The Stack that is both a computational apparatus and a new geopolitical architecture."

Other recent texts include: Some Trace Effects of the Post-Anthropocene: On Accelerationist Geopolitical Aesthetics; On Apps and Elementary Forms of Interfacial Life: Object, Image, Superimposition; Deep Address; What We Do is Secrete: On Virilio, Planetarity and Data Visualization; Geoscapes & the Google Caliphate: On Mumbai Attacks; Root the Earth: On Peak Oil Apophenia; Suspicious Images/Latent Interfaces (with Natalie Jeremijenko); iPhone City; and Logistics of Habitable Circulation (introduction to the 2008 edition of Paul Virilio’s Speed and Politics). Recent online lectures include: 2 or 3 Things I Know About The Stack, at the Bartlett School of Architecture, University of London, and the University of Southampton; Cloud Feudalism at Proto/E/Co/Logics 002, Rovinj, Croatia; Nanoskin at Parsons School of Design; On the Nomos of the Cloud at the Berlage Institute, Rotterdam, École Normale Supérieure, Paris, and MOCA, Los Angeles; Accidental Geopolitics at The Guardian Summit, New York; Ambivalence and/or Utopia at the University of Michigan and UC Irvine; and Surviving the Interface at Parsons School of Design.

]]>
Tue, 18 Aug 2015 08:42:48 -0700 http://www.youtube.com/watch?v=FrNEHCZm_Sc
<![CDATA[Meet the Father of Digital Life]]> http://nautil.us/issue/14/mutation/meet-the-father-of-digital-life

In 1953, at the dawn of modern computing, Nils Aall Barricelli played God. Clutching a deck of playing cards in one hand and a stack of punched cards in the other, Barricelli hovered over one of the world’s earliest and most influential computers, the IAS machine, at the Institute for Advanced Study in Princeton, New Jersey. During the day the computer was used to make weather forecasting calculations; at night it was commandeered by the Los Alamos group to calculate ballistics for nuclear weaponry. Barricelli, a maverick mathematician, part Italian and part Norwegian, had finagled time on the computer to model the origins and evolution of life.

Inside a simple red brick building at the northern corner of the Institute’s wooded wilds, Barricelli ran models of evolution on a digital computer. His artificial universes, which he fed with numbers drawn from shuffled playing cards, teemed with creatures of code—morphing, mutating, melting, maintaining. He created laws that determined, independent of any foreknowledge on his part, which assemblages of binary digits lived, which died, and which adapted. As he put it in a 1961 paper, in which he speculated on the prospects and conditions for life on other planets, “The author has developed numerical organisms, with properties startlingly similar to living organisms, in the memory of a high speed computer.” For these coded critters, Barricelli became a maker of worlds.

Until his death in 1993, Barricelli floated between biological and mathematical sciences, questioning doctrine, not quite fitting in. “He was a brilliant, eccentric genius,” says George Dyson, the historian of technology and author of Darwin Among The Machines and Turing’s Cathedral, which feature Barricelli’s work. “And the thing about geniuses is that they just see things clearly that other people don’t see.”

Barricelli programmed some of the earliest computer algorithms that resemble real-life processes: a subdivision of what we now call “artificial life,” which seeks to simulate living systems—evolution, adaptation, ecology—in computers. Barricelli presented a bold challenge to the standard Darwinian model of evolution by competition by demonstrating that organisms evolved by symbiosis and cooperation.


In fact, Barricelli’s projects anticipated many current avenues of research, including cellular automata, computer programs involving grids of numbers paired with local rules that can produce complicated, unpredictable behavior. His models bear striking resemblance to the one-dimensional cellular automata—life-like lattices of numerical patterns—championed by Stephen Wolfram, whose search tool Wolfram Alpha helps power the brain of Siri on the iPhone. Nonconformist biologist Craig Venter, in defending his creation of a cell with a synthetic genome—“the first self-replicating species we’ve had on the planet whose parent is a computer”—echoes Barricelli.

Barricelli’s experiments had an aesthetic side, too. Uncommonly for the time, he converted the digital 1s and 0s of the computer’s stored memory into pictorial images. Those images, and the ideas behind them, would influence computer animators in generations to come. Pixar cofounder Alvy Ray Smith, for instance, says Barricelli stirred his earliest thinking about the possibilities for computer animation, and beyond that, his philosophical muse. “What we’re really talking about here is the notion that living things are computations,” he says. “Look at how the planet works and it sure does look like a computation.”

Despite Barricelli’s pioneering experiments, barely anyone remembers him. “I have not heard of him to tell you the truth,” says Mark Bedau, professor of humanities and philosophy at Reed College and editor of the journal Artificial Life. “I probably know more about the history than most in the field and I’m not aware of him.”

Barricelli was an anomaly, a mutation in the intellectual zeitgeist, an unsung hero who has mostly languished in obscurity for the past half century. “People weren’t ready for him,” Dyson says. That a progenitor has not received much acknowledgment is a failing not unique to science. Visionaries often arrive before their time. Barricelli charted a course for the digital revolution, and history has been catching up ever since.

EVOLUTION BY THE NUMBERS: Barricelli converted his computer tallies of 1s and 0s into images. In this 1953 Barricelli print, explains NYU associate professor Alexander Galloway, the chaotic center represents mutation and disorganization. The more symmetrical fields toward the margins depict Barricelli’s evolved numerical organisms. From the Shelby White and Leon Levy Archives Center, Institute for Advanced Study, Princeton.

Barricelli was born in Rome on Jan. 24, 1912. According to Richard Goodman, a retired microbiologist who met and befriended the mathematician in the 1960s, Barricelli claimed to have invented calculus before his tenth birthday. When the young boy showed the math to his father, he learned that Newton and Leibniz had preempted him by centuries. While a student at the University of Rome, Barricelli studied mathematics and physics under Enrico Fermi, a pioneer of quantum theory and nuclear physics. A couple of years after graduating in 1936, he immigrated to Norway with his recently divorced mother and younger sister.

As World War II raged, Barricelli studied. An uncompromising oddball who teetered between madcap and mastermind, Barricelli had a habit of exclaiming “Absolut!” when he agreed with someone, or “Scandaloos!” when he found something disagreeable. His accent was infused with Scandinavian and Romantic pronunciations, making it occasionally challenging for colleagues to understand him. Goodman recalls one of his colleagues at the University of California, Los Angeles who just happened to be reading Barricelli’s papers “when the mathematician himself barged in and, without ceremony, began rattling off a stream of technical information about his work on phage genetics,” a science that studies gene mutation, replication, and expression through model viruses. Goodman’s colleague understood only fragments of the speech, but realized it pertained to what he had been reading.

“Are you familiar with the work of Nils Barricelli?” he asked.

“Barricelli! That’s me!” the mathematician cried.

Although he submitted a 500-page dissertation on the statistical analysis of climate variation in 1946, Barricelli never completed his Ph.D. In a scene recalling the moment in the movie Amadeus when the Emperor of Austria commends Mozart’s performance, save for there being “too many notes,” Barricelli’s thesis committee directed him to cut the paper to a tenth of its size, or else it would not accept the work. Rather than capitulate, Barricelli forfeited the degree.

Barricelli began modeling biological phenomena on paper, but his calculations were slow and limited. He applied to study in the United States as a Fulbright fellow, where he could work with the IAS machine. As he wrote on his original travel grant submission in 1951, he sought “to perform numerical experiments by means of great calculating machines,” in order to clarify, through mathematics, “the first stages of evolution of a species.” He also wished to mingle with great minds—“to communicate with American statisticians and evolution-theorists.” By then he had published papers on statistics and genetics, and had taught Einstein’s theory of relativity. In his application photo, he sports a pyramidal moustache, hair brushed to the back of his elliptic head, and hooded, downturned eyes. At the time of his application, he was a 39-year-old assistant professor at the University of Oslo.

Although the program initially rejected him due to a visa issue, in early 1953 Barricelli arrived at the Institute for Advanced Study as a visiting member. “I hope that you will be finding Mr. Baricelli [sic] an interesting person to talk with,” wrote Ragnar Frisch, a colleague of Barricelli’s who would later win the first Nobel Prize in Economics, in a letter to John von Neumann, a mathematician at IAS, who helped devise the institute’s groundbreaking computer. “He is not very systematic always in his exposition,” Frisch continued, “but he does have interesting ideas.”

PSYCHEDELIC BARRICELLI: In this recreation of a Barricelli experiment, NYU associate professor Alexander Galloway has added color to show the gene groups more clearly. Each swatch of color signals a different organism. Borders between the color fields represent turbulence as genes bounce off and meld with others, symbolizing Barricelli’s symbiogenesis. Courtesy Alexander Galloway.

Centered above Barricelli’s first computer logbook entry at the Institute for Advanced Study, in handwritten pencil script dated March 3, 1953, is the title “Symbiogenesis problem.” This was his theory of proto-genes, virus-like organisms that teamed up to become complex organisms: first chromosomes, then cellular organs, onward to cellular organisms and, ultimately, other species. Like parasites seeking a host, these proto-genes joined together, according to Barricelli, and through their mutual aid and dependency, originated life as we know it.

Standard neo-Darwinian doctrine maintained that natural selection was the main means by which species formed. Slight variations and mutations in genes combined with competition led to gradual evolutionary change. But Barricelli disagreed. He pictured nimbler genes acting as a collective, cooperative society working together toward becoming species. Darwin’s theory, he concluded, was inadequate. “This theory does not answer our question,” he wrote in 1954, “it does not say why living organisms exist.”

Barricelli coded his numerical organisms on the IAS machine in order to prove his case. “It is very easy to fabricate or simply define entities with the ability to reproduce themselves, e.g., within the realm of arithmetic,” he wrote.

The early computer looked sort of like a mix between a loom and an internal combustion engine. Lining the middle region were 40 Williams cathode ray tubes, which served as the machine’s memory. Within each tube, a beam of electrons (the cathode ray) bombarded one end, creating a 32-by-32 grid of points, each consisting of a slight variation in electrical charge. There were five kilobytes of memory total stored in the machine. Not much by today’s standards, but back then it was an arsenal.
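That five-kilobyte figure follows directly from the numbers above; a quick check of the arithmetic (Python used only for illustration):

```python
# Memory of the IAS machine, from the figures given in the text:
# 40 Williams cathode ray tubes, each holding a 32-by-32 grid of
# charge spots, one bit per spot.
tubes = 40
bits_per_tube = 32 * 32              # 1,024 bits per tube
total_bits = tubes * bits_per_tube   # 40,960 bits
total_bytes = total_bits // 8        # 5,120 bytes, i.e. five kilobytes
print(total_bits, total_bytes)
```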


Inside the device, Barricelli programmed steadily mutable worlds each with rows of 512 “genes,” represented by integers ranging from negative to positive 18. As the computer cycled through hundreds and thousands of generations, persistent groupings of genes would emerge, which Barricelli deemed organisms. The trick was to tweak his manmade laws of nature—“norms,” as he called them—which governed the universe and its entities just so. He had to maintain these ecosystems on the brink of pandemonium and stasis. Too much chaos and his beasts would unravel into a disorganized shamble; too little and they would homogenize. The sweet spot in the middle, however, sustained life-like processes.
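Barricelli's actual "norms" were intricate and are not reproduced here, but the general scheme, a ring of integer genes evolving under a local propagation rule with collisions producing mutations, can be sketched. In the sketch below, only the 512-gene universe and the ±18 gene range come from the text; the reproduction rule itself is a hypothetical stand-in, not Barricelli's own:

```python
import random

SIZE = 512        # genes per universe, as described above
GENE_RANGE = 18   # integers from -18 to +18; 0 means an empty cell

def step(universe, rng):
    """One generation under a toy 'norm' (hypothetical, not Barricelli's):
    each nonzero gene g at position i copies itself g cells away (wrapping
    around the ring); a collision between unequal genes yields a mutation."""
    new = [0] * SIZE
    for i, g in enumerate(universe):
        if g == 0:
            continue
        j = (i + g) % SIZE
        if new[j] == 0:
            new[j] = g  # empty cell: the gene propagates
        elif new[j] != g:
            # two different genes landed here: replace with a random mutant
            new[j] = rng.randint(-GENE_RANGE, GENE_RANGE)
    return new

rng = random.Random(1953)
# Seed the universe, as Barricelli did with numbers from shuffled cards
universe = [rng.randint(-GENE_RANGE, GENE_RANGE) for _ in range(SIZE)]
for _ in range(100):
    universe = step(universe, rng)

# Persistent groupings of gene values that survive many generations play
# the role of Barricelli's 'organisms' in this toy version.
survivors = {g for g in universe if g != 0}
print(len(survivors))
```

Tuning such a rule toward the "sweet spot" between chaos and homogeneity, as the article describes, is exactly the balancing act Barricelli faced.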

Barricelli’s balancing act was not always easygoing. His first trials were riddled with pests: primitive, often single numeric genes invaded the space and gobbled their neighbors. Typically, he was only able to witness a couple of hereditary changes, or a handful at best, before the world unwound. To create lasting evolutionary processes, he needed to handicap these pests’ ability to rapidly reproduce. By the time he returned to the Institute in 1954 to begin a second round of experiments, Barricelli made some critical changes. First, he capped the proliferation of the pests to once per generation. That constraint allowed his numerical organisms enough leeway to outpace the pests. Second, he began employing different norms to different sections of his universes. That forced his numerical organisms always to adapt.

Even in the earlier universes, Barricelli realized that mutation and natural selection alone were insufficient to account for the genesis of species. In fact, most single mutations were harmful. “The majority of the new varieties which have shown the ability to expand are a result of crossing-phenomena and not of mutations, although mutations (especially injurious mutations) have been much more frequent than hereditary changes by crossing in the experiments performed,” he wrote.

When an organism became maximally fit for an environment, the slightest variation would only weaken it. In such cases, it took at least two modifications, effected by a cross-fertilization, to give the numerical organism any chance of improvement. This indicated to Barricelli that symbioses, gene crossing, and “a primitive form of sexual reproduction,” were essential to the emergence of life.

“Barricelli immediately figured out that random mutation wasn’t the important thing; in his first experiment he figured out that the important thing was recombination and sex,” Dyson says. “He figured out right away what took other people much longer to figure out.” Indeed, Barricelli’s theory of symbiogenesis can be seen as anticipating the work of independent-thinking biologist Lynn Margulis, who in the 1960s showed that it was not necessarily genetic mutations over generations, but symbiosis, notably of bacteria, that produced new cell lineages.

Barricelli saw his computer organisms as a blueprint of life—on this planet and any others. “The question whether one type of symbio-organism is developed in the memory of a digital computer while another type is developed in a chemical laboratory or by a natural process on some planet or satellite does not add anything fundamental to this difference,” he wrote. A month after Barricelli began his experiments on the IAS machine, Crick and Watson announced the shape of DNA as a double helix. But learning about the shape of biological life didn’t put a dent in Barricelli’s conviction that he had captured the mechanics of life on a computer. Let Watson and Crick call DNA a double helix. Barricelli called it “molecule-shaped numbers.”


What buried Barricelli in obscurity is something of a mystery. “Being uncompromising in his opinions and not a team player,” says Dyson, no doubt led to Barricelli’s “isolation from the academic mainstream.” Dyson also suspects Barricelli and the indomitable Hungarian mathematician von Neumann, an influential leader at the Institute for Advanced Study, didn’t hit it off. Von Neumann appears to have ignored Barricelli. “That was sort of fatal because everybody looked to von Neumann as the grandfather of self-replicating machines.”

Ever so slowly, though, Barricelli is gaining recognition. That stems in part from another of Barricelli’s remarkable developments; certainly one of his most beautiful. He didn’t rest with creating a universe of numerical organisms, he converted his organisms into images. His computer tallies of 1s and 0s would then self-organize into visual grids of exquisite variety and texture. According to Alexander Galloway, associate professor in the department of media, culture, and communication at New York University, a finished Barricelli “image yielded a snapshot of evolutionary time.”

When Barricelli printed sections of his digitized universes, they were dazzling. To modern eyes they might look like satellite imagery of an alien geography: chaotic oceans, stratigraphic outcrops, and the contours of a single stream running down the center fold, fanning into a delta at the patchwork’s bottom. “Somebody needs to do a museum show and show this stuff because they’re outrageous,” Galloway says.


Today, Galloway, a member of Barricelli’s small but growing cadre of boosters, has recreated the images. Following methods described by Barricelli in one of his papers, Galloway has coded an applet using the computer language Processing to revive Barricelli’s numerical organisms—with slight variation. While Barricelli encoded his numbers as eight-unit-long proto-pixels, Galloway condensed each to a single color-coded cell. By collapsing each number into a single pixel, Galloway has been able to fit eight times as many generations in the frame. These revitalized mosaics look like psychedelic cross-sections of the fossil record. Each swatch of color represents an organism, and when one color field bumps up against another one, that’s where cross-fertilization takes place.

“You can see these kinds of points of turbulence where the one color meets another color,” Galloway says, showing off the images on a computer in his office. “That’s a point where a number would be—or a gene would be—sort of jumping from one organism to another.” Here, in other words, is artificial life—Barricelli’s symbiogenesis—frozen in amber. And cyan and lavender and teal and lime and fuchsia.

Galloway is not the only one to be struck by the beauty of Barricelli’s computer-generated digital images. As a doctoral student, Pixar cofounder Smith became familiar with Barricelli’s work while researching the history of cellular automata for his dissertation. When he came across Barricelli’s prints he was astonished. “It was remarkable to me that with such crude computing facilities in the early 50s, he was able to be making pictures,” Smith says. “I guess in a sense you can say that Barricelli got me thinking about computer animation before I thought about computer animation. I never thought about it that way, but that’s essentially what it was.”

Cyberspace now swells with Barricelli’s progeny. Self-replicating strings of arithmetic live out their days in the digital wilds, increasingly independent of our tampering. The fittest bits survive and propagate. Researchers continue to model reduced, pared-down versions of life artificially, while the real world bursts with Boolean beings. Scientists like Venter conjure synthetic organisms, assisted by computer design. Swarms of autonomous codes thrive, expire, evolve, and mutate underneath our fingertips daily. “All kinds of self-reproducing codes are out there doing things,” Dyson says. In our digital lives, we are immersed in Barricelli’s world.

]]>
Fri, 20 Jun 2014 06:08:03 -0700 http://nautil.us/issue/14/mutation/meet-the-father-of-digital-life
<![CDATA[As We May Think: A 1945 Essay on Information Overload, "Curation," and Open]]> http://brainpickings.org/index.php/2012/10/11/as-we-may-think-1945

Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. If the aggregate time spent in writing scholarly works and in reading them could be evaluated, the ratio between these amounts of time might well be startling. Those who conscientiously attempt to keep abreast of current thought, even in restricted fields, by close and continuous reading might well shy away from an examination calculated to show how much of the previous month’s efforts could be produced on call. Mendel’s concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.

]]>
Mon, 15 Oct 2012 01:37:00 -0700 http://brainpickings.org/index.php/2012/10/11/as-we-may-think-1945
<![CDATA[A Diatribe from the Remains of Dr. Fred McCabe]]> http://www.3quarksdaily.com/3quarksdaily/2010/07/a-diatribe-from-the-remains-of-dr-fred-mccabe.html

About a month ago in handling the remains of one Dr. Fred McCabe I found rich notes of contemplation on the subject of information theory. It appears that Fred could have written an entire book on the intricacies of hidden data, encoded messages and deceptive methods of transmission. Instead his notes exist in the form of a cryptic assemblage of definitions and examples, arranged into what Dr. McCabe himself labelled a series of ‘moments’.

I offer these moments alongside some of the ten thousand images Dr. McCabe amassed in a separate, but intimately linked, archive. The preface to this abridged compendium is little capable of preparing one for the disarray of material, but by introducing this text with Fred's own words it is my hope that a sense of the larger project will take root in the reader’s fertile imagination.

The Moment of the Message: A Diatribe

by Dr. Fred McCabe

More than ten thousand books on mathematics and a thousand books on philosophy exist for every one upon information. This is surprising. It must mean something.

I want to give you a message. But first. I have to decide how to deliver the message.

This is that moment.

I can write it down, or perhaps memorise it – reciting it in my head like a mantra, a prayer chanted in the Palace gardens. And later, speaking in your ear, I will repeat it to you. That is, if you want to hear it.

I could send it to you, by post, or telegram. After writing it down I will transmit it to you. Broadcasting on your frequency in the hope that you will be tuned in at the right moment. Speaking your language. Encoded and encrypted, only you will understand it.

I have a message for you and I want you to receive it. But first. I have to decide what the message is.

This is that moment:

This is the moment of the message

From the earliest days of information theory it has been appreciated that information per se is not a good measure of message value. The value of a message appears to reside not in its information (its absolutely unpredictable parts) but rather in what might be called its redundancy—parts predictable only with difficulty, things the receiver could in principle have figured out without being told, but only at considerable cost in money, time, or computation. In other words, the value of a message is the amount of work plausibly done by its originator, which its receiver is saved from having to repeat.
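A toy illustration of that idea (the particular numbers are invented for the example): the originator does an expensive factoring search, and the message, the two factors, lets the receiver confirm the result with a single multiplication, saving the entire search.

```python
def factor_by_trial_division(n):
    """The originator's work: find the smallest factor the slow way."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself was prime

# The originator grinds through roughly 100,000 trial divisions...
n = 104723 * 104729  # product of two primes
p, q = factor_by_trial_division(n)

# ...but the receiver of the message (p, q) checks it in one step.
assert p * q == n
print(p, q)
```

The message carries only a few digits of "information," yet its value is the whole search it spares the receiver, which is the redundancy-as-value point made above.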

This is the moment my water arrived at room temperature

The term enthalpy comes from the Classical Greek prefix en-, meaning "to put into", and the verb thalpein, meaning "to heat".

For a simple system, with a constant number of particles, the difference in enthalpy is the maximum amount of thermal energy derivable from a thermodynamic process in which the pressure is held constant.
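In standard symbols (textbook thermodynamics, not quoted from McCabe's notes): enthalpy H combines internal energy U with a pressure-volume term, and at constant pressure its change equals the heat absorbed,

```latex
H = U + pV,
\qquad
\Delta H = \Delta U + p\,\Delta V = Q_p
\quad \text{($p$ held constant).}
```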

This is the moment the wafer became the body of Christ

The Roman Catholic Church got itself into a bit of a mess. Positing God as the victim of the sacrifice introduced a threshold of undecidability between the human and the divine. The simultaneous presence of two natures, which also occurs in transubstantiation, when the bread and wine become the body and blood of Christ, threatens to collapse the divine into the human; the sacred into the profane. The question of whether Christ really is man and God, of whether the wafer really is bread and body, falters between metaphysics and human politics. The Pope, for all his failings, has to decide the undecidable.

This is the moment black lost the game

A ko fight is a tactical and strategic phase that can arise in the game of go.

Some kos offer very little gain for either player. Others control the fate of large portions of the board, sometimes even the whole board, and the outcome of those kos can determine the winner of the game. For this reason, finding and using ko threats well is a very important skill.

This is the moment Robinson Crusoe becomes the first, English language novel

According to Abel Chevalley, a novel is “a fiction in prose of a certain extent,” defining that “extent” as over 50,000 words. Some critics distinguish the novel from the romance (which has fantastic elements), the allegory (in which characters and events have political, religious or other meanings), and the picaresque (which has a loosely connected sequence of episodes). Some critics have argued that Robinson Crusoe contains elements of all three of these other forms.

This is the moment Sarah Connor takes control

A paper clip is usually a thin wire in a looped shape that takes advantage of the elasticity and strength of the materials of its construction to compress and therefore hold together two or more pieces of paper by means of torsion and friction. Some other kinds of paper clip use a two-piece clamping system.

In fiction, a paper clip often takes the place of a key as a means of breaking and entering, or, in Sarah Connor’s case, as a means of escape.

This is the moment they found the missing piece of DNA

In northern Spain 49,000 years ago, 11 Neanderthals were murdered. Their tooth enamel shows that each of them had gone through several periods of severe starvation, a condition their assailants probably shared. Cut marks on the bones indicate the people were butchered with stone tools. About 700 feet inside the cave, a research team excavated 1,700 bones from that cannibalistic feast. Much of what is known about Neanderthal genetics comes from those 11 individuals.

This is the moment Bill Clinton lied (to himself)

A microexpression is a brief, involuntary facial expression that appears according to the emotions being experienced. Microexpressions usually occur in high-stakes situations, where people have something to lose or gain. Unlike regular facial expressions, they are difficult to fake. They express the seven universal emotions: disgust, anger, fear, sadness, happiness, surprise, and contempt, and they can occur in as little as 1/25 of a second.

This is the moment I strained my iris

Idiopathic is an adjective used primarily in medicine meaning arising spontaneously or from an obscure or unknown cause. From Greek idios (one's own) and pathos (suffering), it means approximately "a disease of its own kind."

This is the moment everything changed

In ordinary conversation, everything usually refers only to the totality of things relevant to the subject matter. When there is no expressed limitation, everything may refer to the universe or the world.

Every object and entity is a part of everything, including all physical bodies and in some cases all abstract objects. Everything is generally defined as the opposite of nothing, although an alternative view considers "nothing" a part of everything.

This is the moment of another message

In information theory the value of a message is calculated by the cost it would take to repeat or replace the work the message has done.

One might argue that a message’s usefulness is a better measure of value than its replacement cost. Usefulness is an anthropocentric concept that information theorists find difficult to conceptualise.
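The standard Shannon formulation prices a message by its improbability: the rarer the message, the more bits it costs to encode, and hence to repeat or replace. A minimal sketch of that measure in Python (the function name and example probabilities are illustrative, not from the text):

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information of an event with probability p, in bits.

    Rarer messages carry more information: they cost more bits
    to encode, and hence more to repeat or replace.
    """
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

# A fair coin flip resolves one bit of uncertainty...
print(self_information(0.5))       # 1.0
# ...while a 1-in-1024 event costs ten bits to report.
print(self_information(1 / 1024))  # 10.0
```

Note that this measure says nothing about usefulness; it prices only surprise, which is precisely the anthropocentric gap the passage gestures at.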

]]>
Sun, 11 Jul 2010 21:25:00 -0700 http://www.3quarksdaily.com/3quarksdaily/2010/07/a-diatribe-from-the-remains-of-dr-fred-mccabe.html
<![CDATA[Inside Code: A Conversation with Dr. Lane DeNicola and Seph Rodney]]> http://www.3quarksdaily.com/3quarksdaily/2010/06/inside-code-a-conversation.html
posted by Daniel Rourke

A couple of weeks ago I was invited to take part in a panel discussion on the London-based arts radio station Resonance FM. It was for The Thread, a lively show that aims to use speech and discussion as a tool for research, opening up new and unexpected angles through the unravelling of conversation.

The Thread's host, London Consortium researcher Seph Rodney, and I were lucky enough to share the discussion with Dr. Lane DeNicola, a lecturer and researcher in Digital Anthropology from University College London. We talked about encoding and decoding, about the politics of ownership and the implications for information technologies. We talked about inscriptions in stone, and the links we saw between the open-source software movement and genome sequencing.

Here is an edited transcript of the show, but I encourage you to visit The Thread's website, where you will shortly find a full audio recording of the conversation. The website also contains information about upcoming shows, as well as a rich archive of past conversations.

Inside Code: Encoding and decoding appear in the contemporary context as a fundamental feature of technology, in our use of language and in our social interactions, from HTML to language coding and literary symbolism. How, and through what means, do people encode and decode?

Creative Commons License This transcript is shared under a Creative Commons License

The Rosetta Stone

Seph Rodney: I wanted to start off the conversation by asking both my guests how it is that we get the kind of literacy that we have to decode writing. It seems to me that it’s everywhere, that we take it for granted. It seems that there’s a kind of decoding that happens in reading, isn’t there?

Lane DeNicola: Yes. I would say that one of the more interesting aspects of that is the material consequences. Literacy before was largely a matter of human knowledge, an understanding of a language; all the actual practice involved was a surface to mark on and an instrument to do the marking. Today, a great deal of the cultural content in common circulation involves technologies that are considerably more complex than a simple writing instrument. Things that individuals don’t really comprehend in the same way.

Seph: What are the technologies that are more complex? What’s coming to my mind is computer code.

Lane: Exactly. Apple’s Garage Band might be one example, these tools that many of us encounter as final products on YouTube. One of the things on the new program at UCL we have tried to give a broad exposure to is exactly how much communicating people are doing through these new forms, and how they take the place in some instances of more traditional modes of communication.

Seph: You’re calling it communication, and one of the things that occurred to me after talking to Daniel, and exchanging a few emails, was that he calls writing, at least, a system of exchange. I was thinking, wouldn’t that in other contexts be called communication, and maybe ten years ago we would have called it transmission? But why is it exchange for you?

Daniel Rourke: I just have a problem with the notion of communication because of this idea of passing on something which is mutual. To use the word exchange, for me, takes it down a notch almost: I am passing something on, but I am not necessarily passing on what I intend to pass on. To take it back to the idea of a writing system, the history of writing wasn’t necessarily marks on a page. The technologies that emerged from, say, Babylonia – little cones of clay with markings on the outside – said just as much about the body and about symbolic notions as they did about what the marks were meaning to say. So that’s why I use exchange, I think. It opens up the meaning a bit.

Seph: Yeah. It doesn’t presume that there is a person transmitting and a person that’s receiving, necessarily? And it also says something about, what I thought was really fascinating, that there is so much more in the object than just the markings on a page. About how the materials tell us something about that particular age, that particular moment in history.

Lane: Yeah. Even in a contemporary context it may have been the case that the early days of the web were all about hypertext, but a great deal of what you call ‘exchange’ is happening today in other forms. How are you going to qualify a group of people playing World of Warcraft simultaneously in a shared virtual space? Calling that communication is a little bit limiting. In fact it is experienced much more as a joint space, or an exchange of things, more than simple information. It can be thought of as an exchange of experience, or of virtual artefacts, for example.

Seph: That can happen certainly in simulated game play, but it also happens in the decoding of texts, objects that come to us from antiquity. There is all this material to be decoded that’s wrapped up in the artefacts. And how much we decode, and what we decode, has something to do with our moment in time.

Daniel: I think it might be worth picking an example out of the air, when we are talking about this.

Seph: OK

Daniel: I’ve become fascinated by the archive of Henry Folger, a collector who became obsessed with collecting everything about Shakespeare he could get his hands on. This was in the 1920s and 30s, I think. At the time there was a real need for every library around the world to have the object itself; whereas today we can digitise and distribute it, back then if you didn’t have access to the thing itself, you didn’t have the thing at all. Henry Folger became known for collecting the same Folio, tens and tens of times. In fact he became a laughing stock because he had tens and tens of copies of the same ‘First Folio’ of Shakespeare. People of course asked him why he needed all these things; surely it was better to distribute them? But after his death, when people came to study all of these Folios gathered in the same place, they found that they gained more information by comparing the Folios that were apparently the same. Comparing the marks that differed across Folios: one printing press had made an error here; this piece of paper had been re-used, and therefore turned over, to print on the other side. By decoding across the many Folios that Folger had collected they managed to piece together information about Shakespeare’s works that could never have been gained if the Folios had been spread across 40 research libraries around the world. They had to be together, next to each other.

Seph: And the fact that there were differences, even though ostensibly there was just repetition, there were differences amongst the repetitions? It brings to mind immediately the Rosetta Stone, an ancient traffic sign that says the same thing in one language and the same thing in another language. A repetition, but clearly a key difference.

Daniel: The thing about the Rosetta Stone is that there was already knowledge of one system, and then they could transfer it, but I suppose it becomes interesting, especially in things like digital anthropology, where similar comparisons need to be made. You sent around this link about an old satellite system that they had managed to get more information from, by comparing and contrasting data, than it was originally intended for?

Nimbus II satellite data: Techno-Archaeology?

Lane: Exactly. There’s almost a sub-genre of information technology today that I think you could call information archaeology. We’ve had several decades with computers and rapid changes in the kinds of technology involved, and as a result we are losing the ability to access nearly as much data as we are collecting in some fields. Hence the idea of people being able to retain older media: in the case you mentioned, there was only one two-inch tape drive left in the world capable of reading the media involved. So the project had garnered some kind of innovation research funding and had done a proof of concept to show that yes, this one device could successfully retrieve the data from what I believe was a 1960s Nimbus satellite. It has strange consequences in fields outside of paleography.

Seph: This obsolescence of objects is strange because it seems like, if the object is the height of technology at the moment, when it becomes obsolete the chances of us being able to decode what was encoded using that technology seemingly nosedive. But paper, stone, these most simple materials – it seems like those things we can continue to decode for ages.

Lane: There are questions here that are quite political in nature, but there are also questions that historians have about how something is going to work, when this proportion of our exchange, our communication and mutual experience, is happening in these forms that require opaque technologies in order to decode them.

Seph: When you say opaque, you mean?

Lane: Something that the average person couldn’t cobble together a simple instance of. Most digital technology, for example. Although there are counter-trends, like the open source software movement.

Seph: Where you create a platform, essentially, that allows anyone who uses it to add to it.

Lane: Exactly. They’ve kind of formalised it at this point. In the early days of open source it was very much about sustaining open exchange of things like source code. They realised fairly quickly that they needed something a little bit stronger, and that was where organisations like Creative Commons came into play. This is an organisation that provides a specific set of licences that legally preserve the right of users of a piece of code to re-mix it, modify it and re-distribute it as they wish. Some people refer to it semi-jokingly as ‘copyleft’. Whether it’s a piece of source code or a piece of data like music and so on, essentially it makes the work available for public re-mixing whilst ensuring attribution of the original author. It’s all built on this paradigm that exchange needs to happen and needs to be retained as a right for everyone.

Seph: Right. In essence exchange needs to be broadened out, so that the technology can actually stay viable.

Lane: Yes. Exactly.

Seph: I guess to suggest that for technologies to continue, to not become so obsolete that there is only one piece of equipment in the world that can decode, they need to have a lot of participants.

Daniel: And with open-source, the hierarchy also gets taken out to a degree. You don’t have the guy on the pulpit who can read the Bible and the people down in the church who are listening. With open-source it’s the people down in the church, basically, who control the code. As much as it lives, it evolves and is successfully passed on, rather than being decided by some authority. I don’t mean to build a figure-head here, but a lot of code is owned by corporations...

Lane: We won’t name any names.

Daniel: No.

Seph: Would we get in trouble for that? Of course this is the thing that has gotten Microsoft in a bit of trouble, right, with the EU? They made moves, allegedly, with their software that locks out certain people and locks in certain add-ons and software that must be used with Windows. It seems to be an effort at control, right? I’m not sure how this connects to literacy, but if you are controlling or trying to control how much your information disseminates you are making the opposite move from what we have been talking about.

Daniel: I think there is a comparison to be made. I’m thinking in terms of the difference between the French language and the English language. Every year the French authorities come together to decide what new words will be accepted into the French language, whereas English has always been allowed to bloom and blossom. Of course there are benefits to both of those. Microsoft controlling its source code means that when people buy a PC it’s going to work, because all the software and hardware has been designed by the same company. Anyone who has had to go into a lecture theatre and wait 20 minutes whilst the person at the front figures out how things plug in and why it’s not working knows the alternative. That’s one of the problems with open-source. So there are benefits to both: with open-source we can all partake in the code, but we have to forego some kind of standardisation.

Seph: It’s interesting that in writing, and I don’t know if this is true further afield from writing like computer code, that there’s this impetus to limit who has a certain kind of literacy or who has the power to decode and encode. It seems for writing that there doesn’t seem to be those kinds of limitations?

Lane: We haven’t brought up the term encryption; there are certainly situations where an individual wants to preserve a text, but only maintain a limited kind of access.

Seph: One of the complaints people make about ‘high theory’, especially in literary studies, is that the language is so coded that the average person, if there is such a thing, has a hard time making heads or tails of it. A gate is being set up where you say: well, you have to know this much to come through.

Daniel: I think maybe looking at the system involved is important. With theory, do you want to argue that it’s a closed system? That universities foreground their own existence by perpetuating this coded language that we all exchange with each other, through which we get funding opportunities and hold conferences?

Seph: I’m not sure I would go as far as to say it’s closed, it’s restricted.

Daniel: But it does open out at certain points. I do think it’s important for people in academia to see their work in its practical means, but whether that has anything to do with the authority of the page or the authority of speech, I am not sure.

Lane: This is making me recall some of the anthropological work that I have read on magical writing. Michael Taussig, for example, authored a book on the magic of the state. There is a whole genre of writing on writing practice and its association, in a number of cultures and for millennia, with magic and magical power. It’s commonly acknowledged, enough that it’s almost a joke, that there’s a similar paradigm in the minds of a lot of programmers. That is, they have an esoteric, a kind of arcane, knowledge, and the literacy involved is sometimes associated with a specific language, but just as often with abstract programming principles. The exclusivity of that kind of writing is something that can bind them as a community. I have seen that many times first hand, and there have been revealing things written on that too, mirroring the tiny Melanesian communities that practise this kind of magical writing.

Seph: What does magical writing look like?

Lane: The term refers to a number of different phenomena. There’s a colleague of mine in the States who wrote about a very small community that kept track of its dead by writing their names in a book. There were repercussions to not having a particular ancestor’s name written in the book; it had consequences that were woven into the culture. There was a specific person who was allotted the responsibility of writing the names in the book. You don’t even need to look that far afield. European traditions exist, for example, where spell-casting abilities get traced in one form or another to the inscription of sigils.

Seph: Sigils?

Lane: Iconographic runes for example, proto-lettering. But it’s the whole process of representation that people see as a magical human capacity. This idea of transforming thought into a material form.

Seph: And that dovetails with your research Daniel?

Daniel: I’d like to think so. I’m thinking of Walter Benjamin and his short essay on Mimesis. He tries to go back and pick apart what reading was. That before we were reading letters we were reading the world, in a sense. When you sacrificed an animal you would ‘read’ the entrails and you could say whether it was going to be a good season. That’s the kind of magic capacity, to see patterns in the world, that at that point we would have thought had been coded by God or nature for us to find and pick apart. It’s only a small leap from that to saying, nature has given us the entrails to read, well what if I make this mark and I say this mark represents the rain or something. Then you’ve got the step towards the rune or the hieroglyph.

Seph: It’s a huge step that we make when we do that, when we take a mark and say this represents the animal, what do you think that allows us to do?

Daniel: What it forces us to do is to separate the world from ourselves, or ourselves from the world, to some extent. Perhaps when reading the entrails we don’t distinguish as much as we do when we read a mark on a page what meaning is and what world is, seeing them inherent in the same moment. To write something on a page and say it represents love or my name, suddenly our symbolic notions are pushed one step further, we are distinguishing ourselves from nature, from the world around us, from the language that we speak.

Seph: It sounds like the bad part of that is that we become more abstracted, that we begin the process of abstracting ourselves from ourselves. Saying, I can be represented by this stick figure, or this name in a ledger somewhere, or even represented by a statistic. But there’s got to be a good part as well.

Lane: In the field that I come from they often refer to writing as the original technology, and discuss Western civilisation as predicated in large part on writing and the written word. There’s a whole, in part false, but compelling dichotomy between cultures that privilege writing in some form and cultures that are primarily verbal, where stories are passed down from one generation to the next. There are clear advantages, depending on your stance. The ability to have texts preserved in a way that limits the latitude of re-interpretation over time has very important consequences. Like you say, there is a disconnection happening, so that a given sequence of thoughts or articulations is taken away from its author, persists in time, and is looked at and forced into being interpreted in a new kind of way. That is the trade-off.

Seph: So encoding things and reading that code allows us to gain distance from things. It allows us to move away from them symbolically, and move away from them in time, and still in some ways preserve them. Daniel, in one of our emails to each other you had raised this question as to whether at any level of reality coding/decoding stopped working as a paradigm. Do you think there is a point where decoding/encoding doesn’t work anymore?

Craig Venter

Daniel: To ask that question I have to contemporise myself, I have to locate myself in the present day. We’ve been talking about this separation, where the symbol starts to determine how we look at the world. The main paradigm of today would perhaps be the computer, or science, both of which have become very much combined in the science of genetics. In the news recently was the story of the entrepreneurial scientist Craig Venter, who announced to the world that his team had created synthetic life from code on a computer. We could have spent the entire hour talking about the moral implications of this, and the political implications of him presenting this knowledge in the way he did, but underlying it is the very simple notion that life is able to be decoded. That to its very fundamental constituents we can pick it apart. Now, I’m not going to state my opinion – whether I am a materialist, whether I see something more ‘important’ in the world – I don’t know. But there are a lot of implications for free will, and people of religious inclination especially have been up in arms about this announcement. Embedded within it is the idea, from Craig Venter, that the world could be completely picked apart to its constituents, that we could rebuild things from the ground up.

Seph: The way we want to. Absolutely. Not talking about the moral implications, but it seems that one of the things we are risking in synthesising things, life, in this very commercialised, dead on the table sort of way, is we are risking despair.

Daniel: They tried to inject some kind of symbolic value back into this by encoding some words from James Joyce within the DNA of the organism.

Seph: Giving it a literary credibility?

Daniel: Yeah. I don’t know if that’s supposed to show that all scientists have got a literary heart deep within them.

Seph: A humanist side.

Daniel: A headline grabber.

Lane: I read an article on a geneticist in the states who procured some relatively cheap gene sequencing equipment off eBay.

Seph: Really? That’s an amazing sentence. Relatively inexpensive and off eBay!

Lane: Still in the thousands of US dollars, but comparatively pretty cheap. He had done this because he had previously been working for, I think, a large pharmaceutical company and had access to the most advanced equipment, but as a result of leaving the company he didn’t have access to it anymore, and he was interested in a project of his own devising. He has a daughter who has a particular genetic malady and he wanted to sequence her genome with the idea that it could provide basic information for later therapy, potentially. So he, in effect, initiated this do-it-yourself DNA community – if you could call it a community at this point. But in a sense, it’s like open-sourcing gene sequencing. It really muddles that whole question of, on the one hand, a trepidation built into the whole process of manipulating our own genes, but that’s a separate layer from the question of the commercialisation of the process. And the copyrighting of the ‘human text’, so to speak. I think primarily you’re talking about the pharmaceuticals industry as the leading industrial sector with an interest in patenting specific sequences from a genome, for things like targeted drugs. An emerging and exploding new direction for the pharmaceuticals industry. Essentially, you’re talking about the copyrighting of a text.

Daniel: And the ability perhaps to put that online, to upload it to your website and let everybody see it.

Seph: To do what you will with it. The question that comes to my mind is well, then if you do create a kind of, let’s call it a ‘community’, like that, is it the kind of community – one of these I am more comfortable with – that’s like Wikipedia or is it a community like the comments page on YouTube. Do you know what I mean?

Lane: That you get the dregs along with it?

Seph: Yeah. Or an informed, scholarly position.

Daniel: I think in the long run it’s probably much more important that this information is shared among the right parties, but that’s where the question of morals comes up again. We are worried now about terrorists getting hold of radioactive material and making a ‘dirty bomb’. It’s possible that if you can buy a genetic sequencing kit off eBay, then in the next ten to twenty years people will be able to organise and design bacteria or viruses that could specifically attack certain ethnicities. These are some of the possibilities that the decoding of the genome opens up for the future.

Seph: Who gets access to the encoding scheme then, seems like a really important question?

Lane: Not just from the commercial angle. Usually the way the discussion of copyrighted texts begins is with the interest in motivating creative work. So the major content providers, whether it’s television production studios or what have you, their argument is if you don’t have incentives for people to produce creative work then you’re not going to have the same calibre of work being done. This is tantamount to an argument for some kind of mechanism being in place to preserve texts as property, in a kind of abstract way. That’s more at the commercial level, but there are other parallel concerns as well.

Seph: In other words, incentives like the author getting some sort of payment or remuneration at some point for her work or efforts. Isn’t this the issue with Craig Venter? He was working with the major operation, a government-funded project, that began looking to decode the genome, and then he broke off from it, saying that they were doing it too slowly, that he knew a faster way to do it. He got funding and, because he is obviously a very clever man, made it commercially viable.

Daniel: He didn’t quite beat them though. I think it was very close.

Seph: His model is, you need to make it commercially viable to get investors. For it to work you essentially need to make a profit. To go back to what we were talking about at the beginning, one of the things that earlier technologies in some ways avoid is precisely that paradigm of commercialism. Presumably when they made marks in rocks or on papyrus they weren’t doing it because that was their wage earning job?

Daniel: There is a huge hierarchy in text-technologies. I mean, every Egyptian Pharaoh had a scribe. The workers that built the pyramids wouldn’t have been able to read the hieroglyphs necessarily. So there have always been hierarchies within textual technologies. We think of text now as the freest system of communication that there is, but in pre-literate societies where education wasn’t available to everybody the text was just a mass of squiggles on a page that only the priest had access to. In that very move, the church could claim authority over the text, because only they could read it out. I don’t know if we should be mapping that directly onto Craig Venter and his commercial enterprise, but there has always been an attempt to gain control of information technologies from their outset. Always.

Seph: It seems that one of the things we have been saying is that that effort to gain control over technology, and to limit who gains access to literacy in that technology, is not necessarily a bad thing?

The Printing Press

Lane: Right. I am kind of compelled to mention, as we are here, that copyright as it’s known began in London. Book publishing, and the right to reproduce a text, was granted by the crown. The whole idea that a text in the abstract – rather than the copies of a text – could be property began here, when the major book publishers in London were beginning to suffer a drop in their profits because other printing presses were opening up. The printing press was proliferating and as a result people were able to produce things much cheaper. The publishers realised that this was going to cause them a problem, that the authors they were compensating were not going to enjoy any of the money from their works. When copyright came around, I think around the early to mid 1800s, it was about preserving the creative incentives for the authors, and a limit was put on the amount of time the copyright could be enjoyed by the publishers. I believe it was originally around 20 years, but that’s gone out of the window since then. Certainly in the States it has been extended, especially in the case of Walt Disney, to beyond 95 years.

Seph: Property – and by that we mean private property – is in itself not a thing, but a relation, a community. It is only private property because I recognise your right to have that pen next to you, to own it.

Lane: Right.

Daniel: I think the Walt Disney example is an important one. Not only do they extend the ownership of their icon Mickey Mouse every 20 years or so, but isn’t it also the case that all the Disney films were borrowed from someone? Taking the stories of others and using them themselves. But as soon as any outsider wanted to use the image of Mickey Mouse in an art object, or in any way at all, they slammed down on them as hard as they could. So there are different degrees of ownership, and community, depending on how important you see your own ownership as being.

Seph: It’s funny that in talking about encoding that we’ve gone from the text, to genetics, to moral implications, to commercialism and ownership. I suppose ownership is a good place to get to because of the political implications of encoding; of what it is to have the ability to encode something and then again decode it, to make it make sense, to share it; to allow it to proliferate. Maybe one of the great strengths about writing is that it is not under control. It really is everywhere, and in everything. Is that going too far?

Daniel: I wouldn’t want to claim that writing is any different from, say, a digital code. Not everybody can code in Perl, for instance, but everybody can now take a YouTube video, convert it, using a program, into another format, add some titles on the bottom saying “this is my daughter, 1995”, and then send it to someone else. I don’t understand the history of these marks on the page, why the letter ‘e’ is the shape it is, or what, in Chinese for example, is the history of a given ideographic symbol. I don’t understand that, but I have the power to use it for my own means, to make it express. I think that is the same in all of these technologies: when they get to the public, the public will use them at different levels of encoding, in a sense.

Seph: And that seems to somehow ensure that the technology will continue.

Daniel: Yes.

Lane: Yes.

Creative Commons License This transcript is shared under a Creative Commons License
posted by Daniel Rourke
]]>
Sun, 13 Jun 2010 21:25:00 -0700 http://www.3quarksdaily.com/3quarksdaily/2010/06/inside-code-a-conversation.html
<![CDATA[Could a Mini Horse Be Bred Small Enough to Fit in Your Palm?]]> http://www.wired.com/wiredscience/2010/05/miniature-horses/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+wiredscience+%28Blog+-+Wired+Science%29

The world’s smallest horse was born in late April on a farm in New Hampshire. Weighing in at 6 pounds at birth, Einstein appears to have beaten the previous record holder by three whole pounds.

But Einstein probably won’t hold his place in the Guinness Book of World Records forever, because there may be no limit to how tiny we can make our horses, said equine geneticist Samantha Brooks of Cornell University. But to get teacup horses will take many generations of breeding.

“In the last 50 years, breeders have made very good progress at making a very small horse, but they periodically hit these speed bumps,” said Brooks. “It takes a while to work them out so that you end up with a horse that not only fits in the palm of your hand but is happy and healthy.”

In recent years, the genetic underpinnings of height and size in mammals have generated increasing interest from scientists. In 2007, genetics researchers made the surprising finding that a single gene plays a very large role in reg

]]>
Tue, 04 May 2010 02:41:00 -0700 http://www.wired.com/wiredscience/2010/05/miniature-horses/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+wiredscience+%28Blog+-+Wired+Science%29
<![CDATA[Raising Neanderthals: Metaphysics at the Limits of Science]]> http://www.3quarksdaily.com/3quarksdaily/2010/03/raising-neanderthals-metaphysics-at-the-limits-of-science.html

A face to face encounter, devoid of the warm appeal of flesh. The eyes are glass, a cold blue crystal reflects the light in a way real eyes never would. A muzzle of hair, perhaps taken from a barbershop floor or the hind quarters of an animal. The painted scalp peeks through the sparse strands: there is nothing here one might caress with fumbling fingers, or, millennia ago, pick between to lovingly tease out a louse or mite. The figure balances uneasily on stumps for legs. Its waxen surface bears no resemblance to skin. It is a shade drained of living colour. In another shortened limb the figure holds a wooden spear, with a plastic point designed to take the place of the authentic stone tip. Under its beaten brow this creature forever stands. He is a spectacle, a museum attraction. He is not human, he is 'other'. He is not man, he is Neanderthal.

Encounters like this, hashed together from memories that span my childhood and adult years, represent the closest many of us will come to meeting a Neanderthal. Encounters built upon outdated science and the desire of museums to authenticate experiences which, in reality, are as far from 'true' anthropology as those glass eyes are from windows on the soul. In a recent Archaeology.org article a question was put forward that made me think again about these encounters:

Should We Clone Neanderthals? I could not help but probe the proposition further.

Neanderthal and human skeletons compared

In my own lifetime our understanding of these absolute 'others' has gone through several revolutions. What once were lumbering apes, incapable of rational thought, speech or the rituals of religious reverence, have become our long lost evolutionary cousins. Research from various quarters has shown that not only were Neanderthals quite capable of vocal expression, but in all likelihood they lived a rich, symbolic life. They had bigger brains than we did, or do, and were probably burying their dead with appeal to an afterlife 50,000 years before our ancestors left Africa. They cared for their young, lived in well established social groups and, apart from their prominent brow and less mobile, stocky build, resembled humans in most other aspects. More recent evidence seems to show that far from being a completely separate species, it is quite possible that ancient humans interbred with Neanderthals. This astounding revelation, if it were ever verified, would mean that many of us – if not every one of us – carry within our genetic make-up a living memory of Neanderthal heritage.

But Neanderthals are more than scientific curiosities. They are the embodiment of the 'other', a reflective surface via which the human race may peer upon itself. Human myth is filled with lumbering creatures, not quite human but every bit an echo of our deepest fears, our vanities, our failings, our memories prone to fade in time. Shakespeare's Caliban, the feral beast of Prospero's burden, and William Blake's depiction of Nebuchadnezzar, the Babylonian king who myth says was reduced to animal madness, are only two in a long list of sub-human characters. Along with these mythic creatures the Neanderthal has achieved the status of a linguistic archetype, carrying the weight of our inhumanity when admitting our limitations is too much to bear. For a very long time after their discovery Neanderthals were named as the very embodiment of our ineptitudes. To be violent, or brutally instinctive, was to be Neanderthal. Neanderthals stood as a fiendish remnant of the days before language, fire or social grace, before the borders between man and nature had been breached by the gift of free-will – a gift bequeathed to us, and not to them.

This vague notion of a 'gift' came to me after reading the article about the possibility of cloning Neanderthals. At first I read with a certain distance, the same reading I might have given to an article about cloning dodos, mammoths or dinosaurs. Soon though, it was clear that "bringing Neanderthals back from the dead" was a far more metaphysically slippery statement than similar ones about long extinct birds or mammals.

Although concise and engaging, the article bounds between two wildly opposed positions when it comes to representing Neanderthals. On the one hand the scientists interviewed seem to understand Neanderthals as entities worthy of 'human' rights and freedoms:

"We are not Frankenstein doctors who use human genes to create creatures just to see how they work." Noonan agrees, "If your experiment succeeds and you generate a Neanderthal who talks, you have violated every ethical rule we have," he says, "and if your experiment fails...well. It's a lose-lose." Other scientists think there may be circumstances that could justify Neanderthal cloning.

"If we could really do it and we know we are doing it right, I'm actually for it," says Lahn. "Not to understate the problem of that person living in an environment where they might not fit in. So, if we could also create their habitat and create a bunch of them, that would be a different story."

Extract from article: Should We Clone Neanderthals?

On the other hand, much of the article is made up of insights into piecing together ancient – and therefore fragmented – DNA sequences, and the benefits a Neanderthal clone might bring to human medicine:

“Neanderthal cells could be important for discovering treatments to diseases that are largely human-specific, such as HIV, polio, and smallpox, he says. If Neanderthals are sufficiently different from modern humans, they may have a genetic immunity to these diseases. There may also be differences in their biology that lead to new drugs or gene therapy treatments.”

Extract from article: Should We Clone Neanderthals?

The article does a very good job of considering the moral implications of these outcomes, and on many levels I agree with them. But when it came to the metaphysical significance of cloning a Neanderthal the article, like so many other articles about science, stayed largely silent.

Before I come back to the notion of the 'gift' I mentioned before, I'd like to reconsider the article with a few simple questions:

1. What would it mean to give life to an extinct creature, let alone one whose mental capacities are as varied and dexterous as our own?

Neanderthal burial

The likelihood is that early human groups had a part to play in the extinction of our closest cousins, as we still do in the demise of many other, less human, creatures. Our propensity to distinguish ourselves from the natural world that supports us is one intimately bound to our notions of identity, of cause and effect and – perhaps most fundamentally – of spiritual presence. Where the Neanderthal differs from other extinct species is in the status of 'other'. By being so similar in kind to us the Neanderthal cannot help but become a mirror for the human race. Of course it is impossible to know how early humans and Neanderthals reacted to each other all those centuries ago. But the outcome would suggest that humans did not run to help their evolutionary neighbours as their life slipped away. To us they must have seemed both alien and kin. Something to fear, not because of their absolute difference, but because deep down we knew how they viewed the world. And if it was anything like the way we did, we were better rid of them.

Furthermore, to bring Neanderthals into the world as scientific curiosities – which they would be by necessity – is to deny them the status of 'human' from the beginning. Not only would a Neanderthal grown in a test-tube be the embodiment of 'other', they would also be a walking, talking genetic tool-kit, replete with the needs of a person, but the status of a slave.

2. How much of what we decode and reconstruct is just DNA?

Genetics is an impressively successful science, giving us insights into the living, breathing world at a range of detail unknown to previous generations. But I don't think it is too cynical of me to throw caution upon its 'truth' value. Genetics is not a truth about the world. Instead, it is a highly paradigmatic model that scientists use to understand abstractions of reality far removed from the everyday. Of course this gross generalisation is worthy of a long discussion in itself, and one better balanced by whole swathes of research designed to outline the weak points of the genetic paradigm, as well as advance our understanding of it. When the issue at hand is better medical care, or the development of advanced crops for the third world, we should keep these issues in mind, moving forwards cautiously as long as the benefits outweigh our reservations. When it comes to bringing to life an entire species, and one in whose original demise we probably played a part, a blind trust in scientific models is much more likely to lead us into a moral cul-de-sac we may never escape from. Although the article talks at length on the problems associated with cloning (such as birth defects and multiple infant deaths) it fails outright to consider what genetics, and thus 'cloning', actually represents.

Any creature that we did 'raise from the dead' would be as much a result of contemporary scientific models as it was of mother nature.

...and with both those questions in mind, a third:

3. What happens to our vision of ourselves once the deed has been done?

This final point, leaking naturally from the other two, is founded on what seem on the surface overtly metaphysical concerns. But I hasten to show that in applying the rhetoric of 'human rights' to creatures born in a laboratory, nothing but confusion can arise.

It pays here to visit two theorists for whom the questions of the 'gift' and of the 'tool' are highly significant to their philosophies.

From the writings of Georges Bataille we learn that nothing humans make in utility may be given a status above a tool:

“The tool has no value in itself – like the subject, or the world, or the elements that are of the same nature as the subject or the world – but only in relation to an anticipated result. The time spent in making it directly establishes its utility, its subordination to the one who uses it with an end in view, and its subordination to this end; at the same time it establishes the clear distinction between the end and the means and it does so in the very terms that its appearance has defined. Unfortunately the end is thus given in terms of the means, in terms of utility. This is one of the most remarkable and most fateful aberrations of language. The purpose of a tool's use always has the same meaning as the tool's use: a utility is assigned to it in turn and so on...”

Georges Bataille, Theory of Religion

For Jean-Joseph Goux the act of 'giving' is always a statement of otherness:

“The impossibility of return reveals the truth of the gift in separating it from the return and, most of all, in showing it as an act carried out for others. This service toward others must be the only reason for the kind deed.... It is only as a superior level in the gradation of the regimes of giving that the gift without return can be thought. This mode of giving imitates the Gods. We have two extreme positions on the moral scale, “The one who brings kind deeds imitates the Gods; the one who claims a payment imitates the usurers.””

Jean-Joseph Goux, Seneca Against Derrida

To consider the cloning of Neanderthals as an act of utility (i.e. for the benefit their genome would bring to human medicine) is, by definition, to subordinate them to their 'use-value' – denying them outright the status of human and the rights with which that status is associated. On the other hand, to consider the act of cloning as a true 'gift' to the Neanderthals is to push our own status as the 'givers' towards the divine. Either the cloned Neanderthals are tools for us to use as we wish, or their life is their very own, one instantly removed from any right we claim to administer it.

Of course it is impossible to imagine the cloned Neanderthals' 'gift' as being one we would honour unto them without any claim of return. In the modern world such creatures, born in our laboratories, would at the very least be the legal property of the institution that bore them. At the very worst the cloned Neanderthal would grow up under the lights of a thousand television cameras, only to be cut open and dissected in front of the very same zooming lenses when it came of age.

Neanderthal depiction

There is a much deeper problem at play here, one that I believe science has no possibility of solving. Once our technologies are capable of bringing sentient life into existence, whether that be a Neanderthal or a cognitive computer, that very same technology becomes instantly incapable of representing the life it has created. Simply put, it is at this point that scientific rhetoric collapses as a field of enterprise, and only the patterns and considerations of philosophy and religion become relevant. As Bataille and Goux show, the only entity capable of truly giving, asking not for return – the only metaphysical concept capable of acting upon the world without utility – is a divine being, neither of this world nor capable of being represented in it. Not for one moment does this philosophical enquiry suggest that such a being exists; what it does do is draw firmly in the sand a metaphysical line beyond which 'we' cannot cross. It is not that humans won't be technically able to bring such living entities into the world; it is more that, at the very moment we do so, 'we' – as a concept – cease to be. In religious terminology the relationship we might have with such beings is similar to that between the shepherd and his flock. At the moment we bring Neanderthals into our world they become our most significant responsibility, exploding to infinity any notions we may have carried before about our own place in the cosmos.

I have no idea whether I would like to see Neanderthals walking the planet alongside us, or whether their memory should stay that way, long into our future. What I do know is that this philosophical parable is one that science, and the modern humanity it supports, would do well to become more aware of. So often it seems that our scientific rhetoric is incapable of providing us with solid enough foundations for the acts we commit in its name. Perhaps in a world of continued, man-made extinctions, of climate change and ever increasing human populations, science needs a Neanderthal-cloning moment to awaken it to the implications of its continued existence.

“...philosophy seeks to establish, or rather restore, an other relationship to things, and therefore an other knowledge, a knowledge and a relationship that precisely science hides from us, of which it deprives us, because it allows us only to conclude and to infer without ever presenting, giving to us the thing in itself.”

Gilles Deleuze, Desert Islands

]]>
Sun, 21 Mar 2010 22:30:00 -0700 http://www.3quarksdaily.com/3quarksdaily/2010/03/raising-neanderthals-metaphysics-at-the-limits-of-science.html
<![CDATA[Raising Neanderthals]]> http://machinemachine.net/text/ideas/raising-neanderthals

In northern Spain 49,000 years ago, 11 Neanderthals were murdered. Their tooth enamel shows that each of them had gone through several periods of severe starvation, a condition their assailants probably shared. Cut marks on the bones indicate the people were butchered with stone tools. About 700 feet inside El Sidrón cave, a research team including Lalueza-Fox excavated 1,700 bones from that cannibalistic feast. Much of what is known about Neanderthal genetics comes from those 11 individuals. Lalueza-Fox does not plan to sequence the entire genome of the El Sidrón Neanderthals. He is interested in specific genes. “I choose genes that are somehow related to individuality,” he says. “I’d like to create a personal image of these guys.” Extract from: Should we Clone Neanderthals?

Genetics has reduced the organic world to the status of a code. In a gesture that continues the work of Descartes, mankind has separated itself from its own constitution. The mind, or perhaps the self, is merely anchored to the body, rather than reliant on it. Since Descartes we have been able to refer to the organic body as other, and we continue to congratulate ourselves. Genetic sequencing instigates a new kind of dualism: that of body and code. The reduction of living matter to four strands of nucleic acid; the code constitutes the life, yet it is not the life. On the computer screen, or in a sequencing lab, the code floats free from the living, becoming pure information in and of itself. We are now able to refer to the information of the human as other, and again we congratulate ourselves. From Descartes onwards, through Kant and the Enlightenment, philosophy now finds itself at an impasse. By separating the mind and the body, dualism also separated the tools of enquiry by which the holistic ‘human’ could be understood. Man is not of world, man is not even of body: and so it transpires that man is not even of the sequence; the code; the malleable constituent of life itself. This new dualism opens itself through the rhetoric of genetics. Science is now capable of handling the entire history of life as if it were a cut-up text; a freakish maelstrom of free-floating base-pairs mangled in some Burroughs-esque sequencing shredder. To science the sequence maketh the Neanderthal, but it does not constitute mankind. But what of the historicity of those creatures? For the Neanderthal is much more than a genetic cousin, labelled in similitude. The Neanderthal is a symbol; a mythic resonance. Neanderthals are a different category of person, literally lost to the world, but not lost to our memory. In being so close in kind to us they represent the ultimate other. As much creature as human; as much removed as they are imminent.
Do we give them the gift of life by re-sequencing their code? By ushering them into our time through test-tubes and computer simulations? Forgetting for a moment the religious efficacy entailed by this position (by my use of the word ‘gift’), the moral implications alone outnumber the minds available to ponder them. And still not a single metaphysical question is raised. What is it exactly that we think we are cloning?
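The reduction described above can be made quite literal. A minimal, purely illustrative sketch (the sequence below is invented, not real Neanderthal DNA, and the function names are my own) shows how easily software treats life as a cut-up text:

```python
# Illustrative only: genetics handled as string manipulation,
# the "new dualism" of body and code made concrete.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """The code floats free of the body: a pure text operation."""
    return seq.translate(COMPLEMENT)[::-1]

def gc_content(seq: str) -> float:
    """Another reduction: a living fragment summarised as one number."""
    return (seq.count("G") + seq.count("C")) / len(seq)

fragment = "ATGGCGTACGTT"  # hypothetical 12-base fragment
print(reverse_complement(fragment))      # -> AACGTACGCCAT
print(round(gc_content(fragment), 2))    # -> 0.5
```

The point is not the biology but the gesture: twelve bases in, two abstractions out, and nothing of the organism anywhere in between.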

]]>
Thu, 11 Mar 2010 08:35:00 -0800 http://machinemachine.net/text/ideas/raising-neanderthals
<![CDATA[De-constructing 'code' (picking apart its assumptions)]]> http://ask.metafilter.com/mefi/144810

De-constructing 'code': I am looking for philosophical (from W. Benjamin through to post-structuralism and beyond) examinations of 'code'. That both includes the assumptions contained in the word 'code' and any actual objects or subjects that code is connected to - including, but not limited to: computer programming, cyphers, linguistics, genetics etc. I am looking to question the assumptions of 'code'. Perhaps a specific example of a theorist de-constructing the term.

I am currently knee deep in an examination of certain practices and assumptions that have arisen from digital media/medium and digital practice (art and making in the era of data packets and compression-artefacts for example). Through my analysis I wish to investigate the paradigms of text and writing practice (the making of textual arts).

A simple analogy to this process would be looking at dialectic cultures (speech based) from the perspective/hindsight of a grapholectic culture (writing/print based). In a similar way, I want to examine writing, film and their making with the hindsight of digital paradigms.

I am aware of the works of Deleuze, Derrida, Barthes, Genette, Ong, Serres, Agamben etc. but any of their works that deal specifically with 'code' would be very very useful.

I look forward to any pointers you can give me.

]]>
Tue, 02 Feb 2010 06:35:00 -0800 http://ask.metafilter.com/mefi/144810
<![CDATA[The Next Great Discontinuity: The Data Deluge]]> http://www.3quarksdaily.com/3quarksdaily/2009/04/the-next-great-discontinuity-part-two.html

Speed is the elegance of thought, which mocks stupidity, heavy and slow. Intelligence thinks and says the unexpected; it moves with the fly, with its flight. A fool is defined by predictability… But if life is brief, luckily, thought travels as fast as the speed of light. In earlier times philosophers used the metaphor of light to express the clarity of thought; I would like to use it to express not only brilliance and purity but also speed. In this sense we are inventing right now a new Age of Enlightenment… A lot of… incomprehension… comes simply from this speed. I am fairly glad to be living in the information age, since in it speed becomes once again a fundamental category of intelligence. Michel Serres, Conversations on Science, Culture and Time

(Originally published at 3quarksdaily · Link to Part One) Human beings are often described as the great imitators: We perceive the ant and the termite as part of nature. Their nests and mounds grow out of the Earth. Their actions are indicative of a hidden pattern being woven by natural forces from which we are separated. The termite mound is natural, and we, the eternal outsiders, sitting in our cottages, our apartments and our skyscrapers, are somehow not. Through religion, poetry, or the swift skill of the craftsman smearing pigment onto canvas, humans aim to encapsulate that quality of existence that defies simple description. The best art, or so it is said, brings us closer to attaining a higher truth about the world that remains elusive from language, that perhaps the termite itself embodies as part of its nature. Termite mounds are beautiful, but were built without a concept of beauty. Termite mounds are mathematically precise, yet crawling through their intricate catacombs cannot be found one termite in comprehension of even the simplest mathematical constituent. In short, humans imitate and termites merely are. This extraordinary idea is partly responsible for what I referred to in Part One of this article as The Fallacy of Misplaced Concreteness. It leads us to consider not only the human organism as distinct from its surroundings, but it also forces us to separate human nature from its material artefacts. We understand the termite mound as integral to termite nature, but are quick to distinguish the axe, the wheel, the book, the skyscraper and the computer network from the human nature that bore them. When we act, through art, religion or with the rational structures of science, to interface with the world our imitative (mimetic) capacity has both subjective and objective consequence. Our revelations, our ideas, stories and models have life only insofar as they have a material to become invested through. 
The religion of the dance, the stone circle and the summer solstice is mimetically different to the religion of the sermon and the scripture because the way it interfaces with the world is different. Likewise, it is only with the consistency of written and printed language that the technical arts could become science, and through which our ‘modern’ era could be built. Dances and stone circles relayed mythic thinking structures, singular, imminent and ethereal in their explanatory capacities. The truth revealed by the stone circle was present at the interface between participant, ceremony and summer solstice: a synchronic truth of absolute presence in the moment. Anyone reading this will find truth and meaning through grapholectic interface. Our thinking is linear, reductive and bound to the page. It is reliant on a diachronic temporality that the pen, the page and the book hold in stasis for us. Imitation alters the material world, which in turn affects the texture of further imitation. If we remove the process from its material interface we lose our objectivity. In doing so we isolate the single termite from its mound and, after much careful study, announce that we have reduced termite nature to its simplest constituent. The reason for the tantalizing involutions here is obviously that intelligence is relentlessly reflexive, so that even the external tools that it uses to implement its workings become ‘internalized’, that is, part of its own reflexive process… To say writing is artificial is not to condemn it but to praise it. Like other artificial creations and indeed more than any other, it is utterly invaluable and indeed essential for the realisation of fuller, interior, human potentials. Technologies are not mere exterior aids but also interior transformations of consciousness, and never more than when they affect the word. Walter J. Ong, Orality and Literacy

Anyone reading this article cannot fail to be aware of the changing interface between eye and text that has taken place over the past two decades or so. New Media – everything from the internet database to the Blackberry – has fundamentally changed the way we connect with each other, but it has also altered the way we connect with information itself. The linear, diachronic substance of the page and the book has given way to a dynamic textuality blurring the divide between authorship and readership, expert testament and the simple accumulation of experience. The main difference between traditional text-based systems and newer, data-driven ones is quite simple: it is the interface. Eyes and fingers manipulate the book, turning over pages in a linear sequence in order to access the information stored in its printed figures. For New Media, for the digital archive and the computer storage network, the same information is stored sequentially in databases which are themselves hidden to the eye. To access them one must submit a search or otherwise run an algorithm that mediates the stored data for us. The most important distinction should be made at the level of the interface, because, although the database as a form has changed little over the past 50 years of computing, the Human-Computer Interfaces (HCI) we access and manipulate that data through are always passing from one iteration to another. Stone circles interfacing the seasons stayed the same, perhaps being used in similar rituals over the course of a thousand years of human cultural accumulation. Books, interfacing text, language and thought, stay the same in themselves from one print edition to the next, but as a format, books have changed very little in the few hundred years since the printing press. The computer HCI is most different from the book in that change is integral to its structure.
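The contrast drawn above, between turning pages in linear sequence and letting a hidden index mediate our access, can be sketched in a few lines of code (a toy illustration of my own devising, not a description of any real database system):

```python
# A toy contrast between grapholectic and database access.
# The 'book' and 'index' below are illustrative inventions.
book = ["the stone circle", "the printed page", "the data cloud"]

# The book: a linear, diachronic pass, one page after another.
def read_for(term, pages):
    return [i for i, page in enumerate(pages, start=1) if term in page]

# The database: an inverted index mediates; the store itself stays hidden.
index = {}
for i, page in enumerate(book, start=1):
    for word in page.split():
        index.setdefault(word, set()).add(i)

print(read_for("page", book))  # the reader's scan must touch every page
print(index["page"])           # the interface answers without a scan
```

Both return the same answer; what differs is the interface through which the answer arrives, which is precisely the distinction at stake here.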
To touch a database through a computer terminal, through a Blackberry or iPhone, is to play with data at incredible speed: Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition… Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to — well, at petabytes we ran out of organizational analogies. At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics… This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves. Wired Magazine, The End of Theory, June 2008

And as the amount of data has expanded exponentially, so have the interfaces we use to access that data and the models we build to understand it. On the day that Senator John McCain announced his Vice Presidential Candidate, the best place to go for an accurate profile of Sarah Palin was not the traditional media: it was Wikipedia. In an age of instant, global news, no newspaper could keep up with the knowledge of the cloud. The Wikipedia interface allowed knowledge about Sarah Palin from all levels of society to be filtered quickly and efficiently in real-time. Wikipedia acted as encyclopaedia, newspaper, discussion group and expert all at the same time, and it did so completely democratically, in the absence of a traditional management pyramid. The interface itself became the thinking mechanism of the day, as if the notes every reader scribbled in the margins had been instantly cross-checked and added to the content. In only a handful of years the human has gone from merely dipping into the database to becoming an active component in a human-cloud of data. The interface has begun to reflect back upon us, turning each of us into a node in a vast database bigger than any previous material object. Gone are the days when clusters of galaxies had to be catalogued by an expert and entered into a linear taxonomy. Now, the same job is done by the crowd and the interface, allowing a million galaxies to be catalogued by amateurs in the same time it would have taken a team of experts to classify a tiny percentage of that amount. This method of data mining is called ‘crowdsourcing’ and it represents one of the dominant ways in which raw data will be turned into information (and then knowledge) over the coming decades. Here the cloud serves as more than a metaphor for the group-driven interface, becoming a telling analogy for the trans-grapholectic culture we now find ourselves in.
To grasp the topological shift in our thought patterns it pays to move beyond the interface and look at a few of the linear, grapholectic models that have undergone change as a consequence of the information age. One of these models is evolution, a biological theory the significance of which we are still in the process of discerning:

If anyone now thinks that biology is sorted, they are going to be proved wrong too. The more that genomics, bioinformatics and many other newer disciplines reveal about life, the more obvious it becomes that our present understanding is not up to the job. We now gaze on a biological world of mind-boggling complexity that exposes the shortcomings of familiar, tidy concepts such as species, gene and organism. A particularly pertinent example [was recently provided in New Scientist] - the uprooting of the tree of life which Darwin used as an organising principle and which has been a central tenet of biology ever since. Most biologists now accept that the tree is not a fact of nature - it is something we impose on nature in an attempt to make the task of understanding it more tractable. Other important bits of biology - notably development, ageing and sex - are similarly turning out to be much more involved than we ever imagined. As evolutionary biologist Michael Rose at the University of California, Irvine, told us: “The complexity of biology is comparable to quantum mechanics.” New Scientist, Editorial, January 2009

As our technologies became capable of gathering more data than we were capable of comprehending, a new topology of thought, reminiscent of the computer network, began to emerge. For the mindset of the page and the book, science could afford to be linear and diachronic. In the era of The Data Deluge science has become more cloud-like, as theories for everything from genetics to neuroscience, particle physics to cosmology have shed their linear constraints. Instead of seeing life as a branching tree, biologists are now speaking of webs of life, where lineages can intersect and interact, where entire species are ecological systems in themselves. As well as seeing the mind as an emergent property of the material brain, neuroscience and philosophy have started to consider the mind as manifest in our extended, material environment. Science has exploded, and picking up the pieces will do no good. Through the topology of the network we have begun to perceive what Michel Serres calls ‘The World Object’, an ecology of interconnections and interactions that transcends and subsumes the causal links propounded by grapholectic culture. At the limits of science a new methodology is emerging at the level of the interface, where masses of data are mined and modelled by systems and/or crowds which themselves require no individual understanding to function efficiently. Where once we studied events and ideas in isolation we now devise ever more complex, multi-dimensional ways for those events and ideas to interconnect; for data sources to swap inputs and outputs; for outsiders to become insiders. Our interfaces are in constant motion, on trajectories that curve around to meet themselves, diverge and cross-pollinate.
Thought has finally been freed from temporal constraint, allowing us to see the physical world, life, language and culture as multi-dimensional, fractal patterns, winding the great yarn of (human) reality:

The advantage that results from it is a new organisation of knowledge; the whole landscape is changed. In philosophy, in which elements are even more distanced from one another, this method at first appears strange, for it brings together the most disparate things. People quickly criticize me for this… But these critics and I no longer have the same landscape in view, the same overview of proximities and distances. With each profound transformation of knowledge come these upheavals in perception. Michel Serres, Conversations on Science, Culture and Time

]]>
Tue, 05 May 2009 07:35:00 -0700 http://www.3quarksdaily.com/3quarksdaily/2009/04/the-next-great-discontinuity-part-two.html
<![CDATA[hypertext/?="The Metaphor is the Message"]]> http://spacecollective.org/Rourke/3735/hypertextThe-Metaphor-is-the-Message

Readers: Do you think in hypertext?

The era of the linear tome is dead; information is a web - who'd have thought it - a net of knots in time and space, a palimpsest with infinite, self-referential layers.

I find that the model of hypertext has become the metaphor via which my thoughts, my research, find form. I can't read one book at a time. Instead I skip between many, following an annotation in one, buying a reference from another's bibliography, dipping into books by the same or similar authors in the bookstore, scribbling notes in one book about another. I make the world my internet; the library my world wide web.

Rather than describe my journeys in hypertext, how about I carve them in hypertext, for you to explore?

Here's a hypertextual mind-map of some of my recent travels as a reader. Click through to interact hypertextually**

I started this post because I am interested in the metaphors we use to model the world. As our understanding of the world evolves, so do our metaphors. As the metaphors shift, so our models are re-moulded in ever newer forms. The forms metaphors take say a lot about the culture they emerged from. The model, in many aspects, is not important: The metaphor is the message.

For example...

Over the millennia, religions, philosophers, scientists and psychologists have cultivated countless metaphors for the soul, the mind, consciousness. By looking at just a handful of the metaphors prevalent at different times in history, one begins to notice fascinating messages about the cultures that bore them:

If we look back over recent centuries we will see the brain described as a hydrodynamic machine, clockwork, and as a steam engine. When I was a child in the 1950's I read that the human brain was a telephone switching network. Later it became a digital computer, and then a massively parallel digital computer. A few years ago someone put up their hand after a talk I had given at the University of Utah and asked a question I had been waiting for for a couple of years: "Isn't the human brain just like the world wide web?". The brain always seems to be one of the most advanced technologies that we humans currently have. - Rodney A. Brooks

As new technologies and theories are invented, we tend to use them as metaphors to explain the world around us and within us. Consciousness isn't the only human attribute we blindly re-metaphorise.

In recent years the Gaia Hypothesis has become very successful at explaining climate regulation, ecological shifts and the near-constant salinity of the oceans as the workings of Planet Earth's immune system. The model posits Earth as an organism, inspired at a time in history when biological, Darwinian science was reaching its peak. Newton's mechanistic universe was probably influenced by the technically cutting-edge clocks that ticked so perfectly on his office wall. Richard Dawkins' 'meme theory' of culture, likewise, came from a strong understanding of genetics.

Our language itself is packed full of artefacts of metaphor: phrases and words that have become so absolute in our understanding of the world that we forget they all came from technologies we invented. Think of the phrase "letting off some steam". Or "mapping the territory". Or "what makes him tick?" Or "photographic memory". Engines, maps, clocks and photos have become interwoven into our linguistic frameworks, used to describe anger, ideas, and other people's inner realms and inner mindscapes.

There are countless other models that grow out of technological or ideological changes. Cultural movements, in turn, become inspired by the models of the world that exist at the time. So we had the Cubists working shortly after Einstein devised Relativity, and Andy Warhol reacting to consumerist, mass-produced culture by creating art that was itself mass-produced. At present, architects are pursuing design down an organic pathway originally laid out by fractal modelling, organic chemistry and evolutionary theory, twisting the metaphor of the organism - a concept that philosophers of biology try to model with their own metaphors - in order to design and implement more 'natural' human environments.

And the metaphors never stop. Mind is now a quantum computer, mind is a neural network, mind is the internet, mind is a hypertext...

And so I come back to my original point, hypertext, or more specifically the application of hypertext as a metaphor for reading, thinking, researching.

Somewhere in the feedback between culture, science, technology and thought there is an idea called 'human' that persists. Trying to raise this idea to anything above a metaphor is difficult, until we come to recognise the ripples in time and space that our models of reality leave in their wake. Tracing those models back through history and off into the future we begin to draw the outline of ourselves and our limitations.

Is it possible to use and abuse a metaphor, like hypertext, to map that territory, to permanently inscribe those lines in the sand? Even as I attempt to form my ideas into words the metaphors keep coming. Can our evolving metaphors of reality, and of its perception, be plotted? On a map? A hypertextual mind-map? An interlinking system of symbols, signs, cultures, ideas and relationships that feed into each other, grow forward and away from each other, and merge and link back to themselves with enough clicks of the metaphorical mouse-button?

What metaphors are the message? And can Space Collective, and internet entities like it, espouse new messages in their models?

UPDATE: Part Two of this piece can be found here: Palimpsests/Palimpsests/Palimpsests

** I created this mind-map with the online tool mindmeister.com. It is far from a perfect, hypertextual representation of my thoughts as they relate to books. For one thing, the mind-map can only be manipulated into a tree structure, so that branches move outwards but never come back to link with each other across branches.
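That limitation is, formally, the difference between a tree and a general directed graph: in a tree there is exactly one path from the root to any node, whereas a hypertext lets links cross branches and loop back. A minimal sketch in Python (the node names are invented for illustration, not taken from my actual mind-map):

```python
# A hypertext modelled as a directed graph. Two of the links below
# would be impossible in a tree: one loops back to "reading", and
# one crosses branches to reach "metaphor" by a second path.
links = {
    "reading": ["hypertext", "library"],
    "hypertext": ["metaphor", "reading"],  # loops back to the root
    "library": ["metaphor"],               # cross-branch link
    "metaphor": [],
}

def is_tree(graph, root):
    """True if every node is reachable by exactly one path (no revisits)."""
    seen = set()
    stack = [root]
    while stack:
        node = stack.pop()
        if node in seen:
            return False  # reached twice: a cycle or a cross-link
        seen.add(node)
        stack.extend(graph.get(node, []))
    return True

print(is_tree(links, "reading"))  # False: this map is a web, not a tree
```

A tool that only permits tree structures is, in this sense, enforcing exactly the linear taxonomy the essay argues we are leaving behind.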

Apart from this, the mind-map is merely a tool for you to explore: click on some of the links and generally interact with it. Mind Meister allows for collaborative mind-maps - could there be possibilities for Space Collective projects, and the like? If you would like to expand my mind-map then let me know and I can add you as a collaborator.

The metaphor is the message.

]]>
Fri, 18 Apr 2008 04:22:00 -0700 http://spacecollective.org/Rourke/3735/hypertextThe-Metaphor-is-the-Message