MachineMachine /stream - search for shannon https://machinemachine.net/stream/feed en-us http://blogs.law.harvard.edu/tech/rss LifePress therourke@gmail.com <![CDATA[Data Archive Infrastructure 2018 – Shannon Mattern | The New School]]> http://www.wordsinspace.net/data_archive/fall2018/

Our semester will be divided into four units: Epistemological Architectures (Weeks 3-5), Epistemological Apparatae (Weeks 6-8), Human Subjects + Publics (Weeks 10-11), and Collections + Content (Weeks 12-14).

]]>
Sat, 01 Sep 2018 10:09:46 -0700 http://www.wordsinspace.net/data_archive/fall2018/
<![CDATA[The Wearable, Projection-Mapped Mask Is a Cyberpunk Masterpiece | The Creators Project]]> http://thecreatorsproject.vice.com/blog/the-wearable-projection-mapped-mask-is-a-cyberpunk-masterpiece?utm_Source=tcpfbus

Bill Shannon Wearable Video Mask from william shannon on Vimeo.

]]>
Sat, 29 Aug 2015 15:33:43 -0700 http://thecreatorsproject.vice.com/blog/the-wearable-projection-mapped-mask-is-a-cyberpunk-masterpiece?utm_Source=tcpfbus
<![CDATA[Four Notes Towards Post-Digital Propaganda | post-digital-research]]> http://post-digital.projects.cavi.dk/?p=475

“Propaganda is called upon to solve problems created by technology, to play on maladjustments and to integrate the individual into a technological world” (Ellul xvii).

How might future research into digital culture approach a purported “post-digital” age? How might this be understood?

1.

A problem comes from the discourse of ‘the digital’ itself: a moniker which points towards units of Base-2 arbitrary configuration, impersonal architectures of code, massive extensions of modern communication and ruptures in post-modern identity. Terms are messy, and it has never been easy to establish a ‘post’ from something, when pre-discourse definitions continue to hang in the air. As Florian Cramer has articulated so well, ‘post-digital’ is something of a loose, ‘hedge your bets’ term, denoting a general tendency to criticise the digital revolution as a modern innovation (Cramer).

Perhaps it might be aligned with what some have dubbed “solutionism” (Morozov) or “computationalism” (Berry 129; Golumbia 8): the former critiques a Silicon Valley-led ideology oriented towards solving liberalised problems through efficient computerised means; the latter establishes the notion (and critique thereof) that the mind, and everything associated with it, is inherently computable. In both cases, digital technology is no longer just a business that privatises information, but the business of extending efficient, innovative logic to all corners of society and human knowledge, condemning everything else through a cultural logic of efficiency.

In fact, there is a good reason why ‘digital’ might as well be a synonym for ‘efficiency’. Before any consideration is assigned to digital media objects (i.e. platforms, operating systems, networks), consider the inception of ‘the digital’ as such: that is, information theory. If information was a loose, shabby, inefficient method of vagueness specific to various media of communication, Claude Shannon compressed all forms of communication into a universal system with absolute mathematical precision (Shannon). Once information became digital, the conceptual leap of determined symbolic logic was set into motion, and with it, the ‘digital’ became synonymous with an ideology of effectivity. No longer would miscommunication be subject to human finitude, nor to matters of distance and time, but only to the limits of entropy and the matter of automating messages through the support of alternating ‘true’ or ‘false’ relay systems.
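Shannon’s measure can be restaged in a few lines of code. The sketch below is an illustrative Python fragment of my own (not anything from Shannon’s paper): it computes the entropy of a message from its empirical symbol frequencies, the theoretical floor, in bits per symbol, beneath which no encoding can compress it.

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Bits per symbol of a message under its empirical
    symbol distribution: -sum(p * log2(p)) over symbols."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A perfectly regular message like `"aaaa"` carries 0 bits per symbol; a balanced binary string like `"0101"` carries exactly 1.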

However, it would be quite difficult to envisage any ‘post-computational’ break from such discourses – and with good reason: Shannon’s breakthrough was only systematically effective through the logic of computation. So the old missed encounter goes: Shannon presupposed Alan Turing’s mathematical idea of computation to transmit digital information, and Turing presupposed Shannon’s information theory to understand what his Universal Turing Machines were actually transmitting. The basic theories of both have not changed, but the materials affording greater processing power, extensive server infrastructure and larger storage space have simply increased the means for these ideas to proliferate, irrespective of what Turing and Shannon actually thought of them (some historians even speculate that Turing may have made the link between information and entropy two years before Bell Labs did) (Good).

Thus a ‘post-digital’ reference point might encompass the historical acknowledgment of Shannon’s digital efficiency and Turing’s logic, but by the same measure open up a space for critical reflection on how such efficiencies have transformed not only work, life and culture but also artistic praxis and aesthetics. This is not to say that digital culture is reducibly predicated on efforts made in computer science, but instead fully acknowledges these structures and accounts for how ideologies propagate reactionary attitudes and beliefs within them, whilst restricting other alternatives which do not fit their ‘vision’. Hence, the post-digital ‘task’ set for us nowadays might consist in critiquing digital efficiency and how it has come to work against commonality, despite transforming the majority of Western infrastructure in its wake.

The purpose of these notes is to outline how computation has imparted an unwarranted effect of totalised efficiency, and to give this effect the label it deserves: propaganda. The fact that Shannon and Turing had multiple lunches together at Bell Labs in 1943, held conversations and exchanged ideas, but did not exchange detailed methods of cryptanalysis (Price & Shannon), provides a nice contextual allegory for how digital informatics strategies fail to be transparent.

But in saying this, I do not mean that companies only use digital networks for propagative means (although that happens), but that the very means of computing a real concrete function is constitutively propagative. In this sense, propaganda resembles a post-digital understanding of what it means to be integrated into an ecology of efficiency, and how technical artefacts are literally enacted as propagative decisions. Digital information often deceives us into accepting its transparency, and into holding it to that account: yet in reality it does the complete opposite, with no given range of judgements available to distinguish manipulation from education, or persuasion from smear. It is the procedural act of interacting with someone else’s automated conceptual principles, embedding pre-determined decisions which not only generate but pre-determine one’s ability to make choices about such decisions, like propaganda.

This might consist in distancing ideological definitions of false consciousness as an epistemological limit to knowing alternatives within thought, and engaging instead with real programmable systems which embed such limits concretely, withholding the means to transform them. In other words, propaganda incorporates how ‘decisional structures’ structure other decisions, either conceptually or systematically.

2.

Two years before Shannon’s famous Master’s thesis, Turing published what would become the theoretical basis for computation in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” The focus of the paper was to establish the idea of computation within a formal system of logic, which, when automated, would solve particular mathematical problems put into function (Turing, An Application). What is not necessarily taken into account is the mathematical context of that idea: the foundations of mathematics were already precarious well before Turing outlined anything in 1936. Contra the efficiency of the digital, this is a precariousness built into computation from its very inception: the precariousness of solving all problems in mathematics.

The key word of that paper, its key focus, was the Entscheidungsproblem, or decision problem. Originating from David Hilbert’s mathematical school of formalism, ‘decision’ means something more rigorous than the sorts of decisions in daily life. It really means a ‘proof theory’, or how analytic problems in number theory and geometry could be formalised, and thus efficiently solved (Hilbert 3). Solving a theorem is simply finding a provable ‘winning position’ in a game. As with Shannon, ‘decision’ is what happens when an automated system of function is constructed in such a sufficiently complex way that an algorithm can always ‘decide’ a binary, yes or no answer to a mathematical problem, when given an arbitrary input, in a sufficient amount of time. It does not require ingenuity, intuition or heuristic gambles, just a combination of simple consistent formal rules and a careful avoidance of contradiction.
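Hilbert’s sense of ‘decision’ can be illustrated with a deliberately trivial Python sketch (my example, not Hilbert’s): a total procedure which, for any natural-number input, always halts with a yes or no answer, requiring no ingenuity, only the mechanical application of rules.

```python
def decides_prime(n):
    """A total decision procedure: for every natural-number input it
    always halts with a binary verdict, by mechanically applying rules."""
    if n < 2:
        return False  # 0 and 1 are not prime by definition
    # Trial division: a finite, rule-bound check that always terminates.
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))
```

Given any input whatsoever, the procedure ‘decides’: yes for 7, no for 8, with no gamble or intuition involved.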

The two key words there are ‘always’ and ‘decide’: the progressive end-game of twentieth-century mathematicians who, like Hilbert, sought a simple totalising conceptual system to decide every mathematical problem and work towards absolute knowledge. All Turing had to do was make explicit Hilbert’s implicit computational treatment of formal rules: manipulate symbol strings and automate them using an ’effective’ or “systematic method” (Turing, Solvable and Unsolvable Problems 584) encoded into a machine. This is what Turing’s thesis meant (discovered independently of Alonzo Church’s equivalent thesis (Church)): any effectively calculable function can be computed by a Turing machine (Turing, An Application), or in Robin Gandy’s words, “[e]very effectively calculable function is a computable function” (Gandy).
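Turing’s ‘systematic method’ is concrete enough to interpret in a dozen lines. The following is a minimal, hypothetical Turing-machine interpreter in Python (the table format and names are my own, not Turing’s notation): a program is nothing but a lookup table mapping (state, symbol) to (write, move, next state), and the machine takes each step with no ingenuity at all.

```python
def run_turing_machine(program, tape, state='start', steps=1000):
    """Minimal Turing-machine interpreter.
    program maps (state, symbol) -> (write, move, next_state);
    move is -1 (left), 0 (stay) or +1 (right); 'halt' stops the machine.
    '_' stands for a blank cell."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == 'halt':
            break
        symbol = tape.get(head, '_')
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += move
    return ''.join(tape[i] for i in sorted(tape)), state

# A machine that flips every bit and halts at the first blank.
FLIP = {
    ('start', '0'): ('1', +1, 'start'),
    ('start', '1'): ('0', +1, 'start'),
    ('start', '_'): ('_', 0, 'halt'),
}
```

Run on the tape `"1011"`, the machine mechanically rewrites it to `"0100"` and enters its halting state.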

Thus effective procedures decide problems, and they resolve puzzles providing winning positions (like theorems) in the game of functional rules and formal symbols. In Turing’s words, “a systematic procedure is just a puzzle in which there is never more than one possible move in any of the positions which arise and in which some significance is attached to the final result” (Turing, Solvable and Unsolvable Problems 590). The significance, or the winning position, becomes the crux of the matter for the decision: what puzzles or problems are to be decided? This is what formalism attempted to do: encode everything through the regime of formalised efficiency, so that all mathematically inefficient problems are, in principle, ready to be solved. Programs are simply proofs: if it could be demonstrated mathematically, it could be automated.

In 1936, Turing had shown that certain sufficiently complex effective procedures could simulate the functional decisions of all other effective procedures (the Universal Turing Machine). Ten years later, Turing and John von Neumann would independently show how physical general-purpose computers offered the same thing, and from that moment on, efficient digital decisions manifested themselves in the cultural application of physical materials. Before Shannon’s information theory offered the precision of transmitting information, Hilbert and Turing developed the structure of its transmission in the underlying regime of formal decision.

Yet there was also a non-computational importance here, for Turing was equally fascinated by what decisions couldn’t compute. His thesis was quite precise, so as to elucidate that if no mathematical problem could be proved, a computer was not of any use. In fact, the entire focus of his 1936 paper, often neglected by Silicon Valley cohorts, was to show that Hilbert’s particular decision problem could not be solved. Unlike Hilbert, Turing was not interested in using computation to solve every problem, but in it as a curious endeavour for surprising intuitive behaviour. Most important of all, Turing’s halting (or printing) problem was influential precisely because it was undecidable: a decision problem which couldn’t be decided.

We can all picture the halting problem, even obliquely. Picture the frustrated programmer or mathematician staring at their screen, waiting to know whether an algorithm will halt and spit out a result, or provide no answer. The computer itself has already determined the answer for us; the programmer just has to know when to give up. But this is a myth, inherited with a bias towards human knowledge, and a demented understanding of machines as infinite calculating engines, rather than concrete entities of decision. For reasons that escape word space, Turing didn’t understand the halting problem in this way: instead he understood it as a contradictory example of computational decisions failing to decide on each other, on the account that there could never be one totalising decision or effective procedure. There is no guaranteed effective procedure to decide on all the others, and any attempt to build one (or invest in a view which might help build one) either has too much investment in absolute formal reason, or ends up with ineffective procedures.

Undecidable computation might be looked at as a dystopian counterpart against the efficiency of Shannon’s ‘digital information’ theory. A base-2 binary system of information resembles one of two possible states, whereby a system can communicate with one digit only in virtue of the fact that there is one other digit alternative to it. Yet the perfect transmission of that information is only subject to a system which can ‘decide’ on the digits in question, and establish a proof to calculate a success rate. If there is no mathematical proof to decide a problem, then transmitting information becomes problematic for establishing a solution.

3.

What has become clear is that our world is no longer simply accountable to human decision alone. Decisions are no longer limited to the borders of human decisions and ‘culture’ is no longer simply guided by a collective whole of social human decisions. Nor is it reducible to one harmonious ‘natural’ collective decision which prompts and pre-empts everything else. Instead we seem to exist in an ecology of decisions: or better yet decisional ecologies. Before there was ever the networked protocol (Galloway), there was the computational decision. Decision ecologies are already set up before we enter the world, implicitly coterminous with our lives: explicitly determining a quantified or bureaucratic landscape upon which an individual has limited manoeuvrability.

Decisions are not just digital, they are continuous, as computers can be: yet decisions are at their most efficient when digitally transferred. Decisions are everywhere and in everything. Look around. We are constantly told by governments and states that they are making tough decisions in the face of austerity. CEOs and Directors make tough decisions for the future of their companies, and ‘great’ leaders are revered for being ‘great decisive leaders’: not just making decisions quickly and effectively, but also settling issues and producing definite results.

Even the word ‘decide’ derives from the Latin ‘decidere’, which means to determine something, ‘to cut off.’ Algorithms in financial trading know not of value, but of decision: whether something is marked by profit or loss. Drones know not of human ambiguity, but can only decide between kill and ignore, cutting off anything in-between. Constructing a system which decides between one of two digital values, even repeatedly, means cutting off and excluding all other possible variables, leaving a final result at the end of the encoded message. Making a decision, or building a system to decide a particular ideal or judgement, must force other alternatives outside of it. Decisions are always-already embedded into the framework of digital action, always already deciding what is to be done, how it can be done or what is threatening to be done. It would make little sense to suggest that these entities ‘make decisions’ or ‘have decisions’; it would be better to say that they are decisions, and that ecologies are constitutively constructed by them.

The importance of neo-liberal digital transmissions is not that they become innovative, or worthy of a zeitgeist break: but that they demonstrably decide problems whose predominant significance is beneficial for self-individual efficiency and the accumulation of capital. Digital efficiency is simply about the expansion of automating decisions and what sort of formalised significances must be propagated to solve social and economic problems, which creates new problems in a vicious circle.

The question can no longer simply be ‘who decides’, but now, ‘what decides?’ Is it the cafe menu board, the dinner party etiquette, the NASDAQ share price, Google PageRank, railway network delays, unmanned combat drones, the newspaper crossword, the javascript regular expression or the differential calculus? It’s not quite right to say that algorithms rule the world, whether in algo-trading or in data capture; rather, there is the uncomfortable realisation that real entities are built to determine provable outcomes time and time again: most notably ones for accumulating profit and extracting revenue from multiple resources.

One pertinent example: consider George Dantzig’s simplex algorithm. This effective procedure (whose origins began in multidimensional geometry) can always decide solutions for the large-scale optimisation problems which continually affect multi-national corporations. The simplex algorithm’s proliferation and effectiveness has been critical since its first commercial application in 1952, when Abraham Charnes and William Cooper used it to decide how best to optimally blend four different petroleum products at the Gulf Oil Company (Elwes 35; Gass & Assad 79). Since then the simplex algorithm has had years of successful commercial use, deciding almost everything from bus timetables and work shift patterns to trade shares and Amazon warehouse configurations. According to the optimisation specialist Jacek Gondzio, the simplex algorithm runs at “tens, probably hundreds of thousands of calls every minute” (35), always deciding the most efficient method of extracting optimisation.
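Dantzig’s method is easy to see in miniature. The sketch below is a bare-bones tableau implementation of the simplex method in Python (a pedagogical toy with invented numbers, nothing like the industrial solvers Gondzio describes, and without the degeneracy safeguards real solvers need): it maximises a linear objective under linear constraints by pivoting between ‘winning positions’ at the vertices of the feasible region.

```python
def simplex(c, A, b):
    """Maximise c.x subject to A.x <= b and x >= 0 (all b >= 0),
    using Dantzig's tableau method. Returns (optimum, x)."""
    m, n = len(A), len(c)
    # Tableau rows: one per constraint (with slack variables), plus
    # the objective row; final column is the right-hand side.
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [float(b[i])]
         for i, row in enumerate(A)]
    T.append([-float(ci) for ci in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))
    while True:
        # Entering variable: most negative coefficient in objective row.
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-9:
            break  # no improving direction: optimal
        # Leaving variable: minimum ratio test keeps us feasible.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("unbounded problem")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        # Pivot: normalise the row, eliminate the column elsewhere.
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row:
                f = T[i][piv_col]
                T[i] = [vi - f * vp for vi, vp in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return T[-1][-1], x
```

Fed a toy blending problem, maximise 3x + 2y subject to x + y ≤ 4 and x + 3y ≤ 6, it pivots to the optimal decision 12 at the vertex (4, 0).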

In contemporary times, nearly all decision ecologies work in this way, accompanying and facilitating neo-liberal methods of self-regulation and processing all resources through a standardised efficiency: from bureaucratic methods of formal standardisation, banal forms ready to be analysed by one central system, to big-data initiatives and simple procedural methods of measurement and calculation. The technique of decision is a propagative method of embedding knowledge, optimisation and standardisation techniques in order to solve problems, together with an urge to solve the most unsolvable ones, including us.

Google do not build into their services an option to pay for the privilege of protecting privacy: the entire point of providing a free service which purports to improve daily life is that it primarily benefits the interests of shareholders and extends commercial agendas. James Grimmelmann gave a heavily detailed exposition of Google’s own ‘net neutrality’ algorithms and how biased they happen to be. In short, PageRank does not simply decide relevant results, it decides visitor numbers, and he concluded on this note:

With disturbing frequency, though, websites are not users’ friends. Sometimes they are, but often, the websites want visitors, and will be willing to do what it takes to grab them (Grimmelmann 458).

If the post-digital stands for the self-criticality of digitalisation already underpinning contemporary regimes of digital consumption and production, then its saliency lies in understanding the logic of decision inherent to such regimes. The reality of the post-digital shows that machines remain curiously efficient whether we relish in cynicism or not. Such regimes of standardisation and determined results were already ‘mistakenly built in’ to the theories which developed digital methods and means, irrespective of what computers can or cannot compute.

4.

Why then should such post-digital actors be understood as instantiations of propaganda? The familiarity of propaganda is manifestly evident in religious and political acts of ideological persuasion: brainwashing, war activity, political spin, mind control techniques, subliminal messages, political campaigns, cartoons, belief indoctrination, media bias, advertising or news reports. A definition of propaganda might follow from all of these examples: namely, the systematic social indoctrination of biased information that persuades the masses to take action on something which is neither beneficial to them nor in their best interests. Or, as Peter Kenez writes, propaganda is “the attempt to transmit social and political values in the hope of affecting people’s thinking, emotions, and thereby behaviour” (Kenez 4). Following Stanley B. Cunningham’s watered-down definition, propaganda might also denote a helpful and pragmatic “shorthand statement about the quality of information transmitted and received in the twentieth century” (Cunningham 3).

But propaganda isn’t as clear as this general definition makes out: in fact what makes propaganda studies such a provoking topic is that nearly every scholar agrees that no stable definition exists. Propaganda moves beyond simple ‘manipulation’ and ‘lies’ or derogatory, jingoistic representation of an unsubtle mood – propaganda is as much about the paradox of constructing truth, and the irrational spread of emotional pleas, as well as endorsing rational reason. As the master propagandist William J. Daugherty wrote:

It is a complete delusion to think of the brilliant propagandist as being a professional liar. The brilliant propagandist […] tells the truth, or that selection of the truth which is requisite for his purpose, and tells it in such a way that the recipient does not think that he is receiving any propaganda…. (Daugherty 39).

Propaganda, like ideology, works by being inherently implicit and social. In the same way that post-ideology apologists ignore their symptom, propaganda is also ignored. It isn’t to be taken as a shadowy fringe activity, blown apart by the democratising fairy-dust of ‘the Internet’. As many others have noted, the purported ‘decentralising’ power of online networks offers new methods for propagative techniques, or ‘spinternet’ strategies, evident in China (Brady). Iran’s recent investment in video game technology only makes sense when you discover that 70% of Iran’s population are under 30 years of age, underscoring a suitable contemporary method of dissemination. Similarly in 2011, the New York City video game developer Kuma Games was mired in controversy when it was discovered that an alleged CIA agent, Amir Mirza Hekmati, had been recruited to make an episodic video game series intending to “change the public opinion’s mindset in the Middle East” (Tehran Times). The game in question, Kuma\War (2006 – 2011), was a free-to-play First-Person Shooter series, delivered in episodic chunks, the format of which attempted to simulate biased re-enactments of real-life conflicts shortly after they reached public consciousness.

Despite his unremarkable leanings towards Christian realism, Jacques Ellul famously updated propaganda’s definition as the end product of what he previously lamented as ‘technique’. Instead of viewing propaganda as a highly organised systematic strategy for extending the ideologues of peaceful warfare, he understood it as a general social phenomenon in contemporary society.

Ellul outlined two types, political and sociological propaganda. Political propaganda involves governmental, administrative techniques which intend to directly change the political beliefs of an intended audience. By contrast, sociological propaganda is the implicit unification of involuntary public behaviour which creates images, aesthetics, problems, stereotypes, the purpose of which is neither explicitly direct nor overtly militaristic. Ellul argues that sociological propaganda exists “in advertising, in the movies (commercial and non-political films), in technology in general, in education, in the Reader’s Digest; and in social service, case work, and settlement houses” (Ellul 64). It is linked to what Ellul called “pre” or “sub-propaganda”: that is, an imperceptible persuasion, silently operating within one’s “style of life” or permissible attitude (63). Anticipating Louis Althusser’s Ideological State Apparatuses (Althusser 182) by nearly ten years, Ellul defines it as “the penetration of an ideology by means of its sociological context” (63). Sociological propaganda is inadequate for decisive action, paving the way for political propaganda – its strengthened explicit cousin – once the former’s implicitness needs to be transformed into the latter’s explicitness.

In a post-digital world, such implicitness no longer gathers wartime spirits, but instead propagates a neo-liberal way of life that is individualistic, wealth driven and opinionated. Ellul’s most powerful assertion is that ‘facts’ and ‘education’ are part and parcel of the sociological propagative effect: nearly everyone faces a compelling need to be opinionated and we are all capable of judging for ourselves what decisions should be made, without at first considering the implicit landscape from which these judgements take place. One can only think of the implicit digital landscape of Twitter: the archetype for self-promotion and snippets of opinions and arguments – all taking place within Ellul’s sub-propaganda of data collection and concealment. Such methods, he warns, will have “solved the problem of man” (xviii).

But information is of relevance here, and propaganda is only effective within a social community when it offers the means to solve problems using the communicative purview of information:

Thus, information not only provides the basis for propaganda but gives propaganda the means to operate; for information actually generates the problems that propaganda exploits and for which it pretends to offer solutions. In fact, no propaganda can work until the moment when a set of facts has become a problem in the eyes of those who constitute public opinion (114).

]]>
Wed, 11 Dec 2013 15:42:45 -0800 http://post-digital.projects.cavi.dk/?p=475
<![CDATA[Man of Steel Viral - General Zod's Warning (2013) Superman Movie HD]]> http://www.youtube.com/watch?v=5QkfmqsDTgY&feature=youtube_gdata

Man of Steel Viral Video

From the official Man of Steel facebook page.

A child sent to Earth from a dying planet is adopted by a couple in rural Kansas. Posing as a journalist, he uses his extraordinary powers to protect his new home from an insidious evil.


In addition to hot new trailers, the Movieclips Trailers page gives you original content like Ultimate Trailers, Instant Trailer Reviews, Monthly Mashups, and Meg's Movie News to keep you up-to-date on what's out this week and what you should be watching.

The Phantom Zone

]]>
Tue, 21 May 2013 19:44:03 -0700 http://www.youtube.com/watch?v=5QkfmqsDTgY&feature=youtube_gdata
<![CDATA[Claude Elwood Shannon's addition of a twenty-seventh letter to the alphabet]]> http://twitter.com/therourke/statuses/217279298548142080 ]]>
Mon, 25 Jun 2012 08:33:00 -0700 http://twitter.com/therourke/statuses/217279298548142080
<![CDATA[Rigid Implementation vs Flexible Materiality]]> http://machinemachine.net/text/research/rigid-implementation-vs-flexible-materiality

Wow. It’s been a while since I updated my blog. I intend to get active again here soon, with regular updates on my research. For now, I thought it might be worth posting a text I’ve been mulling over for a while (!) Yesterday I came across this old TED presentation by Daniel Hillis, and it set off a bunch of bells tolling in my head. His book The Pattern on the Stone was one I leafed through a few months back whilst hunting for some analogies about (digital) materiality. The resulting brainstorm is what follows. (This blog post, from even longer ago, acts as a natural introduction: On (Text and) Exaptation)

In the 1960s and 70s Roland Barthes named “The Text” as a network of production and exchange. Whereas “the work” was concrete, final – analogous to a material – “the text” was more like a flow, a field or event – open ended. Perhaps even infinite. In From Work to Text, Barthes wrote:

The metaphor of the Text is that of the network… (Barthes 1979)

This semiotic approach to discourse, by initiating the move from print culture to “text” culture, also helped lay the ground for a contemporary politics of content-driven media. Skipping backwards through From Work to Text, we find this statement:

The text must not be understood as a computable object. It would be futile to attempt a material separation of works from texts.

I am struck here by Barthes’ use of the phrase “computable object”, as well as his attention to the “material”. Katherine Hayles, in her essay Text is Flat, Code is Deep (Hayles 2004), teases out the statement for us:

‘computable’ here mean[s] to be limited, finite, bound, able to be reckoned. Written twenty years before the advent of the microcomputer, his essay stands in the ironic position of anticipating what it cannot anticipate. It calls for a movement away from works to texts, a movement so successful that the ubiquitous ‘text’ has all but driven out the media-specific term book.
Hayles notes that the “ubiquity” of Barthes’ term “Text” allowed – in its wake – an erasure of media-specific terms, such as “book”. In moving from The Work to The Text, we move not just between different politics of exchange and dissemination; we also move between different forms and materialities of mediation (Manovich 2002). For Barthes the material work was computable, whereas the network of the text – its content – was not.

In 1936, the year that Alan Turing wrote his iconic paper ‘On Computable Numbers’, a German engineer by the name of Konrad Zuse began building the first working digital computer. Like its industrial predecessors, Zuse’s computer was designed to function via a series of holes encoding its program. Born as much out of convenience as financial necessity, Zuse punched his programs directly into discarded reels of 35mm film-stock. Fused together by the technologies of weaving and cinema, Zuse’s computer announced the birth of an entirely new mode of textuality. The Z3, the world’s first working programmable, fully automatic computer, arrived in 1941. (Manovich 2002)

A year earlier a young graduate by the name of Claude Shannon had published one of the most important Master’s theses in history. In it he demonstrated that any logical expression of Boolean algebra could be programmed into a series of binary switches. Today computers still function with a logic impossible to distinguish from that of their mid-20th century ancestors. What has changed is the material environment within which Boolean expressions are implemented. Shannon’s work first found itself manifest in the fragile rows of vacuum tubes that drove much of the technical innovation of the 40s and 50s. In time, the very same Boolean expressions were firing, domino-like, through millions of transistors etched onto the surface of silicon chips. If we were to query the young Shannon today, he might well gawp in amazement at the material advances computer technology has gone through. But if Shannon was to examine either your digital wrist watch or the world’s most advanced supercomputer in detail, he would once again feel at home in the simple binary – on/off – switches lining those silicon highways. Here the difference between how computers are implemented and what computers are made of digs the first of many potholes along our journey.
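Shannon’s result – that any Boolean expression can be realised in switching circuits – is easy to restage in software, which is precisely the point about materiality: the same logic runs in relays, vacuum tubes, transistors or, as in this illustrative Python sketch of mine, ordinary truth values. Every gate below is built from a single NAND ‘switch’, and a full adder assembled from those gates adds whole numbers.

```python
# Boolean primitives: every gate derived from one NAND 'switch'.
def NAND(a, b): return not (a and b)
def NOT(a): return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b): return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, carry_in):
    """One column of binary addition, wired purely from gates."""
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

def add_bits(x, y, width=8):
    """Add two integers using only the gate network above."""
    carry, result = False, 0
    for i in range(width):
        s, carry = full_adder(bool(x >> i & 1), bool(y >> i & 1), carry)
        result |= int(s) << i
    return result
```

Whether the switches are relays or etched silicon, `add_bits(21, 33)` reaches 54 by the same Boolean logic; at a fixed width of 8 bits, 255 + 1 wraps to 0, the finitude of any material register.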
We live in an era not only practically driven by the computer, but an era increasingly determined by the metaphors computers have injected into our language. Let us not make the mistake of presupposing that brains (or perhaps minds) are “like” computers. Tempting though it is to reduce the baffling complexities of the human being to the functions of the silicon chip, the parallel processor or the Wide Area Network, this reduction occurs most usefully at the level of metaphor and metonym. Again the mantra must be repeated that computers function through the application of Boolean logic and binary switches, something that cannot be said about the human brain with any confidence a posteriori. Later I will explore the consequences for our understanding of ourselves enabled by the processing paradigm, but for now, or at least for the next few paragraphs, computers are to be considered in terms of their rigid implementation and flexible materiality alone. At the beginning of his popular science book, The Pattern on the Stone (Hillis 1999), W. Daniel Hillis narrates one of his many tales on the design and construction of a computer. Built from tinker-toys, the computer in question was/is functionally complex enough to “play” tic-tac-toe (noughts and crosses). The tinker-toy was chosen to indicate the apparent simplicity of computer design, but as Hillis argues himself, he may very well have used pipes and valves to create a hydraulic computer, driven by water pressure, or stripped the design back completely, using flowing sand, twigs and twine or any other recipe of switches and connectors. The important point is that the tinker-toy tic-tac-toe computer functions perfectly well for the task it is designed for – perfectly well, that is, until the tinker-toy material begins to fail. This failure is what Chapter 1 of this thesis is about: why it happens, why its happening is a material phenomenon and how the very idea of “failure” is suspect.
Tinker-toys fail because the mechanical operation of the tic-tac-toe computer puts strain on the strings of the mechanism, eventually stretching them beyond practical use. In a perfect world, devoid of entropic behaviour, the tinker-toy computer may very well function forever, its users setting O or X conditions, and the computer responding according to its program in perfect, logical order. The design of the machine, at the level of the program, is completely closed; finished; perfect. Only materially does the computer fail (or flail), noise leaking into the system until inevitable chaos ensues and the tinker-toys crumble back into jumbles of featureless matter. This apparent closure is important to note at this stage because in a computer as simple as the tic-tac-toe machine, every variable can be accounted for and thus programmed for. Were we to build a chess-playing computer from tinker-toys (pretending we could get our hands on the, no doubt, millions of tinker-toy sets we’d need) the closed condition of the computer may be less simple to qualify. Tinker-toys, hydraulic valves or whatever material you choose could be manipulated into any computer system you can imagine; even the most brain-numbingly complicated IBM supercomputer is technically possible to build from these fundamental materials. The reason we don’t do this, why we instead choose etched silicon as the material of choice for our supercomputers, exposes another aspect of computers we need to understand before their failure becomes a useful paradigm. A chess-playing computer is probably impossible to build from tinker-toys, not because its program would be too complicated, but because tinker-toys are too prone to entropy to create a valid material environment.
The program of any chess-playing application could, theoretically, be translated into a tinker-toy equivalent, but after the 1,000th string had stretched, with millions more to go, no energy would be left in the system to trigger the next switch along the chain. Computer inputs and outputs are always at the mercy of this kind of entropy: whether in tinker-toys or miniature silicon highways. Noise and dissipation are inevitable at any material scale one cares to examine. The second law of thermodynamics ensures this. Claude Shannon and his ilk knew this, even back when the most advanced computers they had at their command couldn’t yet play tic-tac-toe. They knew that they couldn’t rely on materiality to delimit noise, interference or distortion; that no matter how well constructed a computer is, no matter how incredible it was at materially stemming entropy (perhaps with stronger string connectors, or a built-in de-stretching mechanism), entropy nonetheless was inevitable. But what Shannon and other computer innovators such as Alan Turing also knew is that their saviour lay in how computers were implemented. Again, the split here is incredibly important to note:

Flexible materiality: how and of what a computer is constructed, e.g. tinker-toys, silicon.
Rigid implementation: Boolean logic enacted through binary on/off switches (usually with some kind of input → storage → feedback/program function → output). Effectively, how a computer works.
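The rigid implementation named here – input, storage, program function, output – can itself be sketched independently of any material substrate (a minimal hypothetical sketch; the names and the toggle program are mine, chosen only for illustration):

```python
# A minimal 'rigid implementation': whatever the material substrate,
# the cycle is input -> storage -> feedback/program function -> output.
def run_machine(inputs, program, state=0):
    outputs = []
    for bit in inputs:                # input
        state = program(state, bit)   # feedback/program function;
                                      # `state` is the storage
        outputs.append(state)         # output
    return outputs

# Example program: a toggle that flips the stored state on every 1-bit.
toggle = lambda state, bit: state ^ bit
print(run_machine([1, 0, 1, 1], toggle))  # [1, 1, 0, 1]
```

Whether `run_machine` is realised in string and spools or etched silicon changes nothing about its logic; only its susceptibility to entropy differs.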

Boolean logic was not enough on its own. Computers, if they were to avoid entropy ruining their logical operations, needed to have built within them an error management protocol. This protocol is still in existence in EVERY computer in the world. Effectively it takes the form of a collection of parity bits delivered alongside each packet of data that computers, networks and software deal with. The bulk of the data contains the binary bits encoding the intended quarry, but the receiving element in the system also checks the main bits against the parity bits to determine whether any noise has crept into the system. What is crucial to note here is that the error-checking of computers happens at the level of their rigid implementation. It is also worth noting that for every eight 0s and 1s delivered by a computer system, at least one of those bits is an error-checking function. W. Daniel Hillis draws the lesson of his stretched tinker-toy strings into clear distinction and, in doing so, re-introduces an umbrella term set to dominate this chapter:

“I constructed a later version of the Tinker Toy computer which fixed the problem, but I never forgot the lesson of the first machine: the implementation technology must produce perfect outputs from imperfect inputs, nipping small errors in the bud. This is the essence of digital technology, which restores signals to near perfection at every stage. It is the only way we know – at least, so far – for keeping a complicated system under control.” (Hillis 1999, 18)

Bibliography

Barthes, Roland. 1979. ‘From Work to Text.’ In Textual Strategies: Perspectives in Poststructuralist Criticism, ed. Josue V. Harari, 73–81. Ithaca, NY: Cornell University Press.
Hayles, N. Katherine. 2004. ‘Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis.’ Poetics Today 25 (1) (March): 67–90. doi:10.1215/03335372-25-1-67.
Hillis, W. Daniel. 1999. The Pattern on the Stone: The Simple Ideas That Make Computers Work. 1st paperback ed. New York: Basic Books.
Manovich, Lev. 2002. The Language of New Media. 1st MIT Press pbk. ed. Cambridge, Mass.: MIT Press.
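The parity-bit protocol described above can be sketched as follows (a deliberately simplified illustration using a single even-parity bit per packet; real systems layer far more elaborate error-correcting codes on top of this idea):

```python
# Even parity: append one bit so the total count of 1s is even.
def add_parity(data_bits):
    parity = sum(data_bits) % 2
    return data_bits + [parity]

def check_parity(received):
    # The receiver re-counts the 1s; an odd total means noise crept in.
    return sum(received) % 2 == 0

packet = add_parity([1, 0, 1, 1, 0, 1, 0])
print(check_parity(packet))   # True: no noise

packet[2] ^= 1                # entropy flips a single bit in transit
print(check_parity(packet))   # False: the error is caught
```

Note the check happens entirely at the level of rigid implementation: the receiver never inspects the material channel, only the bits.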

]]>
Thu, 07 Jun 2012 06:08:07 -0700 http://machinemachine.net/text/research/rigid-implementation-vs-flexible-materiality
<![CDATA["Models of communication are…not merely representations of communication but representations for..."]]> http://tumblr.machinemachine.net/post/17947545876

“Models of communication are…not merely representations of communication but representations for communication: templates that guide, unavailing or not, concrete processes of human interaction, mass and interpersonal.” - James Carey, Communication as Culture: Essays on Media and Society

The Shannon and Weaver Model - The Late Age of Print

]]>
Mon, 20 Feb 2012 07:21:08 -0800 http://tumblr.machinemachine.net/post/17947545876
<![CDATA[“The Shannon and Weaver Model”]]> http://www.thelateageofprint.org/2012/02/20/the-shannon-and-weaver-model/

The genius of Shannon’s original paper from 1948 and its subsequent popularization by Weaver lies in many things, among them, their having formulated a model of communication located on the threshold of these two understandings of theory. As a scientist Shannon surely felt accountable to the empirical world, and his work reflects that. Yet, it also seems clear that Shannon and Weaver’s work has, over the last 60 years or so, taken on a life of its own, feeding back into the reality they first set about describing. Shannon and Weaver didn’t merely model the world; they ended up enlarging it, changing it, and making it over in the image of their research.

]]>
Mon, 20 Feb 2012 06:51:16 -0800 http://www.thelateageofprint.org/2012/02/20/the-shannon-and-weaver-model/
<![CDATA[Noise; Mutation; Autonomy: A Mark on Crusoe’s Island]]> http://machinemachine.net/text/research/a-mark-on-crusoes-island

This mini-paper was given at the Escapologies symposium, at Goldsmiths University, on the 5th of December. Daniel Defoe’s 1719 novel Robinson Crusoe centres on the shipwreck and isolation of its protagonist. The life Crusoe knew beyond this shore was fashioned by ships sent to conquer New Worlds and political wills built on slavery and imperial demands. In writing about his experiences, Crusoe orders his journal, not by the passing of time, but by the objects produced in his labour. A microcosm of the market hierarchies his seclusion removes him from: a tame herd of goats, a musket and gunpowder, sheaves of wheat he fashions into bread, and a shelter carved from rock with all the trappings of a King’s castle. Crusoe structures the tedium of the island by gathering and designing these items that exist solely for their use-value: “In a Word, The Nature and Experience of Things dictated to me upon just Reflection, That all the good Things of this World, are no farther good to us, than they are for our Use…” [1] Although Crusoe’s Kingdom mirrors the imperial British order, its mirroring is more structural than anything else. The objects and social contrivances Crusoe creates have no outside with which to be exchanged. Without an ‘other’ to share your labour there can be no mutual assurance, no exchanges leading to financial agreements, no business partners, no friendships. But most importantly to the mirroring of any Kingdom, without an ‘other’ there can be no disagreements, no coveting of a neighbour’s ox, no domination, no war: in short, an Empire without an outside might be complete, total, final, but an Empire without an outside has also reached a state of complete inertia. Crusoe’s Empire of one subject is what I understand as “a closed system”… The 2nd law of thermodynamics maintains that without an external source of energy, all closed systems will tend towards a condition of inactivity.
Eventually, the bacteria in the petri dish will multiply, eating up all the nutrients until a final state of equilibrium is reached, at which point the system will collapse in on itself: entropy cannot be avoided indefinitely. The term ‘negative entropy’ is often applied to living organisms because they seem to be able to ‘beat’ the process of entropy, but this is as much an illusion as the illusion of Crusoe’s Kingdom: negative entropy occurs at small scales, over small periods of time. Entropy is highly probable: the order of living beings is not. Umberto Eco: “Consider, for example, the chaotic effect… of a strong wind on the innumerable grains of sand that compose a beach: amid this confusion, the action of a human foot on the surface of the beach constitutes a complex interaction of events that leads to the statistically very improbable configuration of a footprint.” [2] The footprint in Eco’s example is a negative entropy event: the system of shifting sands is lent a temporary order by the cohesive action of the human foot. In physical terms, the footprint stands as a memory of the foot’s impression. The 2nd law of thermodynamics establishes a relationship between entropy and information: memory remains as long as its mark. Given time, the noisy wind and chaotic waves will cause even the strongest footprint to fade. A footprint is a highly improbable event. Before you read on, watch this scene from Luis Buñuel’s Robinson Crusoe (1954):

The footprint, when it first appears on the island, terrifies Crusoe as a mark of the outsider, but soon, realising what this outsider might mean for the totality of his Kingdom, Robinson begins the process of pulling the mark inside his conceptions: “Sometimes I fancied it must be the Devil; and reason joined in with me upon this supposition. For how should any other thing in human shape come into the place? Where was the vessel that brought them? What marks were there of any other footsteps? And how was it possible a man should come there?” [3] In the novel, it is only on the third day that Crusoe re-visits the site to compare his own foot with the print. The footprint is still there on the beach after all this time, a footprint Crusoe now admits is definitely not his own. This chain of events affords us several allegorical tools: firstly, that of the Devil, which Crusoe believes to be the only rational explanation for the print. This land, which has been Crusoe’s own for almost two decades, is solid, unchanging and eternal. Nothing comes in nor goes beyond its shores, yet its abundance of riches has served Crusoe perfectly well: seemingly infinite riches for a Kingdom’s only inhabitant. Even the footprint, left for several days, remains upon Crusoe’s return. Like the novel of which it is a part, the reader of the mark may revisit the site of this unlikely incident again and again, each time drawing more meanings from its appearance. Before Crusoe entertains that the footprint might be that of “savages of the mainland” he eagerly believes it to be Satan’s, placed there deliberately to fool him. Crusoe revisits the footprint, in person and then, as it fades, in his own memory. He ‘reads’ the island, attributing meanings to marks he discovers that go far beyond what is apparent.
As Susan Stewart has noted: “In allegory the vision of the reader is larger than the vision of the text; the reader dreams to an excess, to an overabundance.” [4] Simon O’Sullivan, following from Deleuze, takes this further, arguing that in his isolation, a world free from ‘others’, Crusoe has merged with, become, the island. The footprint is a mark that must be recuperated if Crusoe’s identity, his “power of will”, is to be maintained. An outsider must have caused the footprint, but Crusoe is only capable of reading in the mark something about himself. The evocation of a Demon, then, is Crusoe’s way of re-totalising his Empire, of removing the ‘other’ from his self-subjective identification with the island. So, how does this relate to thermodynamics? To answer that I will need to tell the tale of a second Demon, more playful even than Crusoe’s. In his 1871 book, Theory of Heat, James Clerk Maxwell designed a thought experiment to test the 2nd law of thermodynamics. Maxwell imagines a microscopic being able to sort atoms bouncing around a closed system into two categories: fast and slow. If such a creature did exist, it was argued, no work would be required to decrease the entropy of a closed system. By sorting unlikely footprints from the chaotic arrangement of sand particles, Maxwell’s Demon, as it would later become known, appeared to contradict the law Maxwell himself had helped to develop. One method of solving the apparent paradox was devised by Charles H. Bennett, who recognised that the Demon would have to remember where he placed the fast and slow particles. Here, once again, the balance between the order and disorder of a system comes down to the balance between memory and information. As the demon decreases the entropy of its environment, so it must increase the entropy of its memory. The information required by the Demon acts like a noise in the system.
The laws of physics had stood up under scrutiny, resulting in a new branch of science we now know as ‘Information Theory’. Maxwell’s Demon comes from an old view of the universe, “fashioned by divine intervention, created for man and responsive to his will” [5]. Information Theory represents a threshold, a revelation that the “inhuman force of increasing entropy, [is] indifferent to man and uncontrollable by human will.” [6] Maxwell’s Demon shows that the law of entropy has only a statistical certainty, that nature orders only on small scales and, that despite any will to control, inertia will eventually be reached. Developed at the peak of the British Empire, thermodynamics was sometimes called “the science of imperialism”, as Katherine Hayles has noted: “…to thermodynamicists, entropy represented the tendency of the universe to run down, despite the best efforts of British rectitude to prevent it from doing so… The rhetoric of imperialism confronts the inevitability of failure. In this context, entropy represents an apparently inescapable limit on the human will to control.” [7] Like Maxwell, Crusoe posits a Demon, with faculties similar in kind to his own, to help him quash his “terror of mind”. Crusoe’s fear is not really about outsiders coming in, the terror he feels comes from the realisation that the outsiders may have been here all along, that in all the 20 years of his isolation those “savages of the mainland” may have visited his island time and again. It is not an outside ‘other’ that disturbs and reorganises Crusoe’s Kingdom. A more perverse logic is at work here, and once again Crusoe will have to restructure his imperial order from the inside out. Before you read on, watch another scene from Luis Buñuel’s Robinson Crusoe (1954):

Jacques Rancière prepares for us a parable. A student who is illiterate, after living a fulfilled life without text, one day decides to teach herself to read. Luckily she knows a single poem by heart and procures a copy of that poem, presumably from a trusted source, by which to work. By comparing her memory of the poem, sign by sign, word by word, with the text of the poem she can, Rancière believes, finally piece together a foundational understanding of her written language: “From this ignoramus, spelling out signs, to the scientist who constructs hypotheses, the same intelligence is always at work – an intelligence that translates signs into other signs and proceeds by comparisons and illustrations in order to communicate its intellectual adventures and understand what another intelligence is endeavouring to communicate to it… This poetic labour of translation is at the heart of all learning.” [8] What interests me in Rancière’s example is not so much the act of translation as the possibility of mis-translation. Taken in light of The Ignorant Schoolmaster we can assume that Rancière is aware of the wide gap that exists between knowing something and knowing enough about something for it to be valuable. How does one calculate the value of a mistake? The ignoramus has autonomy, but she is effectively blind to the quality and make-up of the information she parses. If she makes a mistake in her translation of the poem, this mistake can be one of two things: it can be a blind error, or, it can be a mutation. In information theory, change within a closed system is understood to be the product of ‘noise’. The amount of change contributed by noise is called ‘equivocation’. If noise contributes to the reorganisation of a system in a beneficial way, for instance if a genetic mutation in an organism results in the emergence of an adaptive trait, then the equivocation is said to be ‘autonomy-producing’.
Too much noise is equivalent to too much information, a ‘destructive’ equivocation, leading to chaos. This balance is how evolution functions. An ‘autonomy-producing’ mutation will be blindly passed on to an organism’s offspring, catalysing the self-organisation of the larger system (in this case, the species). All complex – what are called ‘autopoietic’ – systems inhabit this fine divide between noise and inertia. Given just the right balance of noise recuperated by the system, and noise filtered out by the system, a state of productive change can be maintained, and a state of inertia can be avoided, at least, for a limited time. According to Umberto Eco, in ‘The Open Work’: “To be sure, this word information in communication theory relates not so much to what you do say, as to what you could say… In the end… there is no real difference between noise and signal, except in intent.” [9] This rigid delineator of intent is the driving force of our contemporary communication paradigm. Information networks underpin our economic, political and social interactions: the failure to communicate is to be avoided at all costs. All noise is therefore seen as a problem. These processes, according to W. Daniel Hillis, define “the essence of digital technology, which restores signals to near perfection at every stage.” [10] To go back to Umberto Eco then, we appear to be living in a world of “do say” rather than “could say”. Maintenance of the network and the routines of error management are our primary economic and political concern: control the networks and the immaterial products will manage themselves. The modern network paradigm acts like a Maxwell Demon, categorising information as either pure signal or pure noise.
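Shannon's 'equivocation' – the information a noisy channel destroys – can be computed for the simplest case, a binary symmetric channel that flips each bit with probability p (a hedged sketch; the uniform-input assumption and the symbols are mine, not the essay's):

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0, 1):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# For a binary symmetric channel with uniformly random input bits, the
# equivocation H(X|Y) equals h(p): at p = 0 nothing is lost to noise;
# at p = 0.5 the noise destroys every bit of the signal.
for p in (0.0, 0.1, 0.5):
    print(f"flip probability {p}: equivocation {h(p):.3f} bits")
```

The capacity left over, 1 − h(p), is what the channel can still reliably carry, which is why "too much noise" (p near 0.5) is equivalent to no communication at all.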
As Mark Nunes has noted, following the work of Deleuze and Guattari: “This forced binary imposes a kind of violence, one that demands a rationalisation of all singularities of expressions within a totalising system… The violence of information is, then, the violence of silencing or making to speak that which cannot communicate.” [11] To understand the violence of this binary logic, we need go no further than Robinson Crusoe. Friday’s questions are plain spoken, but do not adhere to the “do say” logic of Crusoe’s conception. In the novel, Crusoe’s approach to Friday becomes increasingly one-sided, until Friday utters little more than ‘yes’ and ‘no’ answers, “reducing his language to a pure function of immediate context and perpetuating a much larger imperialist tradition of levelling the vox populi.” [12] Any chance of what Friday “could say” has been violently obliterated. The logic of Rancière’s ignoramus, and of Crusoe’s levelling of Friday’s speech, are logics of imperialism: reducing the possibility of noise and information to an either/or, inside/outside, relationship. Mark Nunes again: “This balance between total flow and total control parallels Deleuze and Guattari’s discussion of a regime of signs in which anything that resists systematic incorporation is cast out as an asignifying scapegoat, ‘condemned as that which exceeds the signifying regime’s power of deterritorialisation.’” [13] In the system of communication these “asignifying” events are not errors, in the common sense of the word. Mutation names a randomness that redraws the territory of complex systems. The footprint is the mark that reorganised the Empire. In Rancière’s parable, rather than note her intent to decode the poem, we should hail the moment when the ignoramus fails as her autonomous moment.
In a world where actants “translate signs into other signs and proceed by comparison and illustration” [14] the figures of information and communication are made distinct not by the caprice of those who control the networks, nor the desires of those who send and receive the messages, but by mutation itself. Michel Foucault, remarking on the work of Georges Canguilhem, drew the conclusion that the very possibility of mutation, rather than existing in opposition to our will, was what human autonomy was predicated upon: “In this sense, life – and this is its radical feature – is that which is capable of error… Further, it must be questioned in regard to that singular but hereditary error which explains the fact that, with man, life has led to a living being that is never completely in the right place, that is destined to ‘err’ and to be ‘wrong’.” [15] In his writings on the history of heredity, The Logic of Life, François Jacob lingers on another Demon in the details, fashioned by René Descartes in his infamous meditation on human knowledge. Jacob positions Descartes’ meditation in a period of explosive critical thought focussed on the very ontology of ‘nature’: “For with the arrival of the 17th Century, the very nature of knowledge was transformed. Until then, knowledge had been grafted on God, the soul and the cosmos… What counted [now] was not so much the code used by God for creating nature as that sought by man for understanding it.” [16] The infinite power of God’s will was no longer able to bend nature to any whim. If man were to decipher nature, to reveal its order, Descartes surmised, it was with the assurance that “the grid will not change in the course of the operation” [17]. For Descartes, the evil Demon is a metaphor for deception, premised on the understanding that, underlying that deception, nature had a certainty. God may well have given the world its original impetus, have designed its original make-up, but that make-up could not be changed.
The network economy has today become the grid of operations onto which we map the world. Its binary restrictions predicate a logic of minimal error and maximum performance: a regime of control that drives our economic, political and social interdependencies. Trapped within his imperial logic, Robinson Crusoe’s levelling of inside and outside, his ruthless tidying of Friday’s noisy speech into a binary dialectic, disguises a higher order of reorganisation. As readers navigating the narrative we are keen to recognise the social changes Defoe’s novel embodies in its short-sighted central character. Perhaps, though, the most productive way to read this fiction is to allegorise it as an outside perspective on our own time. Gathering together the fruits of research, I am often struck by the serendipitous quality of so many discoveries. In writing this mini-paper I have found it useful to engage with these marks, which become like demonic footprints, mutations in my thinking. Comparing each side by side, I hope to find, in the words of Michel Foucault: “…a way from the visible mark to that which is being said by it and which, without that mark, would lie like unspoken speech, dormant within things.” [18]

References & Bibliography [1] Daniel Defoe, Robinson Crusoe, Penguin classics (London: Penguin Books, 2001).

[2] Umberto Eco, The open work (Cambridge: Harvard University Press, n.d.).

[3] Defoe, Robinson Crusoe.

[4] Susan Stewart, On longing: narratives of the miniature, the gigantic, the souvenir, the collection (Duke University Press, 1993).

[5] N. Katherine Hayles, “Maxwell’s Demon and Shannon’s Choice,” in Chaos bound: orderly disorder in contemporary literature and science (Cornell University Press, 1990).

[6] Ibid.

[7] Ibid.

[8] Jacques Rancière, The emancipated spectator (London: Verso, 2009).

[9] Umberto Eco, The open work (Cambridge: Harvard University Press, n.d.). (My emphasis)

[10] W. Daniel Hillis, The pattern on the stone: the simple ideas that make computers work, 1st ed. (New York: Basic Books, 1999).

[11] Mark Nunes, Error: glitch, noise, and jam in new media cultures (Continuum International Publishing Group, 2010).

[12] Susan Stewart, On longing: narratives of the miniature, the gigantic, the souvenir, the collection (Duke University Press, 1993).

[13] Nunes, Error.

[14] Rancière, The emancipated spectator.

[15] Michel Foucault, “Life: Experience and Science,” in Aesthetics, method, and epistemology (The New Press, 1999).

[16] François Jacob, The logic of life: a history of heredity; the possible and the actual (Penguin, 1989).

[17] Ibid.

[18] Michel Foucault, The order of things: an archaeology of the human sciences, 2003.

]]>
Wed, 07 Dec 2011 08:50:14 -0800 http://machinemachine.net/text/research/a-mark-on-crusoes-island
<![CDATA[James Gleick’s History of Information]]> http://www.nytimes.com/2011/03/20/books/review/book-review-the-information-by-james-gleick.html

Gleick makes his case in a sweeping survey that covers the five millenniums of humanity’s engagement with information, from the invention of writing in Sumer to the elevation of information to a first principle in the sciences over the last half-century or so. It’s a grand narrative if ever there was one, but its key moment can be pinpointed to 1948, when Claude Shannon, a young mathematician with a background in cryptography and telephony, published a paper called “A Mathematical Theory of Communication” in a Bell Labs technical journal. For Shannon, communication was purely a matter of sending a message over a noisy channel so that someone else could recover it. Whether the message was meaningful, he said, was “irrelevant to the engineering problem.” Think of a game of Wheel of Fortune, where each card that’s turned over narrows the set of possible answers, except that here the answer could be anything: a common English phrase, a Polish surname, or just a set of license plate numbers.
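Shannon's measure makes the Wheel of Fortune analogy precise: each revealed card narrows the set of possible answers, and the information gained is the logarithm of that narrowing (a hypothetical illustration with made-up numbers, not drawn from Gleick's book):

```python
import math

# Information gained when N equally likely possibilities shrink to M:
# log2(N / M) bits.
def bits_gained(before, after):
    return math.log2(before / after)

# A revealed letter cuts 8,000 candidate phrases down to 500:
print(bits_gained(8000, 500))  # 4.0 bits
```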

]]>
Sun, 20 Mar 2011 05:41:08 -0700 http://www.nytimes.com/2011/03/20/books/review/book-review-the-information-by-james-gleick.html