MachineMachine /stream - tagged with article
<![CDATA[Falling into the Digital Divide: Encounters with the Work of Hito Steyerl]]>

The highly compressed, deteriorated poor image ‘mocks the promises of digital technology. Not only is it often degraded to the point of being just a hurried blur, one even doubts whether it could be called an image at all.’ The aesthetic affect of digital images thus stands in metonymically for the networks they navigate and the means by which those networks are exposed.

Tue, 21 May 2013 02:58:50 -0700
<![CDATA[Kids, unlike adults, think technology is fundamentally human]]>

With children so quick to embrace robotics, it’s clear that there’s a ton of potential for integrating intelligent technologies into learning environments. Besides, the idea of “exploring and creating” sounds a heck of a lot better than answering true/false questions out of a booklet. Clearly there are tons of new and interesting ways to learn, and technology is, in many ways, responsible for this.

Taking a deeper look at the stories the children created, the survey found that unlike many adults who see technology as separate from humanness, it seems that “kids tend to think of technology as fundamentally human: as a social companion that can entertain, motivate, and empower them in various contexts.”

Thu, 19 Jan 2012 02:13:06 -0800
<![CDATA[All Language Is Murder]]>

Americans love people who kill people. We don’t want to say it that way, and we don’t want to be killed ourselves (most of us, not really), but it’s obvious there’s a kind of outright worship of those who go beyond what is generally considered the extent of fair or appropriate or even human behavior. This type of human will actually perform what much of our entertainment seems interested in only simulating.

I’ve been thinking a lot lately about this in relation to language, how it seems harder and harder now not only to have any sort of ordered idea of what’s going on, but to simply remember much of anything. It seems hard lately for me to remember anything I read. I don’t know when that started happening, or if it was always that way, but a lot of the time now I can’t even really remember what I’m reading while I’m reading it. It feels like sentences come in and, like, disappear in there somewhere. Into a field of shit of me.

Wed, 14 Dec 2011 02:50:14 -0800
<![CDATA[Mouse Trap: The dangers of using one lab animal to study every disease]]>

"I began to realize that the ‘control’ animals used for research studies throughout the world are couch potatoes," he tells me. It's been shown that mice living under standard laboratory conditions eat more and grow bigger than their country cousins. At the National Institute on Aging, as at every major research center, the animals are grouped in plastic cages the size of large shoeboxes, topped with a wire lid and a food hopper that's never empty of pellets. This form of husbandry, known as ad libitum feeding, is cheap and convenient since animal technicians need only check the hoppers from time to time to make sure they haven’t run dry. Without toys or exercise wheels to distract them, the mice are left with nothing to do but eat and sleep—and then eat some more.

Tue, 13 Dec 2011 12:34:32 -0800
<![CDATA[Computing Machinery and Intelligence (by Alan Turing)]]>

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

Mon, 31 Oct 2011 06:53:59 -0700
<![CDATA[Accuracy takes power: one man's 3GHz quest to build a perfect SNES emulator]]>

Emulators for playing older games are immensely popular online, with regular arguments breaking out over which emulator is best for which game. Today we present another point of view from the gentleman who created the Super Nintendo emulator bsnes. He wants to share his thoughts on the most important part of the emulation experience: accuracy.

It doesn't take much raw power to play Nintendo or SNES games on a modern PC; emulators could do it in the 1990s with a mere 25MHz of processing power. But emulating those old consoles accurately—well, that's another challenge entirely; accurate emulators may need up to 3GHz of power to faithfully recreate aging tech. In this piece we'll take a look at why accuracy is so important for emulators and why it's so hard to achieve.

Put simply, accuracy is the measure of how well emulation software mimics the original hardware. Apparent compatibility is the most obvious measure of accuracy—will an old game run on my new emulator?
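The stakes of that distinction can be sketched with a toy model (not from the article; the components and names here are purely illustrative). A fast emulator might run the "CPU" for a whole frame and then render; a cycle-accurate one interleaves CPU and video hardware every cycle, so a mid-frame register write lands on the right pixel:

```python
# Toy sketch: why cycle-level interleaving matters for accuracy.
# A "CPU" writes a palette register mid-frame; a "PPU" renders one
# pixel per cycle using whatever the palette value is at that moment.

def run_coarse(frame_cycles, write_at, old, new):
    """Fast emulation: run the whole CPU frame first, then render.
    The mid-frame palette change is lost to the renderer."""
    palette = old
    for cycle in range(frame_cycles):      # CPU pass, all at once
        if cycle == write_at:
            palette = new
    return [palette] * frame_cycles        # PPU pass, too late

def run_accurate(frame_cycles, write_at, old, new):
    """Cycle-accurate emulation: interleave CPU and PPU each cycle,
    so pixels drawn before the write still use the old palette."""
    palette = old
    pixels = []
    for cycle in range(frame_cycles):
        if cycle == write_at:              # CPU step for this cycle
            palette = new
        pixels.append(palette)             # PPU step for this cycle
    return pixels

coarse = run_coarse(8, write_at=5, old="red", new="blue")
accurate = run_accurate(8, write_at=5, old="red", new="blue")
# The coarse run paints the whole scanline with the new value; the
# accurate run preserves the first five pixels -- the kind of mid-frame
# raster effect that fast emulators get wrong and games depend on.
```

The coarse loop is far cheaper, which is exactly the article's point: the accurate interleaving multiplies the work per emulated cycle, and that is where the 3GHz goes.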

Thu, 11 Aug 2011 04:25:22 -0700
<![CDATA[Digital Autonomy]]>

“Is an ephemeral image, a moment in a streaming video, a thing? Or if the image is frozen as a still, is it now a thing? Is a dream, a city, a sensation, a derivative, an ideology, a decay, a kiss? I haven’t the least idea.” Extract from Daniel Miller, Materiality: An Introduction [1]

In A Thing Like You and Me, Hito Steyerl plays out her ongoing obsession with the copy, skirting briefly over her wider, yet more implicit concern: the digital. Echoing the work of Bruno Latour, Steyerl acknowledges the materiality by which images are created, scarred and destroyed in order to get to a much deeper, ontological question about autonomy. Avoiding the kind of subject/object purification Latour warns about, Steyerl asks us to consider images as something we can participate in, even model our autonomy on. Is it possible to become a thing? And where does Hito Steyerl get off calling us ‘things’ in the first place?

Sat, 11 Jun 2011 04:02:00 -0700
<![CDATA[A Thing Like You and Me]]>

by Hito Steyerl

What happens to identification at this point? Who can we identify with? Of course, identification is always with an image. But ask anybody whether they’d actually like to be a JPEG file. And this is precisely my point: if identification is to go anywhere, it has to be with this material aspect of the image, with the image as thing, not as representation. And then it perhaps ceases to be identification, and instead becomes participation.[3] I will come back to this point later.

But first of all: why should anybody want to become this thing—an object—in the first place?

Wed, 11 May 2011 03:03:10 -0700
<![CDATA[F/X PORN: David Foster Wallace]]>

What's the difference between a Hollywood special-effects blockbuster like "Terminator 2" and a hard-core porn film? Very little, claims novelist, essayist and footnote fetishist David Foster Wallace.

1990s moviegoers who have sat clutching their heads in both awe and disappointment at movies like "Twister" and "Volcano" and "The Lost World" can thank James Cameron's "Terminator 2: Judgment Day" for inaugurating what's become this decade's special new genre of big-budget film: Special Effects Porn. "Porn" because, if you substitute F/X for intercourse, the parallels between the two genres become so obvious they're eerie. Just like hard-core cheapies, movies like "Terminator 2" and "Jurassic Park" aren't really "movies" in the standard sense at all. What they really are is half a dozen or so isolated, spectacular scenes -- scenes comprising maybe twenty or thirty minutes of riveting, sensuous payoff -- strung together via another sixty to ninety minutes

Tue, 18 Jan 2011 03:57:21 -0800
<![CDATA[How Video Games Are Infiltrating—and Improving—Every Part of Our Lives]]>

Games are sneaking into every part of our lives -- at home, school, and work. Cisco, IBM, Microsoft, and even the Army depend on games. And pretty soon, you'll be a part of one. We guarantee it.

If Schell's vision seems a little, well, out there, consider this: Much of what he discusses already exists, having infiltrated our culture and our business landscape in ways that are barely recognized. Sure, 97% of 12- to 17-year-olds play computer games, but so do almost 70% of the heads of American households, according to the Entertainment Software Association. The average gamer is 34 and has been at it a dozen years; 40% are women. One survey found that 35% of C-suite executives play video games.

Wed, 08 Dec 2010 07:46:00 -0800
<![CDATA[The Men Who Stole the World]]>,29239,2032304_2032746_2032903,00.html

A decade ago, four young men changed the way the world works. They did this not with laws or guns or money but with software: they had radical, disruptive ideas, which they turned into code, which they released on the Internet for free. These four men, not one of whom finished college, laid the foundations for much of the digital-media environment we currently inhabit. Then, for all intents and purposes, they vanished.

In 1999 a Northeastern University freshman named Shawn Fanning wrote Napster, thereby pioneering peer-to-peer file sharing and a new paradigm for consuming media without the intermediary of a big studio or retailer. TIME put him on its cover, as did FORTUNE. He was 19 years old. (See the 50 Best Inventions of 2010.)

That same year, a Norwegian teenager named Jon Lech Johansen, working with two other programmers whose identities are still unknown, wrote a program that could decrypt commercial DVDs, and he became internationally infamous as "DVD Jon." He was 15.

Wed, 01 Dec 2010 04:22:00 -0800
<![CDATA[Conflict or Cooperation?]]>

Among the theorists who jumped into the market for models of the future, three stood out: Francis Fukuyama, Samuel Huntington, and John Mearsheimer. Each made a splash with a controversial article, then refined the argument in a book -- Fukuyama in The End of History and the Last Man, Huntington in The Clash of Civilizations and the Remaking of World Order, and Mearsheimer in The Tragedy of Great Power Politics. Each presented a bold and sweeping vision that struck a chord with certain readers, and each was dismissed by others whose beliefs were offended or who jumped to conclusions about what they thought the arguments implied. (Reactions were extreme because most debate swirled around the bare-bones arguments in the initial articles rather than the full, refined versions in the later books. This essay aims to give the full versions of all three arguments their due.)

None of the three visions won out as the new conventional wisdom, although Fukuyama's rang truest when the Berlin Wall

Fri, 05 Nov 2010 05:54:00 -0700
<![CDATA[Against humanism]]>

by Mary Midgley

Does the term "humanism" really stand for a new and better form of religion? If so, what is that religion? Or is it something designed as a cure for religion itself, a way to get rid of it on Christopher Hitchens's principle that "religion poisons everything"?

Many people, no doubt, agree with Hitchens. But Auguste Comte, the founding father of modern humanism, would not have been one of them. For him, "humanism" was a word parallel to "theism". It just altered the object worshipped, substituting humanity for God. He called it the "religion of humanity" and devised ritual forms for it that were close to traditional Christian ones. He thought – and many others have agreed with him – that the trouble with religion was simply its having an unreal supernatural object, God. Apart from this, the attitudes and institutions characteristic of religion itself seemed to him valuable, indeed essential. And he certainly had no wish to get rid of the habit of worship, only to give

Thu, 04 Nov 2010 06:58:00 -0700
<![CDATA[In Defense of the Poor Image]]>

by Hito Steyerl

The poor image is a copy in motion. Its quality is bad, its resolution substandard. As it accelerates, it deteriorates. It is a ghost of an image, a preview, a thumbnail, an errant idea, an itinerant image distributed for free, squeezed through slow digital connections, compressed, reproduced, ripped, remixed, as well as copied and pasted into other channels of distribution.

The poor image is a rag or a rip; an AVI or a JPEG, a lumpen proletarian in the class society of appearances, ranked and valued according to its resolution. The poor image has been uploaded, downloaded, shared, reformatted, and reedited. It transforms quality into accessibility, exhibition value into cult value, films into clips, contemplation into distraction. The image is liberated from the vaults of cinemas and archives and thrust into digital uncertainty, at the expense of its own substance. The poor image tends towards abstraction: it is a visual idea in its very becoming.

Thu, 28 Oct 2010 07:27:00 -0700
<![CDATA[All Programs Considered by Bill McKibben]]>

Radio receives little critical attention. Of the various methods for communicating ideas and emotions—books, newspapers, visual art, music, film, television, the Web—radio may be the least discussed, debated, understood. This is likely because it serves largely as a transmission device, a way to take other art forms (songs, sermons) and spread them out into the world. Its other uses can be fairly pedestrian too: ball games and repetitive, if remarkably effective, right-wing commercial talk radio. Rush Limbaugh is the radio ratings champ; according to the industry’s trade journal he reaches 14.25 million listeners in an average week. Sean Hannity, working the same turf, trails him slightly.

But an equally large audience turns to the part of the dial where public radio in its various forms can be found. Public radio claims at least 5 percent of the radio market. National Public Radio’s flagship news programs, Morning Edition and All Things Considered, featuring news and commentary along

Thu, 28 Oct 2010 03:50:00 -0700
<![CDATA[The beastliness of modern art]]>

Taxidermy (and the chemistry of the morgue) has been something close to a cult obsession with our contemporary gang. We get it, we get it, you often want to howl in the presence of some of the postmodern confections, now show me something you’ve really pondered, not just a high-school truism about the world drowning in the bloody slops of the abattoir. And back they come as if to say, no, that’s not it at all, actually; the reason we dip carcases into formaldehyde, why we (or our hirelings) are so busy stuffin’ ’n’ stitchin’, is because we’re really making a point about art itself; the unselfconsciousness with which all representation is a form of gussied-up taxidermy; the fixing of fugitive moments. Art may be the victory over decay but, guess what, the contemporary artist protests, it can’t be done. The end result of all that effort is merely a sub-species of deadness. So the contrary gesture is to foreground precisely the repugnant processes that the fake aesthetic of the perfect…

Sun, 17 Oct 2010 14:35:00 -0700
<![CDATA[The Artificial Ape: How Technology Changed the Course of Human Evolution]]>

There has been a rash of books on human evolution in recent years, claiming that it was driven by art (Denis Dutton: The Art Instinct), cooking (Richard Wrangham: Catching Fire), sexual selection (Geoffrey Miller: The Mating Mind). Now, Timothy Taylor, reader in archaeology at the University of Bradford, makes a claim for technology in general and, in particular, the invention of the baby sling – not, as you may have thought, in the 1960s but more than 2m years ago.

All these theories and speculations are in truth complementary facets of an emerging Grand Universal Theory of Human Origins. The way they overlap, reinforce one another and suggest new leads is too striking to miss. What they have in common is a reversal of the received idea of evolution through natural selection. In this, a mutation takes place that happens to be useful; it is retained and spreads through the population. In the new theory, proto-human beings, through innovative technologies, created the conditions that…

Wed, 08 Sep 2010 03:20:00 -0700
<![CDATA[Art and Thingness, Part Two: Thingification]]>

by Sven Lütticken

In a text written in response to the upheavals of the Russian Revolution and the early Soviet avant-garde, Carl Einstein claimed that tradition “piles up in the object”; that the object is a “medium for passive thinking,” bound to tradition and bourgeois property relations; and that in order to “assert the human person, objects, which are preserve jars, must be destroyed.” Going so far as to state that “every destruction of objects is justified,” Einstein proclaimed a “dictatorship of the thingless.”

In a Latourian manner, one might present the recent turn to the thing as a break with the project of modernity: after all, isn’t modernity in theory and in praxis the desperate attempt to (re)form the world in accordance with the will of an autonomous, imperious subject that turns things into ordered and emaciated objects?

Mon, 19 Jul 2010 08:08:00 -0700
<![CDATA[Essay: Technology changes how art is created and perceived]]>,0,7851757.story

It used to be so simple. A book had an author; a film, a screenwriter and director; a piece of music, a composer and performer; a painting or sculpture, an artist; a play, a playwright. You could assume that the work actually erupted more or less full-blown from these folks. In addition, the book, film, musical composition, painting or play was a discrete object or event that existed in time and space. You could hold it in your hands or watch or listen to it in a theater or your living room. It didn't really change over time unless the artist decided to revise it or a performer reinterpreted it.

Well, not any more. For years now numerous observers have described the process by which the very fundaments of art are changing from the old principle of one man, one creation. Songs have remixes through which anyone so disposed can alter the original music; videos have mash-ups that use footage to reposition and change the original meaning; the visual arts have communal canvases and websites

Sun, 18 Jul 2010 05:20:00 -0700
<![CDATA[The Agnostic Cartographer]]>

One fateful day in early August, Google Maps turned Arunachal Pradesh Chinese. It happened without warning. One minute, the mountainous border state adjacent to Tibet was labeled with its usual complement of Indian place-names; the next it was sprinkled with Mandarin characters, like a virtual annex of the People’s Republic.

The error could hardly have been more awkward. Governed by India but claimed by China, Arunachal Pradesh has been a source of rankling dispute between the two nations for decades. Google’s sudden relabeling of the province gave the appearance of a special tip of the hat toward Beijing. Its timing, moreover, was freakishly bad: the press noticed that Google’s servers had started splaying Mandarin place-names all over the state only a few hours before Indian and Chinese negotiating teams sat down for talks in New Delhi to work toward resolving the delicate border issue.

Google rushed to admit its mistake, but not before a round of angry Indian blog posts and news articles…

Sun, 11 Jul 2010 16:33:00 -0700