MachineMachine /stream - tagged with alan-turing https://machinemachine.net/stream/feed en-us http://blogs.law.harvard.edu/tech/rss LifePress therourke@gmail.com <![CDATA['A Perfect and Beautiful Machine': What Darwin's Theory of Evolution Reveals About Artificial Intelligence - Atlantic Mobile]]> http://m.theatlantic.com/technology/archive/2012/06/-a-perfect-and-beautiful-machine-what-darwins-theory-of-evolution-reveals-about-artificial-intelligence/258829/

Charles Darwin and Alan Turing, in their different ways, both homed in on the same idea: the existence of competence without comprehension. Francis Crick and James Watson closed their epoch-making paper on the structure of DNA with a single deliciously diffident sentence.

]]>
Tue, 29 Jul 2014 03:06:07 -0700 http://m.theatlantic.com/technology/archive/2012/06/-a-perfect-and-beautiful-machine-what-darwins-theory-of-evolution-reveals-about-artificial-intelligence/258829/
<![CDATA[Turing Test success marks milestone in computing history]]> http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx

An historic milestone in artificial intelligence set by Alan Turing - the father of modern computer science - has been achieved at an event organised by the University of Reading.

]]>
Sun, 08 Jun 2014 10:11:13 -0700 http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx
<![CDATA[Alan Turing's Body - Robinson Meyer - The Atlantic]]> http://www.theatlantic.com/technology/archive/2013/12/alan-turings-body/282641/

There's no command-Z for this great man's suffering. On Christmas Eve, Queen Elizabeth II pardoned the computer scientist Alan Mathison Turing. Nearly all of the modern world is constructed on Turing’s accomplishments.

]]>
Sun, 29 Dec 2013 09:42:25 -0800 http://www.theatlantic.com/technology/archive/2013/12/alan-turings-body/282641/
<![CDATA[An Ontology of Everything on the Face of the Earth]]> http://www.alluvium-journal.org/2013/12/04/an-ontology-of-everything-on-the-face-of-the-earth/

This essay was originally published as part of a special issue of Alluvium Journal on Digital Metaphors, edited by Zara Dinnen and featuring contributions from Rob Gallagher and Sophie Jones. John Carpenter’s 1982 film, The Thing, is a claustrophobic sci-fi thriller, exhibiting many hallmarks of the horror genre. The film depicts a sinister turn for matter, where the chaos of the replicating, cancerous cell is expanded to the human scale and beyond. In The Thing we watch as an alien force terrorises an isolated Antarctic outpost. The creature exhibits an awesome ability to imitate, devouring any creature it comes across before giving birth to an exact copy in a burst of blood and protoplasm. The Thing copies cell by cell and its process is so perfect – at every level of replication – that the resultant simulacrum speaks, acts and even thinks like the original. The Thing is so relentless, its copies so perfect, that the outpost’s Doctor, Blair, is sent mad at the implications: Blair: If a cell gets out it could imitate everything on the face of the Earth… and it’s not gonna stop!!! Based on John W. Campbell’s 1938 novella, Who Goes There?, Carpenter’s film revisits a gothic trope, as numerous in its incarnations as are the forms it is capable of taking. In Campbell’s original novella, the biologically impure is co-inhabited by a different type of infection: an infection of the Antarctic inhabitants’ inner lives. Plucked from an icy grave, The Thing sits, frozen solid, in a dark corner of the outpost, drip dripping towards re-animation. Before its cells begin their interstitial jump from alien to earthly biology, it is the dreams of the men that become infected: ‘So far the only thing you have said this thing gave off that was catching was dreams. I’ll go so far as to admit that.’ An impish, slightly malignant grin crossed the little man’s seamed face. ‘I had some, too. So. It’s dream-infectious. No doubt an exceedingly dangerous malady.’ (Campbell)

The Thing’s voracious drive to consume and imitate living beings calls to mind Freud’s uncanny: the dreadful creeping horror that dwells between homely and unhomely. According to Ernst Jentsch, whose work Freud references in his study, the uncanny is kindled, ‘when there is intellectual uncertainty whether an object is alive or not, and when an inanimate object becomes too much like an animate one’ (Grenville 233). A body in the act of becoming: John W. Campbell’s novella depicts The Thing as a monstrous body that “swallows the world and is itself swallowed by the world”

In the original novella, The Thing is condensed as much from the minds of the men as from its own horrific, defrosting bulk. A slowly surfacing nightmare that acts to transform alien matter into earthly biology also has the effect of transferring the inner, mental lives of the men into the resultant condensation. John W. Campbell had no doubts that The Thing could become viscous, mortal human flesh, but in order to truly imitate its prey, the creature must infect and steal inner life too, pulling ghosts, kicking and screaming, out of their biological machines. As a gothic figure, Campbell’s Thing disrupts the stable and integral vision of human being, of self-same bodies housing ‘unitary and securely bounded’ (Hurley 3) subjectivities, identical and extensive through time. John W. Campbell’s characters confront their anguish at being embodied: their nightmares are literally made flesh. As Kelly Hurley reminds us in her study on The Gothic Body, Mikhail Bakhtin noted: The grotesque body… is a body in the act of becoming. It is never finished, never completed; it is continually built, created, and builds and creates another body. Moreover, the body swallows the world and is itself swallowed by the world (Hurley 28). Each clone’s otherness is an uncanny exposure of the abject relationship we endure with ourselves as vicarious, fragmented, entropic forms. In the 44 years between the novella and John Carpenter’s 1982 film, there were many poor clones of The Thing depicted in cinema. Films such as Invasion of the Body Snatchers (1956) and It Came from Outer Space (1953) are replete with alien doppelgangers, abject human forms, cast away very much as in gothic tradition. Howard Hawks’s film, The Thing from Another World (1951), the first to explicitly translate Who Goes There?, completely disfigures Campbell’s story. The resultant monster is nothing more than what one character calls ‘an intellectual carrot’, grown from alien cells in a laboratory. The film is worth considering, though, for its Cold War undertones. Recast in an Arctic military base, Hawks’s Thing is an isolated monster set against a small, well-organised army of cooperative men. Faced with disaster, the men group together, fighting for a greater good than each of them alone represents.

Cinematic clones of The Thing: 1950s American Science Fiction films like It Came From Outer Space and Invasion of the Body Snatchers are replete with alien doppelgangers and abject human forms [Images used under fair dealings provisions] The metaphor of discrete cells coordinating into autopoietic organisms does not extend to the inhabitants of the isolated Antarctic outpost in the original short story, nor in the 1982 version. Rather than unite against their foe, they begin to turn on each other, never knowing who might be The Thing. In a series of enactments of game theory, the characters do piece together a collective comprehension: that if The Thing is to eventually imitate ‘everything on the face of the Earth’ it must not show itself now, lest the remaining humans group together and destroy it. The Thing’s alien biology calls to mind the original design of the internet, intended, according to Michael Hardt and Antonio Negri: …to withstand military attack. Since it has no center and almost any portion can operate as an autonomous whole, the network can continue to function even when part of it has been destroyed. The same design element that ensures survival, the decentralisation, is also what makes control of the network so difficult (Hardt and Negri 299). The novella Who Goes There? and the film, The Thing, sit either side of a pivotal era in the advancement of information technology. How a life form or a biological computer works is immaterial to the behaviours they present to an observer. John Carpenter’s The Thing explores the fulfilment of Alan Turing’s ‘Imitation Game.’ Moving away from Campbell’s original appeal to telepathy and a mind/body split, the materialist vision of Carpenter’s film confronts us with a more fundamental horror. That every part of us is reducible to every other. In her book Refiguring Life, Evelyn Fox Keller argues that: As a consequence of the technological and conceptual transformations we have witnessed in the last three decades, the body itself has been irrevocably transformed… The body of modern biology, like the DNA molecule – and also like the modern corporate or political body – has become just another part of an informational network, now machine, now message, always ready for exchange, each for the other (Keller 117–118). Meanwhile, eschewing Martin Heidegger’s definition of a thing (in which objects are brought out of the background of existence through human use), Bill Brown marks the emergence of things through the encounter: As they circulate through our lives… we look through objects because there are codes by which our interpretive attention makes them meaningful, because there is a discourse of objectivity that allows us to use them as facts. A thing, in contrast, can hardly function as a window. We begin to confront the thingness of objects when they stop working for us… (Brown 4).

A thing or an object? Bill Brown argues that we look through objects but are confronted by things [Image by Marc PhOtOnQuAnTiQuE under a CC BY-NC-ND license] In his famous 1950 paper, Computing Machinery and Intelligence, Alan Turing introduced the notion that a computer is nothing more than a machine that functions by pretending to be other machines (Turing). Asking the question ‘can machines think?’, Turing replaced the ambiguity of ‘thought’ and ‘intelligence’ with imitation, proposing a test that avoided the need to know what was going on inside a machine, in favour of merely experiencing its affects. In a lecture entitled ‘Can Digital Computers Think?’, Turing expounds his point: It is not difficult to design machines whose behaviour appears quite random to anyone who does not know the details of their construction. Naturally enough the inclusion of this random element, whichever technique is used, does not solve our main problem, how to programme a machine to imitate a brain, or as we might say more briefly, if less accurately, to think. But it gives us some indication of what the process will be like. We must not always expect to know what the computer is going to do. We should be pleased when the machine surprises us, in rather the same way as one is pleased when a pupil does something which he had not been explicitly taught to do (Shieber 114–115). The mutability of Earthly life, its ability to err, to stumble upon novel strategies through random, blind chance, represents its most innate capacity. Biological life changes by mutation, passing those mutations on to the next generation, ad infinitum. The Thing, in opposition to this, can only become its other absolutely. There is no room for error, for mutation, for change or evolution: instead, the Thingly cadaver of Norris must protect its otherness in the only way it knows how: by transforming itself into a defensive form previously programmed and stored in its protoplasm. In terms of creativity it cannot escape its programming. Turing’s lecture hints at a further unsettling conclusion we can make: that even though novel behaviour may be consistent with error, from appearances alone it is impossible to distinguish something ontologically novel from a behaviour which has been programmed to appear as such. The Thing is a Universal Turing Machine, a post-digital plasma, encoded with the biological ticker-tape of a thousand alien worlds. Put more simply, in the words of protagonist John MacReady: MacReady: Somebody in this camp ain’t what he appears to be. [my emphasis]
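Turing’s point about the random element can be made concrete with a minimal sketch (my own illustration, not part of the essay): a linear congruential generator is an entirely deterministic rule, yet its output appears random to anyone who does not know the details of its construction. The constants, the seed and the helper name lcg below are arbitrary choices made for the sake of the example.

```python
# A minimal sketch (not from the essay): Turing's "random element".
# A linear congruential generator is fully deterministic, yet its output
# looks like chance to an observer who does not know its construction.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield an endless stream of pseudo-random integers from one fixed rule."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=42)
print([next(gen) % 10 for _ in range(12)])
# Looks random; knowing (a, c, m, seed) makes every value predictable.
```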

The “Gothicity” of matter? The digital metaphor of the Thing reveals that through imitation computers confer humanity upon us.

]]>
Mon, 09 Dec 2013 10:34:38 -0800 http://www.alluvium-journal.org/2013/12/04/an-ontology-of-everything-on-the-face-of-the-earth/
<![CDATA[Artist Profile: Erica Scourti]]> http://rhizome.org/editorial/2013/oct/8/artist-profile-erica-scourti

The latest in a series of interviews with artists who have developed a significant body of work engaged (in its process, or in the issues it raises) with technology. See the full list of Artist Profiles here. Daniel Rourke: Your recent work, You Could’ve Said, is described as "a Google keyword confessional for radio." I’ve often considered your work as having elements of the confession, partly because of the deeply personal stance you perform—addressing us, the viewer or listener, in a one-on-one confluence, but also through the way your work hijacks and exposes the unseen, often algorithmic, functions of social and network media. You allow Google keywords to parasitize your identity and in turn you apparently "confess" on Google’s behalf. Are you in search of redemption for your social-media self? Or is it the soul of the algorithm you wish to save? Erica Scourti: Or maybe the algorithm and social media soul are now so intertwined and interdependent that it makes little sense to even separate the two, in an unlikely fulfillment of Donna Haraway’s cyborg? Instead of having machines built into/onto us (Google glasses notwithstanding), the algorithms which parse our email content, Facebook behaviours, Amazon spending habits, and so on, don’t just read us, but shape us. I’m interested in where agency resides when our desires, intentions and behaviours are constantly being tracked and manipulated through the media and technology that we inhabit; how can we claim to have any "authentic" desires? Facebook’s "About" section actually states, "You can’t be on Facebook without being your authentic self," and yet this is a self that must fit into the predetermined format and is mostly defined by its commercial choices (clothing brands, movies, ice cream, whatever). And those choices are increasingly influenced by the algorithms through the ambient, personalized advertising that surrounds us. So in You Could’ve Said, which is written entirely in an instrumentalised form of language, i.e. Google’s AdWords tool, I’m relaying the impossibility of having an authentic feeling, or even a first-hand experience, despite the seemingly subjective, emotional content and tone. Google search stuff is often seen as reflective of a kind of cute "collective self" (hey, we all want to kill our boyfriends sometimes!) but perhaps it’s producing as much as reflecting us. It’s not just that everything’s already been said, and can be commodified, but that the devices we share so much intimate time with are actively involved in shaping what we consider to be our "selves," our identities. And yet, despite being entirely mediated, my delivery is "sincere" and heartfelt; I’m really interested in the idea of sincere, but not authentic. I think it’s the same reason spambots can have such unexpected pathos; they seem to "express" things in a sincere way, which suggests some kind of "soul" at work there, or some kind of agency, and yet they totally lack interiority, or authenticity. In this and other work of mine (especially Life in AdWords), dissonance is produced by my apparent misrecognition of the algorithmically produced language as my own: mistaking the machine lingo for a true expression of my own subjectivity.
Which is not to say that there is some separate, unmediated self that we could access if only we would disconnect our damn gadgets for a second, but the opposite—that autobiography, which my work clearly references, can no longer be seen as a narrative produced by some sort of autonomous subject, but is inseparable from the technology it interacts with. Also, autobiography often involves a confessional, affective mode, and I’m interested in how this relates to the self-exposure which the attention economy seems to encourage—TMI can secure visibility when there’s not enough attention to go round. With the Google confessional, I’m enacting an exposure of my flaws and vulnerabilities, and while it’s potentially "bad" for me (i.e. my mediated self) since you might think I’m a loser, if you’re watching, then it’s worth it, since value is produced simply through attention-retention. Affective vitality doesn’t so much resist commodification as actively participate within it…

DR: You mention agency. When it comes to the algorithms that drive the current attention economy, I tend to think we have very little. Active participation is all well and good, but the opposite—an opting out, rather than a passivity—feels increasingly impossible. I am thinking about those reCaptcha questions we spend all our time filling in. If I want to access my account and check the recommendations it has this week, I’m required to take part in this omnipresent, undeniably clever piece of crowd-sourcing. Alan Turing’s predictions of a world filled with apparently intelligent machines have come true, except it’s the machines now deciding whether we are human or not. ES: Except of course—stating the obvious here—it’s just carrying out the orders another human instructed it to, a mediated form of gatekeeping that delegates responsibility to the machine, creating a distance from the entirely human, social, political, etc. structure that has deemed it necessary (a bit like drones then?). I’m very interested also in the notion of participation as compulsory—what Zizek calls the "You must, because you can" moral imperative of consumerism—especially online, not just at the banal level (missing out on events, job opportunities, interesting articles and so on if you’re not on Facebook) but because your actions necessarily feed back into the algorithms tracking and parsing our behaviours. And even opting out becomes a choice that positions you within a particular demographic (more likely to be vegetarian, apparently). Also, this question of opting out seems to recur in conversations around art made online, in a way it doesn’t for artists working with traditional media—like, if you’re being critical of it, why not go make your own Facebook, why not opt out? My reasoning is that I like to work with widely used technology, out of an idea that the proximity of these media to mainstream, domestic and wider social contexts makes the work more able to reflect on its sociopolitical implications, just as some video artists working in the 80s specifically engaged with TV as the main mediator of public consciousness. Of course some say this is interpassivity, just feebly participating in the platforms without making any real change, and I can understand that criticism. Now that coded spaces and ubiquitous computing are a reality of the world—and power structures—we inhabit, I do appreciate artists who can work with code and software (in a way that I can’t) and use their deeper understanding of digital infrastructure to reflect critically on it. DR: You’ve been engaged in a commission for Colm Cille’s Spiral, sending personal video postcards to anyone who makes a request. Your interpretation of the "confessional" mode seems in this piece to become very human-centric again, since the work is addressed specifically to one particular individual. How has this work been disseminated, and what does your approach have to do with "intimacy"? ES: I’ve always liked Walter Benjamin’s take on the ability of mediating technologies to traverse spatial distances, bringing previously inaccessible events within touching distance. With this project, I wanted to heighten this disembodied intimacy by sending unedited videos shot on my iPhone, a device that’s physically on me at all times, directly to the recipients’ inbox.
So it’s not just "sharing" but actually "giving" them a unique video file gift, which only they see, positioning the recipient as a captive audience of one, unlike on social media where you have no idea who is watching or who cares. But also, I asked them to "complete" the video by adding its metadata, which puts them on the spot—they have to respond, instead of having the option to ignore me—and also extracting some labor in return, which is exactly what social media does: extracting our affective and attentive labor, supposedly optionally, in exchange for the gift of the free service. The metadata—tags, title and optionally a caption—became the only viewable part of the exchange, since I used it to annotate a corresponding black, "empty" video on Instagram, also shared on Twitter and Facebook, so the original content remains private. These blank videos record the creative output of the recipient, while acting as proof of the transaction (i.e. that I sent them a video). They also act as performative objects which will continue to operate online due to their tagging, which connects them to other groups of media and renders them visible—i.e. searchable—online, since search bots cannot as yet "see" video content. I wanted to make a work which foregrounds its own connectedness, both to other images via the hashtags and to the author-recipients through tagging them on social media. So the process of constantly producing and updating oneself within the restrictive and pre-determined formats of social media platforms, i.e. their desired user behaviours, becomes almost the content of the piece. I also like the idea that hashtag searches on all these platforms, for (let’s say) Greece, will bring up these blank/black videos (which, by the way, involved a little hack, as Instagram will not allow you to upload pre-recorded content and it’s impossible to record a black and silent video...). It’s a tiny intervention into the regime of carefully filtered and cropped life-style depictions that Instagram is best known for. It’s also a gesture of submitting oneself to the panoptical imperative to share one’s experience no matter how private or banal, hence using Instagram for its associations with a certain solipsistic self-display; by willingly enacting the production of a mediated self on social media, I’m exploring a kind of masochistic humour which has some affinities with what Benjamin Noys identified as an accelerationist attitude of "the worse the better." And yet, by remaining hidden, and not publicly viewable, the public performance of a mediated self is denied.

DR: An accelerationist Social Media artwork would have to be loaded with sincerity, firstly, on the part of the human (artist/performer), but also in an authentic attempt to utilise the network completely on its terms. Is there something, then, about abundance and saturation in your work? An attempt to overload the panopticon? ES: That’s a very interesting way of putting it. I sometimes relate that oversaturation to the horror vacui of art that springs from a self-therapeutic need, which my work addresses, though it’s less obsessive scribbles, more endless connection, output and flow and semi-ritualistic and repetitive working processes. And in terms of utilizing the network on its own terms, Geert Lovink’s notion of the "natural language hack" (rather than the "deep level" hack) is one I’ve thought about—where your understanding of the social, rather than technical, operation of online platforms gets your work disseminated. For example, my project Woman Nature Alone, where I re-enacted stock video which is freely available on my YouTube channel—some of those videos are high on the Google ranking page, so Google is effectively "marketing" my work without me doing anything. Whether it overloads the panopticon, or just contributes more to the babble, is a pertinent question (as Jodi Dean’s work around communicative capitalism has shown), since if the work is disseminated on commercial platforms like YouTube or Facebook, it operates within a system of value generation which benefits the corporation, involving, as is by now well known, a Faustian pact of personal data in exchange for "free" service. And going back to agency—the mutability of the platforms means that if the work makes use of particular features (such as YouTube annotations) its existence is contingent on them being continued; since the content and the context are inextricable in situations like this, it would become impossible to display the original work exactly as it was first made and seen. Even then, as with Olia Lialina and Dragan Espenschied’s One Terabyte of Kilobyte Age, it would become an archive, which preserves documents from a specific point in the web’s history but cannot replicate the original viewing conditions because all the infrastructure around it has changed completely. So if the platforms—the corporations—control the context and viewing conditions, then artists working within them are arguably at their mercy, and keeping the endless flow alive by adding to it. I’m more interested in working within the flows rather than, as some artists prefer, rejecting the dissemination of their work online. Particularly with moving image work, I’m torn between feeling that artists’ insistence on certain very specific, usually high quality, viewing conditions for their work bolsters, as Sven Lütticken has argued, the notion of the rarefied auratic art object whose appreciation requires a kind of hushed awe and reverence, while being aware that the opposite—the image ripped from its original location and circulated in crap-res iPhone pics/videos—is an example of what David Joselit would call image neoliberalism, which sees images as site-less and, like any other commodity, to be traded across borders and contexts with no respect for the artist’s intentions.
However, I also think that this circulation is becoming an inevitability, and no matter how much you insist your video is viewed on a zillion-lumen projector (or whatever), it will most likely end up being seen by the majority of viewers on YouTube or on a phone screen; I’m interested in how artists (like Hito Steyerl) address, rather than avoid, the fact of this image velocity and spread. DR: Lastly, what have you been working on recently? What’s next? ES: I recently did a series of live, improvised performances called Other People’s Problems, direct to people’s desktops, with Field Broadcast, where I read out streams of tags and captions off Tumblr, Instagram and Facebook, randomly jumping to other tags as I went. I’m fascinated by tags—they’re often highly idiosyncratic and personal, as well as acting as connective tissue between dispersed users; but also I liked the improvisation, where something can go wrong and the awkwardness it creates. (I love awkwardness!) Future projects are going to explore some of the ideas this work generated: how to improvise online (when things can always be deleted/rejigged afterwards), how to embrace the relinquishing of authorial control which I see as integral to the online (or at least social media) experience, and how to work with hashtags/metadata both as text in its own right and as a tool.

Age: 33

Location: London, Athens when I can manage it

How long have you been working creatively with technology? How did you start? 14, 15 maybe, when I started mucking around with Photoshop—I remember scanning a drawing I’d made of a skunk from a Disney tale and making it into a horrendous composition featuring a rasta flag background... I was young. And I’ve always been obsessed with documenting things; growing up I was usually the one in our gang who had the camera—showing my age here, imagine there being one person with a camera—which has given me plenty of blackmail leverage and a big box of tastefully weathered photos that, despite my general frustration with analogue nostalgia, I know I will be carrying around with me for life.

Where did you go to school? What did you study? After doing Physics, Chemistry and Maths at school, I did one year of a Chemistry BA, until I realized I wasn’t cut out for lab work (too much like cooking) or what seemed like the black-and-white nature of scientific enquiry. I then did an art and design foundation at a fashion college, followed by one year of a Fine Art Textiles BA—a nonsensical course whose only redeeming feature was its grounding in feminist theory—before finally entering the second year of a Fine Art BA. For a while this patchy trajectory through art school made me paranoid, until I realised it probably made me sound more interesting than I am. And in my attempt to alleviate the suspicion that there was some vital piece of information I was missing, I also did loads of philosophy diploma courses, which actually did come in handy when back at Uni last year: I recently finished a Masters of Research in moving image art.

What do you do for a living or what occupations have you held previously? Do you think this work relates to your art practice in a significant way? At the moment I’m just about surviving as an artist and I’ve always been freelance apart from time done in bar, kitchen, shop (Londoners, remember Cyberdog?) cleaning and nightclub jobs, some of which the passage of time has rendered as amusingly risqué rather than borderline exploitative.
After my B.A., I set up in business with the Prince’s Trust, running projects with what are euphemistically known as hard-to-reach young people, making videos, digital art pieces and music videos until government funding was pulled from the sector. I mostly loved this work and it definitely fed into and reflects my working with members of loose groups, like the meditation community around the Insight Time app, or Freecycle, or Facebook friends. I’ve also been assisting artist and writer Caroline Bergvall on and off for a few years, which has been very helpful in terms of observing how an artist makes a life/living.

What does your desktop or workspace look like? I’m just settling into a new space at the moment but, invariably, it’s a bit of a mess: a cup of tea, piles of books, and both desktop and workspace are covered in neon post-it notes. Generally I am a paradigmatic post-Fordist flexi worker though: I can and do work pretty much anywhere—to the occasional frustration of friends and family.

]]>
Tue, 08 Oct 2013 07:30:18 -0700 http://rhizome.org/editorial/2013/oct/8/artist-profile-erica-scourti
<![CDATA[Turing Complete User]]> http://contemporary-home-computing.org/turing-complete-user/

Computers are getting invisible. They shrink and hide. They lurk under the skin and dissolve in the cloud. We observe the process like an eclipse of the sun, partly scared, partly overwhelmed. We divide into camps and fight about advantages and dangers of The Ubiquitous. But whatever side we take — we do acknowledge the significance of the moment. With the disappearance of the computer, something else is silently becoming invisible as well — the User. Users are disappearing as both phenomena and term, and this development is either unnoticed or accepted as progress — an evolutionary step. The notion of the Invisible User is pushed by influential user interface designers, specifically by Don Norman, a guru of user-friendly design and long-time advocate of invisible computing. He can actually be called the father of Invisible Computing. Those who study interaction design read his “Why Interfaces Don’t Work”, published in 1990, in which he asked and answered his own question: “The real probl

]]>
Mon, 31 Dec 2012 06:46:00 -0800 http://contemporary-home-computing.org/turing-complete-user/
<![CDATA[Why we want to build Charles Babbage's Victorian computer]]> http://www.guardian.co.uk/commentisfree/2012/oct/23/charles-babbage-analytical-engine-victorian-computer

To understand why it's worth building an almost 200-year-old mechanical computer, it's necessary to first understand what a computer is. Although Babbage's analytical engine is entirely mechanical, it has the same essence as a modern computer. That computer essence is one of the important consequences of another British computing pioneer's work, a century after Babbage. Exactly 99 years after Babbage invented the computer, Alan Turing wrote his now famous paper describing the universal Turing machine. An important mathematical idea arising from Turing's paper and another by American mathematician Alonzo Church is that all computers have the same capabilities, no matter how they are constructed. Because of the Church-Turing thesis, as it is called, we know that Babbage's analytical engine (with its levers and cogs), Turing's theoretical machine and the latest tablet all have the same fundamental limits. Of course, Babbage's machine would by modern standards have been painfully slow.
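The Church-Turing point can be sketched in a few lines of code (my own illustration, not from the article): the toy simulator below runs a small Turing machine that increments a binary number. The rule table, function name and tape encoding are invented for the example; what matters is that nothing in the rules depends on whether they are realised in Babbage's levers and cogs, Turing's paper tape or modern silicon.

```python
# A minimal sketch (not from the article): a tiny Turing machine simulator.
# The rules below increment a binary number; the substrate that runs them
# is immaterial to what they can compute, which is the Church-Turing point.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(head, blank))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1)).strip(blank)

# Binary increment: scan to the right-hand end, then carry back towards the left.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_turing_machine("1011", rules))  # 1011 (eleven) -> 1100 (twelve)
```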

]]>
Wed, 24 Oct 2012 03:35:00 -0700 http://www.guardian.co.uk/commentisfree/2012/oct/23/charles-babbage-analytical-engine-victorian-computer
<![CDATA[Abject Materialities: An Ontology of Everything on the Face of the Earth]]> http://machinemachine.net/text/ideas/abject-materialities-an-ontology-of-everything-on-the-face-of-the-earth

On the 5th of October I took part in the ASAP/4 ‘Genres of the Present’ Conference at the Royal College of Art. In collusion with Zara Dinnen, Rob Gallagher and Simon Clark, I delivered a paper on The Thing, as part of a panel on contemporary ‘Figures’. Our idea was to perform the exhaustion of the Zombie as a contemporary trope, and then suggest some alternative figures that might usefully replace it. Our nod to the ‘Figure’ was inspired, in part, by this etymological diversion from Bruno Latour’s book, On the Modern Cult of the Factish Gods: To designate the aberration of the coastal Guinea Blacks, and to cover up their own misunderstanding, the Portuguese (very Catholic, explorers, conquerors, and to a certain extent slave traders as well) are thought to have used the adjective feitiço, from feito, the past participle of the Portuguese verb “to do, to make.” As a noun, it means form, figure, configuration, but as an adjective, artificial, fabricated, factitious and finally, enchanted. Right from the start, the word’s etymology refused, like the Blacks, to choose between what is shaped by work and what is artificial; this refusal, this hesitation, induced fascination and brought on spells. (pg. 6)

My paper is a short ‘work-in-progress’, and will eventually make up a portion of my thesis. It contains elements of words I have splurged here before. The paper is on, or about, The Thing, using the fictional figure as a way to explore possible contradictions inherent in (post)human ontology. This synopsis might clarify/muddy things up further: Coiled up as DNA or proliferating through digital communication networks, nucleotides and electrical on/off signals figure each other in a coding metaphor with no origin. Tracing the evolution of The Thing over its 70-year history in science fiction (including John W. Campbell’s 1938 novella and John Carpenter’s 1982 film), this paper explores this figure’s most terrifying, absolute other quality: the inability of its matter to err. The Thing re-constitutes the contemporary information paradigm, leaving us with/as an Earthly nature that was always already posthuman. You can read the paper here, or download a PDF, print it out, and pin it up at your next horror/sci-fi/philosophy convention.

]]>
Fri, 19 Oct 2012 06:36:00 -0700 http://machinemachine.net/text/ideas/abject-materialities-an-ontology-of-everything-on-the-face-of-the-earth
<![CDATA[The evolution of cheating in chess]]> http://grantland.com/story/_/id/8362701/the-evolution-cheating-chess

Gadgetry of any sort has a rocky history in chess.

In the late 18th century, for example, a Hungarian engineer named Wolfgang von Kempelen toured Europe with a machine called The Turk, which he promoted as a mechanical chess master. Legend holds that Napoleon and Ben Franklin are among the chess aficionados who lost to Kempelen's brainchild. Decades after those big wins, word got out that The Turk, which Kempelen built to woo Empress Maria Theresa Walburga Amalia Christina of Austria, was a royal scam: For all its pulleys and wheels, Kempelen always made sure an accomplished and totally human chess player was hiding inside the machine, making all the right moves.

The Virginia scandal involved the opposite ruse, in which a machine surreptitiously called the shots for a player. The chess engines this scheme centered on are relatively new: Computers only surpassed humans at the chessboard during young Smiley's lifetime. Scientists had an easier time designing digital brains that could produce atom bombs or navigate lunar landings than they did fashioning a machine that could play chess worth a darn. Plainly, until relatively recently, chess was too complicated for computers. An analysis of chess's complicatedness in Wired determined that the number of possible positions in an average 40-move game is 10 to the 128th power, a sum "vastly larger than the number of atoms in the known universe."
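That figure can be sanity-checked with a rough back-of-the-envelope sketch (my own, not from the article): assuming an average branching factor of around 35 legal moves per position and 80 plies (40 moves by each side), the game tree grows to roughly 10^120 or more, far beyond the commonly cited 10^80 atoms in the observable universe.

```python
# A rough estimate (my own sketch, not from the article): game-tree size
# grows as branching_factor ** plies; both numbers below are assumptions.
import math

branching_factor = 35   # assumed average number of legal moves per position
plies = 80              # 40 moves for each side

exponent = plies * math.log10(branching_factor)
print(f"roughly 10^{exponent:.0f} possible move sequences")   # ~10^124

atoms_exponent = 80     # commonly cited order of magnitude for atoms in the universe
print(exponent > atoms_exponent)                               # True
```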

]]>
Tue, 18 Sep 2012 04:48:00 -0700 http://grantland.com/story/_/id/8362701/the-evolution-cheating-chess
<![CDATA[Magic: The Gathering is Turing complete]]> http://boingboing.net/2012/09/12/magic-the-gathering.html

Alex Churchill has posted a way to implement a Turing complete computer within a game of Magic: The Gathering ("Turing complete" is a way of classifying a calculating engine that is capable of general-purpose computation). The profound and interesting thing about the recurrence of Turing completeness in many unexpected places -- such as page-layout descriptive engines -- is that it suggests that there's something foundational about the ability to do general computation. It also suggests that attempts to limit general computation will be complicated by the continued discovery of new potential computing engines. That is, even if you lock down all the PCs so that they only play restricted music formats and not Ogg, if you allow a sufficiently speedy and scriptable Magic: The Gathering program to exist, someone may implement the Ogg player using collectible card games.
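A minimal illustration of Turing completeness turning up in deceptively simple systems (my own sketch, not Churchill's Magic: The Gathering construction): Rule 110, an elementary cellular automaton defined by an eight-entry update table, has been proven Turing complete. The short simulator below only runs the rule; the names and the fixed-zero boundary are choices made for the example.

```python
# A minimal sketch (not from the post): Rule 110, a one-dimensional cellular
# automaton with eight update rules, is a proven Turing-complete system.

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply the Rule 110 table to every cell, with a fixed boundary of zeros."""
    padded = [0] + cells + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0] * 40 + [1]   # a single live cell at the right-hand edge
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```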

]]>
Sun, 16 Sep 2012 04:35:00 -0700 http://boingboing.net/2012/09/12/magic-the-gathering.html
<![CDATA[Creating Artificial Intelligence Based on the Real Thing]]> http://www.nytimes.com/2011/12/06/science/creating-artificial-intelligence-based-on-the-real-thing.html

For the most part, the biological metaphor has long been just that — a simplifying analogy rather than a blueprint for how to do computing. Engineering, not biology, guided the pursuit of artificial intelligence. As Frederick Jelinek, a pioneer in speech recognition, put it, “airplanes don’t flap their wings.”

Yet the principles of biology are gaining ground as a tool in computing. The shift in thinking results from advances in neuroscience and computer science, and from the prod of necessity.

The physical limits of conventional computer designs are within sight — not today or tomorrow, but soon enough. Nanoscale circuits cannot shrink much further. Today’s chips are power hogs, running hot, which curbs how much of a chip’s circuitry can be used. These limits loom as demand is accelerating for computing capacity to make sense of a surge of new digital data from sensors, online commerce, social networks, video streams and corporate and government databases.

]]>
Wed, 08 Aug 2012 02:44:00 -0700 http://www.nytimes.com/2011/12/06/science/creating-artificial-intelligence-based-on-the-real-thing.html
<![CDATA[Binary Nomination]]> http://machinemachine.net/text/ideas/binary-nomination

‘An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour.’ Alan Turing, Computing Machinery and Intelligence (1950)

As each worn-out piece of its glimmering hull is replenished, one by one, the day arrives when the entire ship of Argo has been displaced – each of its parts now distinct from those of the ‘original’ vessel. For Roland Barthes, this myth exposes two modest activities:

Substitution (one part replaces another, as in a paradigm)
Nomination (the name is in no way linked to the stability of the parts) 1

The discrete breaches the continuous in the act of nomination. Take for instance the spectrum of colours, the extension of which ‘is verbally reduced to a series of discontinuous terms’ 2 such as red, green, lilac or puce. Each colour has no cause but its name. By being isolated in language the colour ‘blue’ is allowed to exist, but its existence is an act of linguistic and, some would argue, perceptual severance. The city of Hull, the phrase “I will”, the surface of an ice cube and an image compression algorithm are entities each sustained by the same nominative disclosure: a paradox of things that seem to flow into one another with liquid potential, but things, nonetheless, limited by their constant, necessary re-iteration in language. There is no thing more contradictory in this regard than the human subject, a figure Barthes tried to paradoxically side-step in his playful autobiography. Like the ship of Argo, human experience has exchangeable parts, but at its core, such was Barthes’ intention, ‘the subject, unreconciled, demands that language represent the continuity of desire.’ 3

In an esoteric paper, published in 1930, Lewis Richardson teased out an analogy between flashes of human insight and the spark that leaps across a gap in an electrical circuit. The paper, entitled The Analogy Between Mental Images and Sparks, navigates around a provocative sketch stencilled into its pages of a simple indeterminate circuit, whose future state it is impossible to predict. Richardson’s playful label for the diagram hides a deep significance. For even at the simplest binary level, Richardson argued, computation need not necessarily be deterministic.

The discrete and the continuous are here again blurred by analogy. Electricity flowing and electricity not flowing: a binary imposition responsible for the entire history of information technology.

 

1 Roland Barthes, Roland Barthes (University of California Press, 1994), 46.

2 Roland Barthes, Elements of Semiology (Hill and Wang, 1977), 64.

3 Paul John Eakin, Touching the World: Reference in Autobiography (Princeton University Press, 1992), 16.

]]>
Thu, 19 Jul 2012 09:32:00 -0700 http://machinemachine.net/text/ideas/binary-nomination
<![CDATA[The Great Pretender: Turing as a Philosopher of Imitation]]> http://www.theatlantic.com/technology/archive/2012/07/the-great-pretender-turing-as-a-philosopher-of-imitation/259824/

In proposing the imitation game as a stand-in for another definition of thought or intelligence, Turing does more than deliver a clever logical flourish that helps him creatively answer a very old question about what makes someone (or something) capable of thought. In fact, he really skirts the question of intelligence entirely, replacing it with the outcomes of thought—in this case, the ability to perform "being human" as convincingly and interestingly as a real human. To be intelligent is to act like a human rather than to have a mind that operates like one. Or, even better, intelligence—whatever it is, the thing that goes on inside a human or a machine—is less interesting and productive a topic of conversation than the effects of such a process, the experience it creates in observers and interlocutors.

This is a kind of pretense most readily found on stage and on screen. An actor's craft is best described in terms of its effect, the way he or she portrays a part, elicits emotion

]]>
Thu, 19 Jul 2012 08:20:00 -0700 http://www.theatlantic.com/technology/archive/2012/07/the-great-pretender-turing-as-a-philosopher-of-imitation/259824/
<![CDATA[The Manifest Destiny of Artificial Intelligence]]> http://www.americanscientist.org/issues/id.15837,y.2012,no.4,content.true,page.1,css.print/issue.aspx

Artificial intelligence began with an ambitious research agenda: To endow machines with some of the traits we value most highly in ourselves—the faculty of reason, skill in solving problems, creativity, the capacity to learn from experience. Early results were promising. Computers were programmed to play checkers and chess, to prove theorems in geometry, to solve analogy puzzles from IQ tests, to recognize letters of the alphabet. Marvin Minsky, one of the pioneers, declared in 1961: “We are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines.”

Fifty years later, problem-solving machines are a familiar presence in daily life. Computer programs suggest the best route through cross-town traffic, recommend movies you might like to see, recognize faces in photographs, transcribe your voicemail messages and translate documents from one language to another. As for checkers and chess, computers are not merely good

]]>
Tue, 10 Jul 2012 02:48:00 -0700 http://www.americanscientist.org/issues/id.15837,y.2012,no.4,content.true,page.1,css.print/issue.aspx
<![CDATA[Why Aren't We Reading Turing?]]> http://www.furtherfield.org/features/why-arent-we-reading-turing

It's a testament to Turing's fascination with nearly everything that 76 years since his first major paper, there's still so much to write about his work. Expect this week to offer more events and glimpses into these projects: Neuro-computational studies into the functional basis of cognition. The ever forward march for genuine artificial intelligence. New methods of simulating the complexity of biological forms nearly 60 years after Turing's paper on the chemical basis of morphogenesis (indeed this area of complexity theory is now an established area of major research). The slippery mathematical formalist discoveries which define what can or cannot be computed. And not forgetting key historical developments in cryptography, perhaps the field which Turing is most respected for. Moreover, Turing wasn't just one of the greatest mathematicians of the 20th Century, but also one of the greatest creative engineers; someone who wasn't afraid of putting his ideas into automation, through the ne

]]>
Wed, 27 Jun 2012 15:20:00 -0700 http://www.furtherfield.org/features/why-arent-we-reading-turing
<![CDATA[Artificial Intelligence Could Be on Brink of Passing Turing Test]]> http://www.wired.com/wiredscience/2012/04/turing-test-revisited

“Two revolutionary advances in information technology may bring the Turing test out of retirement,” wrote Robert French, a cognitive scientist at the French National Center for Scientific Research, in an Apr. 12 Science essay. “The first is the ready availability of vast amounts of raw data — from video feeds to complete sound environments, and from casual conversations to technical documents on every conceivable subject. The second is the advent of sophisticated techniques for collecting, organizing, and processing this rich collection of data.”

]]>
Sat, 14 Apr 2012 08:37:53 -0700 http://www.wired.com/wiredscience/2012/04/turing-test-revisited
<![CDATA[Roar so wildly: Spam, technology and language]]> http://www.radicalphilosophy.com/commentary/roar-so-wildly-spam-technology-and-language

This is the raw text output of a chat session with a bot I modified to act as an interlocutor. I use our conversation, which revolves around the history of spam, particularly algorithmic filtering, litspam, and the theories of Wiener and Turing, as a way of putting forward the outlines of new, machine-driven forms of language for which spam was the testing ground.

]]>
Thu, 16 Feb 2012 05:20:28 -0800 http://www.radicalphilosophy.com/commentary/roar-so-wildly-spam-technology-and-language
<![CDATA[Computing Machinery and Intelligence (by Alan Turing)]]> http://www.loebner.net/Prizef/TuringArticle.html

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

]]>
Mon, 31 Oct 2011 06:53:59 -0700 http://www.loebner.net/Prizef/TuringArticle.html