MachineMachine /stream - search for methodology
https://machinemachine.net/stream/feed
en-us
http://blogs.law.harvard.edu/tech/rss
LifePress
therourke@gmail.com

<![CDATA[Homo Ludens - About Video Game Design and the Meaning of Play]]> https://www.youtube.com/watch?v=hsazaCxMYtY

Playfully Subscribe: https://www.youtube.com/channel/UCCIS_QuklPMwuEnfnjjHKfg?sub_confirmation=1

Twitter: @FormingFiction

Watch Some Stuff: Extra Credits, Because Games Matter: https://www.youtube.com/watch?v=C6xz58O4xq8

Bibliography: Hunicke, Robin/LeBlanc, Marc/Zubek, Robert, MDA: A Formal Approach to Game Design and Game Research, https://users.cs.northwestern.edu/~hunicke/MDA.pdf

6-11 Framework: https://www.academia.edu/1571687/THE_6-11_FRAMEWORK_A_NEW_METHODOLOGY_FOR_GAME_ANALYSIS_AND_DESIGN

Huizinga, Johan, Homo Ludens. A study of the play-element in culture, http://art.yale.edu/file_columns/0000/1474/homo_ludens_johan_huizinga_routledge_1949_.pdf

Dillon, Roberto, On the Way to Fun. An Emotion-Based Approach to Successful Game Design, https://www.amazon.com/Way-Fun-Emotion-Based-Approach-Successful/dp/1568815824

Kotte, Andreas, Theaterwissenschaft. Eine Einführung, https://tinyurl.com/y43bdvj4

Jung, C. G., and Joan Chodorow, Jung on Active Imagination, https://tinyurl.com/y627n6gp

Music: "The Process" by LAKEY INSPIRED: https://www.youtube.com/watch?v=daWvummA8ZQ

"Pokemon Gym" by Mikel: https://www.youtube.com/watch?v=2DVpys50LVE

"Hopes & Dreams" by Jonas Munk Lindbo: https://www.youtube.com/watch?v=qNp4_pFkM5Q

Other: Ace Attorney Font: BMatSantos

http://www.kojimaproductions.jp/en/

]]>
Wed, 06 Nov 2019 04:00:06 -0800 https://www.youtube.com/watch?v=hsazaCxMYtY
<![CDATA[#Additivism selected for Vilém Flusser Residency Program for Artistic Research 2016]]> http://additivism.org/post/138290881251


We are extremely excited to announce that our project #Additivism was accepted as a recipient of the Vilém Flusser Residency Program for Artistic Research 2016. You can read the jury statement here:

Morehshin Allahyari’s and Daniel Rourke’s project #Additivism sets in motion a critical approach towards 3D printing as a technology which is all too often subsumed into the hype factor of “maker culture”. The project of additivism is a timely response to the (post-)anthropocene age, where the originary agency of human creation is being called into question both by machinic automation and environmental crisis. As a bastard methodology located somewhere between accelerationism and subversion, it brings together art, design, and engineering in a radical mixture that aims at nothing less than writing the world anew. This approach, which enables a concretion of the algorithmic abstraction of 3D printing, resonates strongly with Vilém Flusser’s thinking on the technical image. [Read Full Statement]

The residency program is a cooperation between the Vilém Flusser Archive at the Berlin University of the Arts (UdK) and transmediale, the festival for art and digital culture in Berlin.

Morehshin Allahyari and Daniel Rourke will be in residence in Berlin through May and June of 2016, working closely on The 3D Additivist Cookbook and a related series of workshops and events.

]]>
Fri, 29 Jan 2016 10:37:00 -0800 http://additivism.org/post/138290881251
<![CDATA[There's Not Much 'Glitch' In Glitch Art | Motherboard]]> http://motherboard.vice.com/blog/theres-not-much-glitch-in-glitch-art

Artist Daniel Temkin has been creating and discussing glitch art for over seven years. In that time, he's exhibited in solo and group shows, and had his work featured in Rhizome and Fast Company, amongst other publications. For Temkin, glitch art is about the disruption of algorithms, though calling it algorithmic art would be a bit of a misnomer. He prefers "algo-glitch demented" to describe the methods, aesthetics, and philosophy of glitch.

In January, Temkin published a fascinating glitch art essay on NOOART titled "Glitch && Human/Computer Interaction." There he laid down the philosophy and "mythology" of glitch, which had really started in a series of email conversations with Hugh Manon. Though there is no shortage of writing on glitch art, many of these texts didn't address what Temkin loved most about how it is created.

"The glitch aesthetic may be rooted in the look of malfunction, but when it comes to actual practice, there’s often not much glitch in glitch art," wrote Temkin in the essay. "Yes, some glitch artists are actually exploiting bugs to get their results — but for most it would be more accurate to describe these methods as introducing noisy data to functional algorithms or applying these algorithms in unconventional ways." This, he said, doesn't make it traditional algorithmic art (algorithm-designed artworks), but a more demented form of it—algo-glitch demented.

Over a series of email conversations, Temkin elaborated on some of his conclusions in "Glitch && Human/Computer Interaction." Aside from highlighting some of the best algo-glitch demented art, Temkin also talked about bad data, image hacking, and why computers are no less "image makers" than humans even though they aren't sentient (yet).

MOTHERBOARD: Aside from being an artist working in glitch, would you say that you've also sort of become a philosopher of glitch or algorithmic art, if there is such a thing?

Temkin: There's tons of writing on glitch, much of it very good (Lab404.com, for instance), but some aspects of glitch theory didn't jibe with what really interested me about the style. Originally, Hugh Manon and I started a long email conversation about glitch, which evolved into our 2011 paper. It ranged across glitch aesthetics, methodology, and issues around authorship, while delving into glitch's ambivalence about error—the way the glitch is possible because of software's ability to "fail to fully fail" when coming across unexpected data.

We questioned why computer error is so emphasized in this form when nothing is really at stake in a digital file (a deleted but endlessly reproducible JPEG has none of the aura of an Erased de Kooning), and what it means to purposely simulate an error, something that ordinarily has power because it is unexpected and outside of our control.

Ted Davis, FFD8 project

These issues stuck with me until I considered Clement Valla's quote about his Postcards From Google Earth project: that "these images are not glitches... they are the absolute logical result of the system." The quote was familiar, but in this instance it got me thinking about how most glitchwork can be described the same way—as products of perfectly functional systems.

I wrote my recent piece for NOOART, arguing that glitch's preoccupation with error doesn't always serve it well, that it limits the scope of what's produced and how we talk about it. Bypassing computer error opened new avenues of investigation about our relationship both with technology and with logic systems more generally, and got at what interested me more about the style we call glitch.

In the NOOART essay, you write: "Some glitch artists are actually exploiting bugs to get their results — but for most it would be more accurate to describe these methods as introducing noisy data to functional algorithms or applying these algorithms in unconventional ways." Can you elaborate on that point?

In the paper, I discuss JPEG corruption, one of the fundamental glitch techniques. Introduce bad data to a JPEG file, and you'll see broken-looking images emerge. I use this example because it's so familiar to glitch practice. JPEG is not just a file format but an algorithm that compresses/decompresses image data.

When we "corrupt" a JPEG, we're altering compressed data so that it (successfully) renders to an image that no longer appears photographic, taking on a chunky, pixelated, more abstract character we associate with broken software. To the machine, it is not an error—if the image were structurally damaged, we would not be able to open it. This underscores the machine as an apparatus indifferent to what makes visual sense to us, at a place where our expectations clash with algorithmic logic.

Daniel Temkin, Dither Studies #2, 2011

The excitement of altering JPEG data directly is the sense of image hacking—making changes at the digital level without being able to predict the outcome. This becomes more apparent in other glitch techniques, such as sonification, which add layers of complexity to the process. Giving up control to a system or process has a long history in art.

Gerhard Richter describes committing to a systematic approach, veiling the work from conscious decisions that may ruin or limit it. As he puts it, "if the execution works, this is only because I partly destroy it, or because it works in spite of everything—by not detracting and by not looking the way I planned" [p179, Gerhard Richter, Panorama]. In digital art, we often function in an all-too-WYSIWYG environment. Glitch frees us from this, bringing us to unexpected places.
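Sonification, mentioned above, typically means opening raw image data in an audio editor and applying sound effects to it before saving it back out as pixels. As a minimal sketch of the idea (assuming nothing about any particular artist's workflow; the function is hypothetical), one can treat pixel bytes as audio samples and apply a simple delay/echo, wrapping the result back into 8-bit range:

```python
def sonify_echo(pixels, delay=40, mix=0.5):
    """Apply an audio-style echo to raw pixel bytes.

    Each output sample is the input plus a scaled copy of the sample
    `delay` positions earlier; wrapping modulo 256 keeps the result
    valid 8-bit raw data, producing smeared repetitions in the image.
    """
    out = bytearray(len(pixels))
    for i, s in enumerate(pixels):
        echoed = s + (mix * pixels[i - delay] if i >= delay else 0)
        out[i] = int(echoed) % 256  # wrap like 8-bit raw data
    return bytes(out)
```

The unpredictability Temkin values comes from exactly this mismatch: the effect is well-defined in the audio domain but its visual consequences are hard to foresee.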

Can you draw a distinction between generative art (which can feature algorithms) and your concept of algo-glitch demented?

I call it algo-glitch demented, as opposed to algorithmic art (which I understand to mean generative art that uses algorithms). I'll have to paraphrase Philip Galanter and say that generative art is any practice where the artist sets a system "in motion with some degree of autonomy," resulting in a work.

What makes algo-glitch demented is how we misuse existing algorithms, running them in contexts that were never intended by their designers. There are moments of autonomy in algo-glitch, but this autonomy is not what defines it; what matters more is the control we give up to the process.

You call glitch art a collaboration with the machine. That's an interesting point because the human is conscious of this, while the machine is not. Or, do you have another way of looking at that collaboration?

Machines are not sentient, but they are image-makers. Trevor Paglen, in a recent Frieze Magazine piece, says we are now, or will very soon be, at the point "where the majority of the world’s images are made by-machines-for-machines," and "seeing with the meat-eyes of our human bodies is increasingly the exception," referring to facial-recognition systems, QR-code readers, and a host of other automation.

One of the most compelling ideas to come from James Bridle's New Aesthetic is how we can treat the machine as having a vision—even as we know it's not sentient—and just how strange this vision is, that does not hold human beings as its audience.

Jeff Donaldson, panasonic wj-mx12 video feedback, 2012

Glitch artists have been doing this for a long time, treating the machine as an equal collaborator and seeing where it leads us as we cede control to broken processes and zombie algorithms. Curt Cloninger describes it as "painting with a very blunt brush that has a mind of its own"; in this way, glitch is a cyborg art, building on human/computer interaction. The patterns created by these unknown processes are what I call the wilderness within the machine.

Can you talk about glitch as mythology? I've never heard it described as such.

I'm probably being a bit obnoxious there, using mythology to describe the gap between how we talk about glitch and what we're actually doing. There are several strains of work within glitch or that overlap with glitch. There is Dirty New Media, which is related to noise-based work; materialist explorations; the algo-glitch I've emphasized in the JPEG example; and what we might call "minimal slippage glitch" (a term that arose in a Facebook discussion between me and Rosa Menkman).

Minimal Slippage fits a familiar contemporary art scenario of the single gesture that puts things in motion and reveals something new. It's great when things actually work this way, but when this language is used to describe work made by manipulating data repeatedly, there's a problem.

I also take issue with the term glitch art. I don't propose we replace it, only that we be more conscious of its influence. If we produce work with other visual styles using glitch processes, why limit ourselves to work that has an error-strewn appearance? The connection begins to seem artificial. I kept this in mind with my Glitchometry series. I use the sonification technique to process simple geometric shapes (b&w squares and triangles, etc.) into works that range from somewhat glitchy to abstractions that fall very far from a glitch aesthetic. They emphasize process, the back-and-forth with the machine, and an anxiety about giving up that control.

Clement Valla, from “Iconoclashes” 2013

With Glitchometry Stripes (an extension of the Glitchometry work), the results are even less glitchy in appearance; this time using only sound effects that cleanly transform the lines, ending up with Op Art-inspired, crisply graphic works that create optical buzzing when scrolled across the screen.

You mention Ted Davis's FFD8 project in your essay. What is it about the work that you like?

FFD8 is JPEG image hacking, with protection against messing up the header (which would make the image undisplayable). It's a gentle introduction to glitching, but it illustrates how it works, which encourages one to go deeper. I'm suspicious of glitch software that does all the work for you, essentially turning glitch styling into the equivalent of a Photoshop filter. With FFD8, enough of the process is exposed that folks starting out in the style might decide to take the next step and mess with raw files directly, or build their own software, or discover some new avenue to create work.

What's your opinion on something like the iPhone's panorama function, which, if you move the camera fast or in unexpected directions, creates glitches? It's movement-based as opposed to other types of glitch.

I think someone will come along with a brilliant idea of how to use it to do something fresh and interesting. One interesting work that uses photo-stitching (although not on the iPhone) is Clement Valla's Iconoclashes series. He loads images of gods from the Met's collection and lets Photoshop decide how to combine them, creating improbable composites, many physically impossible. It works because of how carefully the objects were photographed: each is lit the same way, with the same background. Many of these religious relics come from cultures where it was believed that such objects were not created by human hands. Now an algorithm, also not human, decides how to combine them to construct new artifacts.

Daniel Temkin, Glitchometry Circles #6, 2013

Where do you feel you've been most successful in your own projects?

I never trust artists to tell me which of their works are more successful. [laughs] I'll tell you the theme I'm most interested in. Much of my work revolves around this clash between human thinking and computer logic, and the compulsiveness that comes from trying to think in a logical way. My own experience with this comes from programming, which is my background from before art. Glitch gives me a way to create chaotic works as a release from the overly structured thinking programming requires.

As a few examples of work that deals with this, my Dither Studies expose the seemingly irrational patterns that emerge from the very simple rules of dithering. They began as a collaboration with Photoshop, where I asked it to dither a solid color with two incompatible colors. From there, I constructed a web tool that walks through progressions of dithers.
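Those "very simple rules" can be made concrete. Below is a standard Floyd-Steinberg error-diffusion sketch applied to a flat input tone, which is one plausible reading of what asking Photoshop to dither a solid color does under the hood; Temkin's actual process may differ, and the function name is my own.

```python
def dither_solid(value, w, h, palette=(0, 255)):
    """Floyd-Steinberg dither of a solid tone into a two-colour palette.

    Each pixel snaps to the nearest palette colour and the rounding
    error is diffused to unvisited neighbours; a single flat input
    value thus yields the repeating-but-irregular textures the
    Dither Studies explore. Returns an h x w grid of palette values.
    """
    buf = [[float(value)] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = min(palette, key=lambda p: abs(p - old))
            out[y][x] = new
            err = old - new
            # distribute the error with the classic 7/3/5/1 weights
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out
```

Even though every step is deterministic, the resulting pattern looks irrational at a glance, which is the clash between human perception and machine logic the work plays on.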

In Drunk Eliza, I re-coded the classic chat bot using my language Entropy, where all data is unstable. Since the original Eliza has such a small databank of phrases, yet so clearly has a personality, I wanted to know how she would seem with her mind slowly disintegrating, HAL-style. Drunk Eliza was the result. The drunken responses she gets online have been a great source of amusement for me.
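Entropy is Temkin's real programming language, in which all data decays as it is accessed; the toy class below merely sketches that idea in Python and is not how Entropy actually works. Every read gives each letter a chance to drift to a neighbouring character, so a phrase disintegrates with use.

```python
import random

class Unstable:
    """Toy sketch of Entropy-style unstable data (hypothetical,
    not Temkin's implementation): each read nudges letters toward
    adjacent codepoints, so stored text decays the more it is used."""

    def __init__(self, text, seed=None):
        self._chars = list(text)
        self._rng = random.Random(seed)

    def read(self):
        # every access mutates the data a little before returning it
        for i, c in enumerate(self._chars):
            if c.isalpha() and self._rng.random() < 0.3:
                self._chars[i] = chr(ord(c) + self._rng.choice((-1, 1)))
        return "".join(self._chars)
```

Repeated calls to `read()` return progressively garbled text, a rough analogue of Drunk Eliza's slowly disintegrating replies.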

]]>
Tue, 18 Mar 2014 12:45:15 -0700 http://motherboard.vice.com/blog/theres-not-much-glitch-in-glitch-art
<![CDATA[Midday Traffic Time Collapsed and Reorganized by Color: San Diego Study #3]]> http://vimeo.com/82038912

I’m very humbled that the VICE Creator's Project has covered this series with a new video: youtu.be/iioPicXsAFg

The source footage for this video is a 4-minute shot from the Washington Street bridge above State Route 163 in San Diego, captured at 2:39pm on Oct 1, 2013. My aim is to reveal the color palette and color preferences of contemporary San Diego drivers, in addition to traffic patterns and volumes. There are no CG elements; these are all real cars that have been removed from one sample and reorganized. The source footage may be viewed here: vimeo.com/81846560 More details on the methodology are here: cysfilm.com/?p=3345

The San Diego Studies is a series of short videos that collapse time to reveal otherwise unobservable rhythms and movement in the city. The project is supported by MOPA San Diego and the San Diego Foundation. For more information about this video please visit cysfilm.com and MOPA.org

San Diego Study #1: vimeo.com/54658957
San Diego Study #2: vimeo.com/58240175

Contact+Info: cysfilm@gmail.com Connect with me on Twitter: @cysfilm

Shot on a Canon C100 in CLog with a Canon EF-S 17-55 f/2.8 lens at 24p; most of the post work was done in After Effects.

copyright © 2013 Cy Kuckenbaker

Cast: Cy Kuckenbaker
Tags: San Diego, San Diego Studies, cy, cy Kuckenbaker, cysfilm, traffic, cars, color, freeway, time collapse, time lapse, ethnography, 163 and hillcrest

]]>
Tue, 17 Dec 2013 01:19:52 -0800 http://vimeo.com/82038912
<![CDATA[When Art Goes Disruptive: The A/Moral Dis/Order of Recursive Publics | Public Interfaces]]> http://darc.imv.au.dk/publicinterfaces/?p=150

Although the analysis of the geek community as a recursive public, sharing a social imaginary of openness and a moral order of freedom, is a valid frame for understanding geek culture from a sociological point of view, adopting a dialectical perspective in the analysis of network dynamics might open an opportunity to question the notion of artistic intervention itself. This thread connects the multiple-identity projects and hacker practices of the last decade with the business strategies of today, reflecting on the role of activists and artists in social media. Their interventions are conceived as a challenge: to generate a critical understanding of contemporary informational power (or info-capitalism), and to imagine possible routes of political and artistic action. Furthermore, this analysis questions the methodology of radical clashes between opposing forces as a means of socio-political transformation, proposing more flexible viral actions as relevant responses to the ubiquity of capitalism.

]]>
Mon, 10 Jan 2011 03:22:02 -0800 http://darc.imv.au.dk/publicinterfaces/?p=150
<![CDATA[The Next Great Discontinuity: The Data Deluge]]> http://www.3quarksdaily.com/3quarksdaily/2009/04/the-next-great-discontinuity-part-two.html

Speed is the elegance of thought, which mocks stupidity, heavy and slow. Intelligence thinks and says the unexpected; it moves with the fly, with its flight. A fool is defined by predictability… But if life is brief, luckily, thought travels as fast as the speed of light. In earlier times philosophers used the metaphor of light to express the clarity of thought; I would like to use it to express not only brilliance and purity but also speed. In this sense we are inventing right now a new Age of Enlightenment… A lot of… incomprehension… comes simply from this speed. I am fairly glad to be living in the information age, since in it speed becomes once again a fundamental category of intelligence. Michel Serres, Conversations on Science, Culture and Time

(Originally published at 3quarksdaily · Link to Part One) Human beings are often described as the great imitators: We perceive the ant and the termite as part of nature. Their nests and mounds grow out of the Earth. Their actions are indicative of a hidden pattern being woven by natural forces from which we are separated. The termite mound is natural, and we, the eternal outsiders, sitting in our cottages, our apartments and our skyscrapers, are somehow not. Through religion, poetry, or the swift skill of the craftsman smearing pigment onto canvas, humans aim to encapsulate that quality of existence that defies simple description. The best art, or so it is said, brings us closer to attaining a higher truth about the world that eludes language, a truth that perhaps the termite itself embodies as part of its nature. Termite mounds are beautiful, but were built without a concept of beauty. Termite mounds are mathematically precise, yet crawling through their intricate catacombs one cannot find a single termite in comprehension of even the simplest mathematical constituent. In short, humans imitate and termites merely are. This extraordinary idea is partly responsible for what I referred to in Part One of this article as The Fallacy of Misplaced Concreteness. It leads us to consider not only the human organism as distinct from its surroundings, but also forces us to separate human nature from its material artefacts. We understand the termite mound as integral to termite nature, but are quick to distinguish the axe, the wheel, the book, the skyscraper and the computer network from the human nature that bore them. When we act, through art, religion or the rational structures of science, to interface with the world, our imitative (mimetic) capacity has both subjective and objective consequence. Our revelations, our ideas, stories and models have life only insofar as they have a material to become invested through.
The religion of the dance, the stone circle and the summer solstice is mimetically different to the religion of the sermon and the scripture because the way it interfaces with the world is different. Likewise, it is only with the consistency of written and printed language that the technical arts could become science, and through which our ‘modern’ era could be built. Dances and stone circles relayed mythic thinking structures, singular, imminent and ethereal in their explanatory capacities. The truth revealed by the stone circle was present at the interface between participant, ceremony and summer solstice: a synchronic truth of absolute presence in the moment. Anyone reading this will find truth and meaning through grapholectic interface. Our thinking is linear, reductive and bound to the page. It is reliant on a diachronic temporality that the pen, the page and the book hold in stasis for us. Imitation alters the material world, which in turn affects the texture of further imitation. If we remove the process from its material interface we lose our objectivity. In doing so we isolate the single termite from its mound and, after much careful study, announce that we have reduced termite nature to its simplest constituent. The reason for the tantalizing involutions here is obviously that intelligence is relentlessly reflexive, so that even the external tools that it uses to implement its workings become ‘internalized’, that is, part of its own reflexive process… To say writing is artificial is not to condemn it but to praise it. Like other artificial creations and indeed more than any other, it is utterly invaluable and indeed essential for the realisation of fuller, interior, human potentials. Technologies are not mere exterior aids but also interior transformations of consciousness, and never more than when they affect the word. Walter J. Ong, Orality and Literacy

Anyone reading this article cannot fail to be aware of the changing interface between eye and text that has taken place over the past two decades or so. New Media – everything from the internet database to the Blackberry – has fundamentally changed the way we connect with each other, but it has also altered the way we connect with information itself. The linear, diachronic substance of the page and the book has given way to a dynamic textuality blurring the divide between authorship and readership, expert testament and the simple accumulation of experience. The main difference between traditional text-based systems and newer, data-driven ones is quite simple: it is the interface. Eyes and fingers manipulate the book, turning over pages in a linear sequence in order to access the information stored in its printed figures. For New Media, for the digital archive and the computer storage network, the same information is stored sequentially in databases which are themselves hidden to the eye. To access them one must perform a search or otherwise run an algorithm that mediates the stored data for us. The most important distinction should be made at the level of the interface because, although the database as a form has changed little over the past 50 years of computing, the Human-Computer Interfaces (HCI) through which we access and manipulate that data are always passing from one iteration to another. Stone circles interfacing the seasons stayed the same, perhaps being used in similar rituals over the course of a thousand years of human cultural accumulation. Books, interfacing text, language and thought, stay the same in themselves from one print edition to the next, and as a format, books have changed very little in the few hundred years since the printing press. The computer HCI is most different from the book in that change is integral to its structure.
To touch a database through a computer terminal, through a Blackberry or iPhone, is to play with data at incredible speed: Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition… Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to — well, at petabytes we ran out of organizational analogies. At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics… This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves. Wired Magazine, The End of Theory, June 2008

And as the amount of data has expanded exponentially, so have the interfaces we use to access that data and the models we build to understand it. On the day that Senator John McCain announced his Vice Presidential Candidate, the best place to go for an accurate profile of Sarah Palin was not the traditional media: it was Wikipedia. In an age of instant, global news, no newspaper could keep up with the knowledge of the cloud. The Wikipedia interface allowed knowledge about Sarah Palin from all levels of society to be filtered quickly and efficiently in real-time. Wikipedia acted as encyclopaedia, newspaper, discussion group and expert all at the same time, and it did so completely democratically, in the absence of a traditional management pyramid. The interface itself became the thinking mechanism of the day, as if the notes every reader scribbled in the margins had been instantly cross-checked and added to the content. In only a handful of years the human has gone from merely dipping into the database to becoming an active component in a human-cloud of data. The interface has begun to reflect back upon us, turning each of us into a node in a vast database bigger than any previous material object. Gone are the days when clusters of galaxies had to be catalogued by an expert and entered into a linear taxonomy. Now the same job is done by the crowd and the interface, allowing a million galaxies to be catalogued by amateurs in the time it would have taken a team of experts to classify a tiny percentage of that amount. This method of data mining is called ‘crowdsourcing’, and it represents one of the dominant ways in which raw data will be turned into information (and then knowledge) over the coming decades. Here the cloud serves as more than a metaphor for the group-driven interface, becoming a telling analogy for the trans-grapholectic culture we now find ourselves in.
To grasp the topological shift in our thought patterns it pays to move beyond the interface and look at a few of the linear, grapholectic models that have undergone change as a consequence of the information age. One of these models is evolution, a biological theory the significance of which we are still in the process of discerning:

If anyone now thinks that biology is sorted, they are going to be proved wrong too. The more that genomics, bioinformatics and many other newer disciplines reveal about life, the more obvious it becomes that our present understanding is not up to the job. We now gaze on a biological world of mind-boggling complexity that exposes the shortcomings of familiar, tidy concepts such as species, gene and organism. A particularly pertinent example [was recently provided in New Scientist] - the uprooting of the tree of life which Darwin used as an organising principle and which has been a central tenet of biology ever since. Most biologists now accept that the tree is not a fact of nature - it is something we impose on nature in an attempt to make the task of understanding it more tractable. Other important bits of biology - notably development, ageing and sex - are similarly turning out to be much more involved than we ever imagined. As evolutionary biologist Michael Rose at the University of California, Irvine, told us: “The complexity of biology is comparable to quantum mechanics.” New Scientist, Editorial, January 2009

As our technologies became capable of gathering more data than we were capable of comprehending, a new topology of thought, reminiscent of the computer network, began to emerge. For the mindset of the page and the book, science could afford to be linear and diachronic. In the era of The Data Deluge science has become more cloud-like, as theories for everything from genetics to neuroscience, particle physics to cosmology have shed their linear constraints. Instead of seeing life as a branching tree, biologists are now speaking of webs of life, where lineages can intersect and interact, where entire species are ecological systems in themselves. As well as seeing the mind as an emergent property of the material brain, neuroscience and philosophy have started to consider the mind as manifest in our extended, material environment. Science has exploded, and picking up the pieces will do no good. Through the topology of the network we have begun to perceive what Michel Serres calls ‘The World Object’, an ecology of interconnections and interactions that transcends and subsumes the causal links propounded by grapholectic culture. At the limits of science a new methodology is emerging at the level of the interface, where masses of data are mined and modelled by systems and/or crowds which themselves require no individual understanding to function efficiently. Where once we studied events and ideas in isolation, we now devise ever more complex, multi-dimensional ways for those events and ideas to interconnect; for data sources to swap inputs and outputs; for outsiders to become insiders. Our interfaces are in constant motion, on trajectories that curve around to meet themselves, diverge and cross-pollinate.
Thought has finally been freed from temporal constraint, allowing us to see the physical world, life, language and culture as multi-dimensional, fractal patterns, winding the great yarn of (human) reality: The advantage that results from it is a new organisation of knowledge; the whole landscape is changed. In philosophy, in which elements are even more distanced from one another, this method at first appears strange, for it brings together the most disparate things. People quickly criticized me for this… But these critics and I no longer have the same landscape in view, the same overview of proximities and distances. With each profound transformation of knowledge come these upheavals in perception. Michel Serres, Conversations on Science, Culture and Time

]]>
Tue, 05 May 2009 07:35:00 -0700 http://www.3quarksdaily.com/3quarksdaily/2009/04/the-next-great-discontinuity-part-two.html