MachineMachine /stream - search for aggregation https://machinemachine.net/stream/feed en-us http://blogs.law.harvard.edu/tech/rss LifePress therourke@gmail.com <![CDATA[Algorithmic Narratives and Synthetic Subjects (paper)]]> http://machinemachine.net/portfolio/paper-at-theorizing-the-web-synthetic-subjects/

This was the paper I delivered at The Theorizing the Web Conference, New York, 18th April 2015. This video of the paper begins part way in, and misses out some important stuff. I urge you to watch the other, superb, papers on my panel by Natalie Kane, Solon Barocas, and Nick Seaver. A better video is forthcoming. I posted this up partly in response to this post at Wired about the UK election, Facebook’s echo-chamber effect, and other implications well worth reading into.

Data-churning algorithms are integral to our social and economic networks. Rather than replace humans, these programs are built to work with us, allowing the distinct strengths of human and computational intelligence to coalesce. As we are submerged in the era of ‘big data’, these systems have become more and more common, concentrating every terabyte of raw data into meaningful arrangements more easily digestible by high-level human reasoning. A company calling itself ‘Narrative Science’, based in Chicago, has established a profitable business model on this relationship. Its slogan, ‘Tell the Stories Hidden in Your Data’, [1] is aimed at companies drowning in spreadsheets of cold information: a promise that Narrative Science can ‘humanise’ their databases with very little human input. Kristian Hammond, the company’s Chief Technology Officer, claims that within 15 years over 90% of all news stories will be written by algorithms. [2] But rather than replacing the jobs that human journalists now undertake, Hammond claims the vast majority of this ‘robonews’ output will report on data not currently covered by traditional news outlets. One family-friendly example is the coverage of little-league baseball games. Very few news organisations have the resources, or the desire, to hire a swathe of human journalists to write up every little-league game. Instead, Narrative Science offers leagues, parents and their children a miniature summary of each game, gleaned from match statistics uploaded by diligent little-league attendees and written up in a variety of journalistic styles. 
In their 2013 book ‘Big Data’, Viktor Mayer-Schönberger, Oxford University Professor of internet governance, and Kenneth Cukier, ‘data editor’ of The Economist, tell us excitedly about another data aggregation company, Prismatic, who: …rank content from the web on the basis of text analysis, user preferences, social network popularity, and big-data analysis. [3] According to Mayer-Schönberger and Cukier this makes Prismatic able ‘to tell the world what it ought to pay attention to better than the editors of the New York Times’. [4] A claim, Steven Poole reminds us, we can hardly argue with, so long as we agree that popularity underlies everything that is culturally valuable. Data is now the lifeblood of technocapitalism: a vast, endless influx of information flowing in from the growing universe of networked and internet-connected devices. As many of the papers at Theorizing the Web attest, our environment is more and more shaped by systems whose job it is to mediate our relationship with this data. Technocapitalism still appears to respond to Jean-François Lyotard’s formulation of Postmodernity: whether something is true has less relevance than whether it is useful. In 1979 Lyotard described the Postmodern Condition as a change in “the status of knowledge” brought about by new forms of techno-scientific and techno-economic organisation. If a student could be taught effectively by a machine, rather than by another human, then the most important thing we could give the next generation was what he called “elementary training in informatics and telematics.” In other words, as long as our students are computer literate, “pedagogy would not necessarily suffer”. [5] The next passage – where Lyotard marks the Postmodern turn from the true to the useful – became one of the book’s most widely quoted, and it is worth repeating here at some length:

It is only in the context of the grand narratives of legitimation – the life of the spirit and/or the emancipation of humanity – that the partial replacement of teachers by machines may seem inadequate or even intolerable. But it is probable that these narratives are already no longer the principal driving force behind interest in acquiring knowledge. [6] Here, I want to pause to set in play at least three elements from Lyotard’s text that colour this paper. Firstly, the historical confluence between technocapitalism and the era now considered ‘postmodern’. Secondly, the association of ‘the grand narrative’ with modern and pre-modern conditions of knowledge. And thirdly, the idea that the relationship between the human and the machine – or computer, or software – is generally one-sided: i.e. we may shy away from the idea of leaving the responsibility of our children’s education to a machine, but Lyotard’s position presumes that, since the machine was created and programmed by humans, it will necessarily be understandable, and thus controllable, by humans. Today, Lyotard’s vision of an informatically literate populace has more or less come true. Of course we do not completely understand the intimate workings of all our devices or the software that runs them, but the majority of the world’s population has some form of regular relationship with systems simulated on silicon. And as Lyotard himself made clear, the uptake of technocapitalism, and therefore of the devices and systems it propagates, is piecemeal and difficult to predict or trace. At the same time as Google’s fleet of self-driving vehicles is let loose on Californian state highways, in parts of sub-Saharan Africa mobile phones designed ten or more years ago are allowing farming communities to aggregate their produce into quantities with greater potential to turn a profit on the world market. 
As Brian Massumi remarks, network technology affords us the possibility of “bringing to full expression a prehistory of the human”, a “worlding of the human” that marks the “becoming-planetary” of the body itself. [7] This “worlding of the human” represents what Edmund Berger argues is the death of the Postmodern condition itself: [T]he largest bankruptcy of Postmodernism is that the grand narrative of human mastery over the cosmos was never unmoored and knocked from its pulpit. Instead of making the locus of this mastery large aggregates of individuals and institutions – class formations, the state, religion, etc. – it simply has shifted the discourse towards the individual his or herself, promising them a modular dreamworld for their participation… [8] Algorithmic narratives appear to continue this trend. They are piecemeal, tending to feed back users’ dreams, wants and desires through carefully aggregated, designed and packaged narratives for individual ‘use’. A world not of increasing connectivity and understanding between entities, but a network worlded to each individual’s data-shadow. This situation is reminiscent of the problem Eli Pariser calls the ‘filter bubble’, or the ‘you loop’: a prevalent outcome of social media platforms tweaked and personalised by algorithms to echo back at the user exactly the kind of thing they want to hear. As algorithms develop in complexity, the stories they tell us about the vast sea of data will tend to become more and more enamouring, more and more palatable. Like some vast synthetic evolutionary experiment, those algorithms that devise narratives users dislike will tend to be killed off in the feedback loop, in favour of other algorithms whose turn of phrase, or ability to stroke our egos, is more pronounced. For instance, Narrative Science’s early algorithms for creating little-league narratives tended to focus on the victors of each game. 
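The ‘synthetic evolutionary experiment’ described above can be caricatured in a few lines of code. To be clear, this is a toy model, not anything Narrative Science or any platform has published: the latent ‘palatability’ score, the population size and the mutation parameters are all invented for illustration. Narrative styles that users rate poorly are culled each generation, and the survivors are copied with small mutations.

```python
import random

random.seed(1)

# A population of hypothetical narrative "styles", each with a latent
# palatability score that determines how warmly users rate its output.
styles = [{"id": i, "palatability": random.random()} for i in range(20)]

def user_rating(style):
    # Users tend to approve of stories that flatter them; add noise to
    # stand in for the unpredictability of individual reactions.
    return style["palatability"] + random.gauss(0, 0.1)

for generation in range(30):
    # Rank styles by simulated user feedback ...
    styles.sort(key=user_rating, reverse=True)
    # ... cull the bottom half, and refill the population with
    # slightly mutated copies of the survivors.
    survivors = styles[: len(styles) // 2]
    children = [
        {
            "id": s["id"],
            "palatability": min(1.0, max(0.0, s["palatability"] + random.gauss(0, 0.05))),
        }
        for s in survivors
    ]
    styles = survivors + children

# Mean palatability drifts steadily upward: the loop selects for
# whatever users want to hear, regardless of what is true or important.
mean = sum(s["palatability"] for s in styles) / len(styles)
print(round(mean, 2))
```

The point of the sketch is that no single actor decides to make the narratives flattering; the drift towards palatability is a property of the feedback loop itself.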
What Narrative Science found is that parents were more interested in hearing about their own children, the tiny ups and downs that made the game significant to them. So the algorithms were tweaked in response. Again, to quote chief scientist Kris Hammond of Narrative Science: These are narratives generated by systems that understand data, that give us information to support the decisions we need to make about tomorrow. [9] Whilst we can program software to translate the informational nuances of a baseball game, or internet social trends, into human-palatable narratives, larger social, economic and environmental events also tend to get pushed through an algorithmic meatgrinder to make them more palatable. The ‘tomorrow’ that Hammond claims his company can help us prepare for is one that, presumably, companies like Narrative Science and Prismatic will play an ever larger part in realising. In her recently published essay on Crisis and the Temporality of Networks, Wendy Chun reminds us of the difference between the user and the agent in the machinic assemblage: Celebrations of an all powerful user/agent – ‘you’ as the network, ‘you’ as the producer – counteract concerns over code as law as police by positing ‘you’ as the sovereign subject, ‘you’ as the decider. An agent, however, is one who does the actual labor, hence an agent is one who acts on behalf of another. On networks, the agent would seem to be technology, rather than the users or programmers who authorize actions through their commands and clicks. [10] In order to unpack Wendy Chun’s proposition here we need only look at two of the most powerful and impactful algorithms of the last ten years of the web. Firstly, Amazon’s recommendation system, which I assume you have all interacted with at some point. And secondly, Facebook’s news feed algorithm, which ranks and sorts the posts on your personalised stream. 
Both these algorithms rely on a community of user interactions to establish a hierarchy of products, or posts, based on popularity. Both also function in response to users’ past activity, and both, of course, have been tweaked and altered over time by the design and programming teams of the respective companies. As we are all no doubt aware, one of the most significant driving principles behind these extraordinarily successful pieces of code is capitalism itself: the drive for profit, and the bearing that drive has on whether a company, service or product succeeds or fails. Wendy Chun’s reminder that those who carry out an action, who program and click, are not the agents here should give us pause. We are positioned as sovereign subjects over our data because that idea is beneficial to the propagation of the ‘product’. Whether we are told how well our child has done at baseball, or what particular kinds of news stories we might like, personally, to read right now, it is to the benefit of technocapitalism that those narratives are positive, palatable and uncompromising. However the aggregation and dissemination of big data affects our lives over the coming years, the likelihood is that at the surface – on our screens and ubiquitous handheld devices – everything will seem rosy, comfortable, and suited to the ‘needs’ and ‘use’ of each sovereign subject.
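Neither Amazon nor Facebook publishes its ranking code, but the two signals named above – community-wide popularity and a user’s own past activity – are enough to sketch a caricature of such a recommender. Everything here is invented for illustration: the interaction log, the item names, and the `personal_weight` boost are hypothetical, not any company’s actual parameters.

```python
from collections import Counter

# Hypothetical interaction log: (user, item) pairs standing in for the
# clicks, purchases and likes of a community of users.
interactions = [
    ("ann", "camera"), ("ann", "tripod"), ("bob", "camera"),
    ("bob", "novel"), ("cat", "novel"), ("cat", "camera"),
    ("dan", "tripod"), ("ann", "lens"),
]

# Community-wide popularity: the first signal both algorithms share.
popularity = Counter(item for _, item in interactions)

def rank_for(user, candidates, personal_weight=2.0):
    """Score = global popularity, boosted by the user's own history."""
    history = {item for u, item in interactions if u == user}
    def score(item):
        return popularity[item] + (personal_weight if item in history else 0.0)
    return sorted(candidates, key=score, reverse=True)

# Ann's feed is ordered by what everyone likes, tilted towards what
# Ann herself has already engaged with.
print(rank_for("ann", ["camera", "novel", "tripod", "lens"]))
# → ['camera', 'tripod', 'lens', 'novel']
```

Even in this toy form, the ‘you loop’ is visible: past activity raises an item’s rank, which earns it more activity, which raises its rank again.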

TtW15 #A7 @npseaver @nd_kane @s010n @smwat pic.twitter.com/BjJndzaLz1

— Daniel Rourke (@therourke) April 17, 2015

So to finish I just want to gesture towards a much, much bigger debate that I think we need to have about big data, technocapitalism and its algorithmic agents. To do this I just want to read a short paragraph which, as far as I know, was not written by an algorithm: Surface temperature is projected to rise over the 21st century under all assessed emission scenarios. It is very likely that heat waves will occur more often and last longer, and that extreme precipitation events will become more intense and frequent in many regions. The ocean will continue to warm and acidify, and global mean sea level to rise. [11] This is from a document entitled ‘Synthesis Report for Policy Makers’, drafted by the Intergovernmental Panel on Climate Change – another organisation that relies on a transnational network of computers, sensors and programs capable of modelling atmospheric, chemical and wider environmental processes to collate data on human environmental impact. Ironically then, perhaps the most significant tool we have for understanding the world at present is big data. Never before has humankind had so much information to help us make decisions, and to help us enact changes on our world, our society and our selves. But the problem is that some of the stories big data has to tell us are too big to be narrated; they are just too big to be palatable. To quote Edmund Berger again: For these reasons we can say that the proper end of postmodernism comes in the gradual realization of the Anthropocene: it promises the death of the narrative of human mastery, while erecting an even grander narrative. If modernism was about victory of human history, and postmodernism was the end of history, the Anthropocene means that we are no longer in a “historical age but also a geological one. 
Or better: we are no longer to think history as exclusively human…” [12] I would argue that the ‘grand narratives of legitimation’ Lyotard claimed we left behind in the move to Postmodernity will need to return in some form if we are to manage big data meaningfully. Crises such as catastrophic climate change will never be made palatable in the feedback between users, programmers and technocapitalism. Instead, we need to revisit Lyotard’s distinction between the true and the useful. Rather than ask how we can make big data useful for us, we need to ask what grand story we want that data to tell us.

References

[1] Source: www.narrativescience.com, accessed 15/10/14.
[2] Steven Levy, “Can an Algorithm Write a Better News Story Than a Human Reporter?,” WIRED, April 24, 2012, http://www.wired.com/2012/04/can-an-algorithm-write-a-better-news-story-than-a-human-reporter/.
[3] “Steven Poole – On Algorithms,” Aeon Magazine, accessed May 8, 2015, http://aeon.co/magazine/technology/steven-poole-can-algorithms-ever-take-over-from-humans/.
[4] Ibid.
[5] Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge, repr., Theory and History of Literature 10 (Manchester: Manchester University Press, 1992), 50.
[6] Ibid., 51.
[7] Brian Massumi, Parables for the Virtual: Movement, Affect, Sensation (Duke University Press, 2002), 128.
[8] Edmund Berger, “The Anthropocene and the End of Postmodernism,” Synthetic Zero, n.d., http://syntheticzero.net/2015/04/01/the-anthropocene-and-the-end-of-postmodernism/.
[9] Source: www.narrativescience.com, accessed 15/10/14.
[10] Wendy Chun, “Crisis and the Temporality of Networks,” in The Nonhuman Turn, ed. Richard Grusin (Minneapolis: University of Minnesota Press, 2015), 154.
[11] Rajendra K. Pachauri et al., “Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change,” 2014, http://epic.awi.de/37530/. 
[12] Berger, “The Anthropocene and the End of Postmodernism.”

]]>
Fri, 08 May 2015 04:02:51 -0700 http://machinemachine.net/portfolio/paper-at-theorizing-the-web-synthetic-subjects/
<![CDATA[Interview with Domenico Quaranta]]> http://www.furtherfield.org/features/interviews/interview-domenico-quaranta

Daniel Rourke: At Furtherfield on November 22nd 2014 you launched a Beta version of a networked project, 6PM Your Local Time, in collaboration with Fabio Paris, Abandon Normal Devices and Gummy Industries. #6PMYLT uses Twitter hashtags as a nexus for distributed art happenings. Could you tell us more about the impetus behind the project? Domenico Quaranta: In September 2012, the Link Art Center launched the Link Point in Brescia: a small project space where, for almost two years, we presented installation projects by local and international artists. The Link Point was, from the beginning, a “dual site”: a space to which we could invite our local audience, but also a set for photographic documentation meant to be distributed online to a global audience. Fabio Paris’ long experience with his commercial gallery – which had used the same space for more than ten years – persuaded us that this was what we had to offer the artists we invited. So, the space was reduced to a small cube, white from floor to ceiling, with neon lights and a big logo (a kind of analogue watermark) on the back door. Thinking about this project, and the strong presence of the Link Point logo in all the documentation, we realized that the Link Point was actually not bound to that space: as an abstract, highly formalized space, it could be anywhere. Take a white cube and place the Link Point logo in it, and that’s the Link Point.

This realization brought us, on the one hand, to close the space in Brescia and to turn the Link Point into a nomadic, erratic project that can resurrect from time to time in other places; and, on the other hand, to conceive 6PM Your Local Time. The idea was simple: if exhibition spaces are all more or less similar; if online documentation has become so important for communicating art events to a wider audience; and if people have started perceiving it as no different from primary experience – why not set up an exhibition that takes place in different locations, kept together only by documentation and by the use of the same logo? All the rest came right after, as a natural development from this starting point (and as an adaptation of this idea to reality). Of course, this is a statement as well as a provocation: watching the documentation of the UK Beta Test you can easily realize that exhibition spaces are NOT more or less the same; that attending or participating in an event is different from watching pictures on a screen; that some artworks work well in pictures but many need to be experienced. We want to stress the value of networking and of giving prominence to your network rather than to your individual identity; but if the project only works as a reminder that reality is still different from media representation, it will have been successful anyway. Daniel Rourke: There is something of Hakim Bey’s Temporary Autonomous Zones in your proposal. The idea that geographic, economic and/or political boundaries need no longer define the limits of social collective action. We can criticise Bey’s 1991 text now, because in retrospect the Internet and its constitutive protocols have themselves become a breeding ground for corporate and political concerns, even as technology has allowed ever more distributed methods of connectivity. 
You foreground network identity over individual identity in the 6PM YLT vision, yet the distinction between the individuals that create a network and the corporate hierarchies that make that networking possible is less clear. I am of course gesturing towards the use of Twitter as the principal platform of the project, a question that Ruth Catlow brought up at the launch. Do you still believe that TAZs are possible in our hyper-connected, hyper-corporate world? Domenico Quaranta: In its first, raw conceptualization, 6PM YLT had to come with its own smartphone app, which was to be used both to participate in the project and to access the gallery. The decision to aggregate content published on different social platforms came from the realization that people already had the production and distribution tools required to participate in the action, and were already familiar with some gestures: take a photo, apply a filter, add a hashtag, etc. Of course, we could invite participants and audiences to use some specific, open source social network of our choice, but we prefer to tell them: just use the fucking platform of your choice. We want to facilitate and expand participation, not to reduce it; and we are not interested in adding another layer to the project. 6PM YLT is not a TAZ, it’s just a social game that wants to raise some awareness about the importance of documentation, the power of networks, the public availability of what we do with our phones. And it’s a parasitic tool that, like anything else happening online, implies an entire set of corporate frameworks in order to exist: social networks, browsers, operating systems, internet providers, server farms, etc. That said, yes, I think TAZs are still possible. The model of the TAZ was designed for a hyper-connected, hyper-corporate world; TAZs are temporary and nomadic; they exist in interstices for a short time. But I agree that believing in them is mostly an act of faith.

Daniel Rourke: The beta-tested, final iteration of 6PM YLT will be launched in the summer of 2015. How will you be rolling out the project in the forthcoming months? How can people get involved? Domenico Quaranta: 6PM Your Local Time has been conceived as an opportunity, for the organizing subject, to bring its network of relationships to visibility and to improve it. It’s not an exhibition with a topic, but a social network made visible. To put it simply: our identity is defined not just by what we do, but also by the people we hang out with. After organizing 6PM Your Local Time Europe, the Link Art Center would like to take a step back and offer the platform to other organizing subjects, to allow them to show off their networks as well. So, what we are doing now is preparing a long list of institutions, galleries and artists we made love with in the past or we’d like to make love with in the future, and inviting them to participate in the project. We won’t launch an open call, but we have already made the event public, saying that anyone interested in participating is welcome to submit a proposal. We won’t accept just anybody, but we would be happy to get in touch with people we didn’t know. After finalizing the list of participants, we will work on all the organizational stuff: informing them about the basic rules of the game, gathering information about the events, answering questions, etc. On the other hand, we have of course to work on the presentation. While every participant presents an event of her choice, the organizer of a 6PM Your Local Time event has to present the platform event to its local audience, as an ongoing installation / performance. We are from Brescia, Italy, and that’s where we will make our presentation. We made an agreement with MusicalZOO, a local festival of art and electronic music, in order to co-produce the presentation and have access to their audience. This is what determined the date of the event in the first place. 
Since the festival takes place outdoors during the summer, we are working with them on designing a temporary office from which we can coordinate the event, stay in touch with the participants and talk with the audience, and a video installation in which the live stream of pics and videos will be displayed. Since we are expecting participants from Portugal to the Russian Federation, the event will start around 5 PM and will follow the various opening events until late at night. One potential reference for this kind of presentation may be those (amazing) telecommunication projects that took place in the Eighties: Robert Adrian’s The World in 24 Hours, organized at Ars Electronica in 1982; the Planetary Network set up in 1986 at the Venice Biennale; and even Nam June Paik’s satellite communication project Good Morning, Mr. Orwell (1984). Left to Right – Enrico Boccioletti, Kim Asendorf, Ryder Ripps, Kristal South, Evan Roth Daniel Rourke: Your exhibition Unoriginal Genius, featuring the work of 17 leading net and new media artists, was the last project to be hosted in the Carroll/Fletcher Project Space (closing November 22nd, 2014). Could you tell us more about the role you consider ‘genius’ plays in framing contemporary art practice? Domenico Quaranta: The idea of genius still plays an important role in Western culture, and not just in the field of art. Whether we are talking about the Macintosh, Infinite Jest, a space trip or Nymphomaniac, we are always celebrating an individual genius, even if we know perfectly well that there is a team and a concerted action behind each of these things. Every art world is grounded in the idea that there are gifted people who, given specific conditions, can produce special things that are potentially relevant for anybody. This is not a problem in itself – what’s problematic are some corollaries to our traditional idea of genius – namely “originality” and “intellectual property”. 
The first claims that a good work of creation is new and doesn’t depend on previous work by others; the second claims that an original work belongs to the author. In my opinion, creation never worked this way, and I’m totally unoriginal in saying this: hundreds of people, before and alongside me, have said that creating consists in taking chunks of available material and assembling them in ways that, in the best cases, allow us to take a small step forward from what came before. But in the meantime, entire legal systems have been built upon these misguided beliefs; and what’s happening now is that, while on the one hand the digitalization of the means of production and dissemination allows us to look at this process with unprecedented clarity, on the other hand these regulations have evolved in such a way that they may eventually slow down or stop the regular evolution of culture, which is based on the exchange of ideas. We – and creators in particular – have to fight this situation. But Unoriginal Genius shouldn’t be read in such an activist way. It is just a small attempt to show how the process of creation works today, in the shared environment of a networked computer, and to bring this before a gallery audience. Left to Right – Kim Asendorf, Ryder Ripps, Kristal South, Evan Roth Daniel Rourke: So much online material ‘created’ today is free-flowing and impossible to trace back to an original author, yet the tendency to attribute images, ideas or ‘works’ to an individual still persists – as it does in Unoriginal Genius. I wonder whether you consider some of the works in the show more liberated from authorial constraints than others? That is, which are the works that appear to make themselves, floating and mutating regardless of particular human (artist) intentions? Domenico Quaranta: Probably Museum of the Internet is the one that best fits your description. 
Everybody can contribute anonymously to it simply by dropping images on the webpage; the authors’ names are not available on the website, and there is no link to their homepages. It’s so simple, so necessary and so pure that one might think it had always existed out there in some form or another. And in a way it did, because the history of the internet is full of projects that invite people to do more or less the same. Left to Right – Brout & Marion, Gervais & Magal, Sara Ludy Daniel Rourke: 2014 was an exciting year for the recognition of digital art cultures, with the appointment of Dragan Espenschied as lead Digital Conservator at Rhizome, the second Paddles On! auction of digital works in London, names like Hito Steyerl and Ryan Trecartin moving up ArtReview’s power list, and projects like Kenneth Goldsmith’s ‘Printing out the Internet’ highlighting the increasing ubiquity – and therefore arguable fragility – of web-based cultural aggregation. I wondered what you were looking forward to in 2015 – apart from 6PM YLT of course. Where would you like to see the digital/net/new media arts 12 months from now? Domenico Quaranta: On the moon, of course! Joking aside: I agree that 2014 was a good year for the media arts community, part of a general positive trend over the last few years. Other highlights, in no particular order, might include: the September 2013 issue of Artforum, on “Art and Media”, and the discussion sparked by Claire Bishop’s essay; Cory Arcangel discovering and restoring Andy Warhol’s lost digital files from floppy disks; Ben Fino-Radin becoming digital conservator at MoMA, New York; JODI winning the Prix Net Art; the Barbican doing a show on the Digital Revolution with Google. Memes like post-internet, post-digital and the New Aesthetic had negative side effects, but they helped establish digital culture in the mainstream contemporary art discourse and bring to prominence some artists formerly known as net artists. 
In 2015, the New Museum Triennial will be curated by Lauren Cornell and Ryan Trecartin, and DIS has been announced as curator of the 9th Berlin Biennale in 2016. All this looks promising, but one thing I have learned from the past is to be careful with optimistic judgements. The 21st century started with a show called 010101: Art in Technological Times, organized by SFMOMA. The same year, net art entered the Venice Biennale, the Whitney organized Bitstreams and Data Dynamics, and the Tate Art and Money Online. Later on, the internet was declared dead, and it took years for the media art community to regain some prominence in the art discourse. The situation now is very different: a lot has been done at all levels (art market, institutions, criticism), and the interest in digital culture and technologies is not (only) the result of hype and of big money flushed into museums by corporations. But still, where are we really? The first Paddles On! auction belongs to history because it helped sell the first website ever offered at auction; the second mainly sold digital and analogue paintings. Digital Revolution was welcomed with sentences like: “No one could fault the advances in technology on display, but the art that has emerged out of that technology? Well, on this showing, too much of it seems gimmicky, weak and overly concerned with spectacle rather than meaning, or making a comment on our culture.” (The Telegraph) The upcoming New Museum Triennial will include artists like Ed Atkins, Aleksandra Domanovic, Oliver Laric, K-HOLE and Steve Roggenbuck, but Lauren and Ryan did their best to avoid partisanship. There’s no criticism in this statement; actually, I would have done exactly the same, and I’m sure it will be an amazing show that I can’t wait to see. We just shouldn’t expect too much from it in terms of “digital art recognition”. 
So, to put it briefly: I’m sure digital art and culture are slowly changing the arts, and that this revolution will be dramatic; but it won’t take place in 2015.

http://www.6pmyourlocaltime.com/

]]>
Wed, 08 Apr 2015 03:57:20 -0700 http://www.furtherfield.org/features/interviews/interview-domenico-quaranta
<![CDATA[Can Artists Help Us Reboot Humanism in an Over-Connected Age?]]> http://artinfo.com/news/story/800410/can-artists-help-us-reboot-humanism-in-an-over-connected-age

How does aesthetic experience fare in such an environment? Within art-tech circles, the buzz these days is about something called the “New Aesthetic,” a coinage of James Bridle, who launched a Tumblr of the same name dedicated to aggregating phenomena that blur together digital culture and real-world design, and that seem characteristic of the present’s plugged-in sensibility. In his response to the “New Aesthetic,” techno-pundit Bruce Sterling takes it to task for lacking any rigor or specificity, and for basically being a muesli of wicked cool images. My response to this response would be that it is this lack of rigor that makes this Aesthetic characteristically New. That’s the aesthetics of the shallows; that’s an avant-garde that’s been programmed to speed-read — an aggregation of cool-looking things, with little to no logical connection, brought to you via Tumblr.

]]>
Wed, 25 Apr 2012 16:44:07 -0700 http://artinfo.com/news/story/800410/can-artists-help-us-reboot-humanism-in-an-over-connected-age
<![CDATA[Michel Serres on the word 'human']]> http://www.universite-du-si.com/en/conferences/8-paris-usi-2011/sessions/961-michel-serres

The son of a bargeman, Michel Serres joined the École Navale in 1949 and the École Normale Supérieure in 1952, where he obtained the agrégation in philosophy in 1955. From 1956 to 1958 he served as a naval officer: the Atlantic squadron, the reopening of the Suez Canal, Algeria, and the Mediterranean squadron. Michel Serres defended his thesis in 1968 and taught philosophy in Clermont-Ferrand, at Vincennes (Paris I) and at Stanford University. In his books he focuses, among other themes, on the history of the sciences (“Hermès”, 1969-1980). His philosophy, concerning sensibility as much as conceptual intelligence, searches for possible junctions between the exact sciences and the social sciences. He was appointed to the Académie Française in 1990 and became a commandeur of the Légion d’honneur. A rigorous epistemologist, he is also concerned with education and the diffusion of knowledge. 

]]>
Wed, 06 Jul 2011 15:15:38 -0700 http://www.universite-du-si.com/en/conferences/8-paris-usi-2011/sessions/961-michel-serres