MachineMachine /stream - tagged with web

The Thoughts of a Spiderweb | Quanta Magazine

Millions of years ago, a few spiders abandoned the kind of round webs that the word “spiderweb” calls to mind and started to focus on a new strategy. Before, they would wait for prey to become ensnared in their webs and then walk out to retrieve it.

Wed, 06 Sep 2017 03:24:19 -0700
The Dark Web Is Mostly Full of Garbage

The dark web—the portion of the deep web only accessible through specific software—exists to serve the needs of hackers-for-hire, hitmen, internet drug kingpins, child pornographers, and their inevitable customers. That’s the public consensus. Then there’s the counter-narrative.

Wed, 21 Sep 2016 10:15:13 -0700
<![CDATA["She wore a USB cord instead of a necklace": whatever happened to Cyberfeminism?]]>

The movement was young, energetic, educated, and art school-heavy. Above all it was “positive”: both cyber-positive and sex-positive. Sometime in the late 1990s, I met someone else called Joanna Walsh. The fact that this is also my name drew me to study her closely.

Fri, 27 May 2016 12:30:26 -0700
UC Davis paid $175,000 or more to scrub police pepper spray incident from web searches / Boing Boing

Looks like the geniuses who run UC Davis never Googled the words “Streisand Effect.”

Sun, 17 Apr 2016 06:02:40 -0700
The Research Pirates of the Dark Web - The Atlantic

There’s a battle raging over whether academic research should be free, and it’s overflowing into the dark web.

Tue, 16 Feb 2016 08:17:50 -0800
Theorizing the Web 2015: 'Living with Algorithms' Panel

Note: Due to technical issues, Solon Barocas' presentation 'The Alterity of Algorithms' was not recorded.

Presider: Sara M. Watson
Hashmod: Ava Kofman

Daniel Rourke: Synthetic Subjects
Natalie Kane: Ghost Stories
Nick Seaver: Traps: Algorithms and the Anthropology of Technology

Mon, 23 Nov 2015 14:13:48 -0800
Algorithmic Narratives and Synthetic Subjects (paper)

This was the paper I delivered at The Theorizing the Web Conference, New York, 18th April 2015. This video of the paper begins part way in, and misses out some important stuff. I urge you to watch the other, superb, papers on my panel by Natalie Kane, Solon Barocas, and Nick Seaver. A better video is forthcoming. I posted this up partly in response to this post at Wired about the UK election, Facebook’s echo-chamber effect, and other implications well worth reading into.

Data-churning algorithms are integral to our social and economic networks. Rather than replace humans, these programs are built to work with us, allowing the distinct strengths of human and computational intelligences to coalesce. As we are submerged in the era of ‘big data’, these systems have become more and more common, concentrating every terabyte of raw data into meaningful arrangements more easily digestible by high-level human reasoning.

A company calling themselves ‘Narrative Science’, based in Chicago, have established a profitable business model on this relationship. Their slogan, ‘Tell the Stories Hidden in Your Data’, [1] is aimed at companies drowning in spreadsheets of cold information: a promise that Narrative Science can ‘humanise’ their databases with very little human input. Kristian Hammond, the company’s Chief Technology Officer, claims that within 15 years over 90% of all news stories will be written by algorithms. [2] But rather than replacing the jobs that human journalists now undertake, Hammond claims the vast majority of this ‘robonews’ output will report on data not currently covered by traditional news outlets.

One family-friendly example of this is the coverage of little-league baseball games. Very few news organisations have the resources, or the desire, to hire a swathe of human journalists to write up every little-league game. Instead, Narrative Science offer leagues, parents and their children a miniature summary of each game, gleaned from match statistics uploaded by diligent little-league attendees and then written up in a variety of journalistic styles.

In their 2013 book ‘Big Data’, Viktor Mayer-Schönberger, Oxford University Professor of internet governance, and Kenneth Cukier, ‘data editor’ of The Economist, tell us excitedly about another data-aggregation company, Prismatic, who:

…rank content from the web on the basis of text analysis, user preferences, social network-popularity, and big-data analysis. [3]

According to Mayer-Schönberger and Cukier this makes Prismatic able ‘to tell the world what it ought to pay attention to better than the editors of the New York Times’. [4] A situation, Steven Poole reminds us, we can little argue with so long as we agree that popularity underlies everything that is culturally valuable.
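To make the mechanism concrete, here is a minimal sketch of the kind of template-driven data-to-narrative generation described above, turning a little-league box score into a short, parent-friendly summary. All the field names, templates and example figures are hypothetical; Narrative Science's actual pipeline is proprietary and far more elaborate.

```python
# A toy "robonews" generator: pick a story angle from the data, then fill it in.
# Hypothetical data model, not Narrative Science's real system.

def game_story(stats: dict) -> str:
    """Choose a narrative template from the data, then fill in the details."""
    home, away = stats["home_team"], stats["away_team"]
    hs, aws = stats["home_score"], stats["away_score"]
    winner, loser = (home, away) if hs > aws else (away, home)
    margin = abs(hs - aws)

    # The 'angle' of the story is itself chosen from the data.
    if margin >= 5:
        lead = f"{winner} cruised past {loser}, {max(hs, aws)}-{min(hs, aws)}."
    elif margin >= 1:
        lead = f"{winner} edged {loser} in a tight {max(hs, aws)}-{min(hs, aws)} finish."
    else:
        lead = f"{home} and {away} played to a {hs}-{aws} draw."

    # Parents want to read about their own children, not just the victors,
    # so every player with a notable line gets a sentence of their own.
    highlights = [
        f"{p['name']} went {p['hits']}-for-{p['at_bats']} at the plate."
        for p in stats["players"] if p["hits"] >= 2
    ]
    return " ".join([lead] + highlights)


example = {
    "home_team": "Riverside Robins", "away_team": "Hillcrest Hawks",
    "home_score": 7, "away_score": 2,
    "players": [
        {"name": "Sam", "hits": 3, "at_bats": 4},
        {"name": "Ada", "hits": 1, "at_bats": 3},
    ],
}
print(game_story(example))
```
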

Data is now the lifeblood of technocapitalism: a vast, endless influx of information flowing in from the growing universe of networked and internet-connected devices. As many of the papers at Theorizing the Web attest, our environment is more and more founded on systems whose job it is to mediate our relationship with this data. Technocapitalism still appears to respond to Jean-François Lyotard’s formulation of Postmodernity: whether something is true has less relevance than whether it is useful.

In 1979 Lyotard described the Postmodern Condition as a change in “the status of knowledge” brought about by new forms of techno-scientific and techno-economic organisation. If a student could be taught effectively by a machine, rather than by another human, then the most important thing we could give the next generation was what he called “elementary training in informatics and telematics”. In other words, as long as our students are computer literate, “pedagogy would not necessarily suffer”. [5] The next passage – where Lyotard marks the Postmodern turn from the true to the useful – became one of the book’s most widely quoted, and it is worth repeating here at some length:

It is only in the context of the grand narratives of legitimation – the life of the spirit and/or the emancipation of humanity – that the partial replacement of teachers by machines may seem inadequate or even intolerable. But it is probable that these narratives are already no longer the principal driving force behind interest in acquiring knowledge. [6]

Here I want to pause to set in play at least three elements from Lyotard’s text that colour this paper. Firstly, the historical confluence between technocapitalism and the era now considered ‘postmodern’. Secondly, the association of ‘the grand narrative’ with modern and pre-modern conditions of knowledge. And thirdly, the idea that the relationship between the human and the machine – or computer, or software – is generally one-sided: we may shy away from the idea of leaving the responsibility of our children’s education to a machine, but Lyotard’s position presumes that since the machine was created and programmed by humans, it will necessarily be understandable, and thus controllable, by humans.

Today, Lyotard’s vision of an informatically literate populace has more or less come true. Of course we do not completely understand the intimate workings of all our devices or the software that runs them, but the majority of the world’s population has some form of regular relationship with systems simulated on silicon. And as Lyotard himself made clear, the uptake of technocapitalism, and therefore of the devices and systems it propagates, is piecemeal and difficult to predict or trace. At the same time as Google’s fleet of self-driving vehicles is let loose on Californian state highways, models of mobile phone designed ten or more years ago are allowing farming communities in parts of sub-Saharan Africa to aggregate their produce into quantities with greater potential to make a profit on the world market. As Brian Massumi remarks, network technology allows us the possibility of “bringing to full expression a prehistory of the human”, a “worlding of the human” that marks the “becoming-planetary” of the body itself. [7] This “worlding of the human” represents what Edmund Berger argues is the death of the Postmodern condition itself:

[T]he largest bankruptcy of Postmodernism is that the grand narrative of human mastery over the cosmos was never unmoored and knocked from its pulpit. Instead of making the locus of this mastery large aggregates of individuals and institutions – class formations, the state, religion, etc. – it simply has shifted the discourse towards the individual his or herself, promising them a modular dreamworld for their participation… [8]

Algorithmic narratives appear to continue this trend. They are piecemeal, tending to feed back users’ dreams, wants and desires through carefully aggregated, designed and packaged narratives for individual ‘use’: a world not of increasing connectivity and understanding between entities, but a network worlded to each individual’s data-shadow. This situation is reminiscent of the problem Eli Pariser calls the ‘filter bubble’, or the ‘you loop’, a prevalent outcome of social media platforms tweaked and personalised by algorithms to echo back at the user exactly the kind of thing they want to hear. As algorithms develop in complexity, the stories they tell us about the vast sea of data will tend to become more and more enamouring, more and more palatable.

Like some vast synthetic evolutionary experiment, those algorithms that devise narratives users dislike will tend to be killed off in the feedback loop, in favour of other algorithms whose turn of phrase, or ability to stoke our egos, is more pronounced. For instance, Narrative Science’s early algorithms for creating little-league narratives tended to focus on the victors of each game. What Narrative Science found is that parents were more interested in hearing about their own children, the tiny ups and downs that made the game significant to them. So the algorithms were tweaked in response. Again, to quote Narrative Science’s chief scientist Kris Hammond:

These are narratives generated by systems that understand data, that give us information to support the decisions we need to make about tomorrow. [9]

Whilst we can program software to translate the informational nuances of a baseball game, or of internet social trends, into human-palatable narratives, larger social, economic and environmental events also tend to get pushed through an algorithmic meatgrinder to make them more palatable. The ‘tomorrow’ that Hammond claims his company can help us prepare for is one that, presumably, companies like Narrative Science and Prismatic will play an ever larger part in realising.

In her recently published essay ‘Crisis and the Temporality of Networks’, Wendy Chun reminds us of the difference between the user and the agent in the machinic assemblage:

Celebrations of an all-powerful user/agent – ‘you’ as the network, ‘you’ as the producer – counteract concerns over code as law as police by positing ‘you’ as the sovereign subject, ‘you’ as the decider. An agent, however, is one who does the actual labor; hence an agent is one who acts on behalf of another. On networks, the agent would seem to be technology, rather than the users or programmers who authorize actions through their commands and clicks. [10]

In order to unpack Chun’s proposition here we need only look at two of the most powerful and impactful algorithms of the last ten years of the web: firstly, Amazon’s recommendation system, which I assume you have all interacted with at some point; and secondly, Facebook’s news feed algorithm, which ranks and sorts the posts on your personalised stream. Both these algorithms rely on a community of user interactions to establish a hierarchy of products, or posts, based on popularity. Both also function in response to users’ past activity, and both, of course, have been tweaked and altered over time by the design and programming teams of the respective companies. As we are all no doubt aware, one of the most significant driving principles behind these extraordinarily successful pieces of code is capitalism itself: the drive for profit, and the bearing that drive has on distinguishing between a successful and a failing company, service or product.

Wendy Chun’s reminder that those who carry out the action, who program and click, are not the agents here should give us solace. We are positioned as sovereign subjects over our data because that idea is beneficial to the propagation of the ‘product’. Whether we are told how well our child has done at baseball, or what particular kinds of news stories we might like, personally, to read right now, it is to the benefit of technocapitalism that those narratives are positive, palatable and uncompromising.

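As a rough illustration of the feedback loop just described, here is a toy ranking sketch in which items are scored by a blend of global popularity and one user's own past engagement, and every click feeds back into the next ranking. The scoring formula, weights and data structures are hypothetical; this is not Amazon's or Facebook's actual code.

```python
# A toy popularity-plus-personalisation ranker with a feedback step.
# Everything here is an illustrative assumption, not a real platform's logic.

from collections import defaultdict

popularity = defaultdict(float)   # global engagement per item id
affinity = defaultdict(float)     # one user's engagement per topic

def score(item: dict) -> float:
    # Popular things rise, and things resembling what you already clicked
    # rise further: the 'you loop' in two terms.
    return popularity[item["id"]] + 2.0 * affinity[item["topic"]]

def rank(feed: list) -> list:
    return sorted(feed, key=score, reverse=True)

def register_click(item: dict) -> None:
    # The feedback step: today's engagement reshapes tomorrow's feed.
    popularity[item["id"]] += 1.0
    affinity[item["topic"]] += 1.0

feed = [
    {"id": "a", "topic": "baseball"},
    {"id": "b", "topic": "climate"},
    {"id": "c", "topic": "baseball"},
]
register_click(feed[0])                # the user engages with one baseball story...
print([i["id"] for i in rank(feed)])   # ...and baseball now crowds out the climate item
```
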
However the aggregation and dissemination of big data affects our lives over the coming years, the likelihood is that at the surface – on our screens and ubiquitous handheld devices – everything will seem rosy, comfortable, and suited to the ‘needs’ and ‘use’ of each sovereign subject.

TtW15 #A7 @npseaver @nd_kane @s010n @smwat

— Daniel Rourke (@therourke) April 17, 2015

So to finish I just want to gesture towards a much, much bigger debate that I think we need to have about big data, technocapitalism and its algorithmic agents. To do this I want to read a short paragraph which, as far as I know, was not written by an algorithm:

Surface temperature is projected to rise over the 21st century under all assessed emission scenarios. It is very likely that heat waves will occur more often and last longer, and that extreme precipitation events will become more intense and frequent in many regions. The ocean will continue to warm and acidify, and global mean sea level to rise. [11]

This is from a document entitled ‘Synthesis Report for Policy Makers’, drafted by the Intergovernmental Panel on Climate Change – another organisation that relies on a transnational network of computers, sensors and programs capable of modelling atmospheric, chemical and wider environmental processes to collate data on human environmental impact. Ironically, then, perhaps the most significant tool we have for understanding the world at present is big data. Never before has humankind had so much information to help us make decisions and enact changes on our world, our society and ourselves. But the problem is that some of the stories big data has to tell us are too big to be narrated; they are just too big to be palatable. To quote Edmund Berger again:

For these reasons we can say that the proper end of postmodernism comes in the gradual realization of the Anthropocene: it promises the death of the narrative of human mastery, while erecting an even grander narrative. If modernism was about victory of human history, and postmodernism was the end of history, the Anthropocene means that we are no longer in a “historical age but also a geological one. Or better: we are no longer to think history as exclusively human…” [12]

I would argue that the ‘grand narratives of legitimation’ Lyotard claimed we left behind in the move to Postmodernity will need to return in some way if we are to manage big data meaningfully. Crises such as catastrophic climate change will never be made palatable in the feedback between users, programmers and technocapitalism. Instead, we need to revisit Lyotard’s distinction between the true and the useful. Rather than ask how we can make big data useful for us, we need to ask what grand story we want that data to tell us.

References

[1] Source:, accessed 15/10/14
[2] Steven Levy, “Can an Algorithm Write a Better News Story Than a Human Reporter?”, WIRED, April 24, 2012.
[3] “Steven Poole – On Algorithms,” Aeon Magazine, accessed May 8, 2015.
[4] Ibid.
[5] Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge, repr., Theory and History of Literature 10 (Manchester: Manchester University Press, 1992), 50.
[6] Ibid., 51.
[7] Brian Massumi, Parables for the Virtual: Movement, Affect, Sensation (Duke University Press, 2002), 128.
[8] Edmund Berger, “The Anthropocene and the End of Postmodernism,” Synthetic Zero, n.d.
[9] Source:, accessed 15/10/14
[10] Wendy Chun, “Crisis and the Temporality of Networks,” in The Nonhuman Turn, ed. Richard Grusin (Minneapolis: University of Minnesota Press, 2015), 154.
[11] Rajendra K. Pachauri et al., “Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change,” 2014.
[12] Berger, “The Anthropocene and the End of Postmodernism.”

Fri, 08 May 2015 04:02:51 -0700
Andrew O’Hagan · The Lives of Ronald Pinn · LRB 8 January 2015

I first went to Camberwell New Cemetery about six years ago, looking for the grave of a young man called Melvin Bryan, a petty criminal who died after being stabbed at a drug-house in Edmonton.

Mon, 02 Feb 2015 11:14:00 -0800
Use of The City as a metaphor for the Internet/Web

I am looking for examinations of the Internet and World Wide Web that use the structure and/or history of the city as a metaphor. I'm afraid I have no original example of this phenomenon to kick things off. I have this image in my head of 'the city' that always goes back to Plato and his Republic. Plato's city was a physical, social construction, as well as a philosophical metaphor, at one and the same time. It feels that many have talked about the Internet in similar, overlapping, terms.

(It need not be 'the city as metaphor', rather any social, physical space that humans build and live in will suffice. Also, metonymy rather than metaphor would be great.)

Writings that explore the political history of the city, its technological expansion, that consider the city as a nexus for theories of human civilisation, of emergence perhaps, of structure, social and political control and, perhaps most importantly, of space vs place - all as a way to think about similar phenomena taking place online. The Internet as emerging network with similarities to the city; the World Wide Web considered as spatio-social metaphor?

etc. etc.

Any ideas?

Fri, 12 Apr 2013 10:19:31 -0700
Where Memes Really Come From

Though history will probably remember Richard Dawkins as the activist who spearheaded a new atheist movement, there is something far more famous and important that he invented — and few people know it. He is the guy who first popularized the idea ...

Tue, 29 Jan 2013 15:08:16 -0800
Rhizome | Using, Using, Used

Within the pages of Digital Folklore Reader, Olia Lialina, one of the book's editors, refers to a claim by the social media researcher Danah Boyd that some American teenagers identify as Facebook and others as MySpace, preferring a conformist and clean interface persona, or a rebellious and visually…

Tue, 22 Jan 2013 15:51:00 -0800
Digital Life Is A Hoax…Because There’s No Such Thing » Cyborgology

I've poked fun at these lazy op-eds before and, indeed, it must be tempting to retreat into the safe conceptual territory of "The Internet is fake!" when a juicy story of lies, deception, and computers makes headlines. The Te'o case is an almost unbelievable account of a football star allegedly tricked…

Tue, 22 Jan 2013 15:24:00 -0800
On Online 'Becoming' #Pizza

In the hot lava of social media flows, the critical distance between performing and being a ‘somebody’, a subject type, seems to collapse, along with the ‘integrity’ of art’s own space and frame. In the lead up to their exhibition at Arcadia Missa, Web 2.0 artists Jennifer Chan and Ann Hirsch discuss the politics of performance and the productive nausea of networked becoming with art historian Cadence Kinsey and curator Rozsa Farkas.

‘State and Lore (Revolutionizing Desire: A Reclamation of Representation for its Affective Potential)’ is a day of talks and performances hosted by Arcadia Missa at South London Gallery’s Clore studio. Conceived as an interdisciplinary event, State and Lore plans to bring together artists, writers and academics who are currently working through questions of representation and embodiment in relation to digital media and Web 2.0 technology. There will be a live piece by Jesse Darling, a performance by Ann Hirsch featuring excerpts related to her Scandalishious project, and a talk from Paul Kneale. There will be Skype talks with Jennifer Chan and Faith Holland. The symposium has been put together in conjunction with And Lore, an exhibition of works by Chan and Hirsch at Arcadia Missa, a gallery, publishers and studios in Peckham, South East London.

Tue, 20 Nov 2012 07:13:13 -0800
Why Do Sign Language Interpreters Look So Animated?

As New York City Mayor Bloomberg gave numerous televised addresses about the preparations the city was making for Hurricane Sandy, and then the storm’s aftermath, he was joined at the podium by a sign language interpreter, who immediately became a Twitter darling. People watching the addresses tweeted that she was "amazing," "mesmerizing," "hypnotizing," and "AWESOME." Soon, her name was uncovered—Lydia Callis—and animated .gifs of her signing were posted. A couple of hours later, a Tumblr was born. New York magazine called her "Hurricane Sandy's breakout star."

Callis was great, but not because she was so lively and animated. She was great because she was performing a seriously difficult mental task—simultaneously listening and translating on the spot—in a high-pressure, high-stakes situation. Sure, she was expressive, but that's because she was speaking a visual language. Signers are animated not because they are bubbly and energetic, but because sign language uses face and body movements as part of its grammar.

Sun, 11 Nov 2012 01:02:08 -0800
Lies, Damn Lies, and Twitter Bots

I’m particularly interested in the political uses of technology-enabled deception—uses that I suspect are likely to become more prevalent in the near future.

Two of my rules for constructing useful and interesting scenarios are to (a) think about what happens when seemingly disparate changes smash together, and (b) imagine how new developments might be misused. In both cases, the goal is to uncover something unexpected, but (upon reflection) disturbingly plausible. I’d like to lay out for you the chain of connections that lead me to believe that we’re on the verge of something big.

Tue, 18 Sep 2012 06:18:00 -0700
How Google Builds Its Maps—and What It Means for the Future of Everything

Behind every Google Map, there is a much more complex map that's the key to your queries but hidden from your view. The deep map contains the logic of places: their no-left-turns and freeway on-ramps, speed limits and traffic conditions. This is the data that you're drawing from when you ask Google to navigate you from point A to point B -- and last week, Google showed me the internal map and demonstrated how it was built. It's the first time the company has let anyone watch how the project it calls GT, or "Ground Truth," actually works.

Google opened up at a key moment in its evolution. The company began as an online search company that made money almost exclusively from selling ads based on what you were querying for. But then the mobile world exploded. Where you're searching from has become almost as important as what you're searching for. Google responded by creating an operating system, brand, and ecosystem in Android that has become the only significant rival to Apple's iOS.
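For a sense of what the "logic of places" in the deep map amounts to in practice, here is a minimal sketch of a road graph annotated with speed limits and turn restrictions, searched for the fastest A-to-B route. The graph, the restriction format and the search routine are hypothetical illustrations, not Google's Ground Truth data model.

```python
# A toy "deep map": edges carry length and speed limit, junctions carry turn bans,
# and routing is a shortest-time search that respects those bans.
# All data and representations here are assumptions for illustration.

import heapq

# edges: (from_node, to_node) -> (length_km, speed_kmh)
edges = {
    ("A", "B"): (1.0, 50), ("B", "C"): (1.0, 50),
    ("A", "D"): (1.0, 30), ("D", "C"): (1.0, 30),
}
# Turn restriction: arriving at B from A, you may not continue to C
# (a "no left turn" at that junction), so routing must detour via D.
banned_turns = {("A", "B", "C")}

def travel_minutes(edge):
    length_km, speed_kmh = edges[edge]
    return 60.0 * length_km / speed_kmh

def fastest(start, goal):
    # Dijkstra over (previous, current) states so turn bans can be enforced.
    queue = [(0.0, start, "", [start])]   # (minutes, node, came_from, path)
    settled = {}
    while queue:
        minutes, node, prev, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if settled.get((prev, node), float("inf")) <= minutes:
            continue
        settled[(prev, node)] = minutes
        for (u, v) in edges:
            if u != node or (prev, node, v) in banned_turns:
                continue
            heapq.heappush(queue, (minutes + travel_minutes((u, v)), v, node, path + [v]))
    return float("inf"), []

# The banned A -> B -> C turn forces the slower but legal route A -> D -> C.
print(fastest("A", "C"))
```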

Wed, 12 Sep 2012 15:23:00 -0700
Digital Curation Resource Guide

Digital curation involves selection and appraisal by creators and archivists; evolving provision of intellectual access; redundant storage; data transformations; and, for some materials, a commitment to long-term preservation. Digital curation is stewardship…

Sat, 25 Aug 2012 02:53:00 -0700
At ROFLCon, watching memes go mainstream

What is ROFLCon? It's a biennial convention (this year was its third) held to celebrate and discuss internet memes and the celebrity that is often created alongside them. This year's invited guests included Chuck "Nope" Testa, Antoine Dodson, who became famous when he appeared on local news after a home invasion, Paul "Bear" Vasquez, AKA the "Double Rainbow" guy, and "Tron Guy" Jay Maynard. There are also internet celebs of a different ilk — people who have created loved and admired "works," like Chris Torres, creator of Nyan Cat, Matt Oswald, creator of the "Me Gusta" guy, or film editor Duncan Robson, creator of the very well known supercut "Let's Enhance." There were also academics, thinkers, and media on hand to round out the very diverse crew. Oh, and Scumbag Steve was there.

Tue, 08 May 2012 14:17:47 -0700
The Web Browser As Aesthetic Framework: Why Digital Art Today Looks Different

Collective cultural memory is the foundation on which the significance of a creative practice stands. As summarized in Emerson Rosenthal’s post for #DIGART week, online collections and exhibition spaces have been around since the pre-web BBS years—artists have been online since day one, and this is not to even begin to mention the computer-based creative practices that date back to the mid-20th Century. Then why, in the face of this history, do web-based creative practices (and so too, markets) seem to suffer from a case of eternal amnesia or perpetual newness? In this post for #DIGART week, I propose that an overlooked reality is that half the history of this medium lies in the discarded machines and software of the past.

When anyone sits down to code, they interface with and work within various abstractions and frameworks. For artists who make work for the web, the ultimate and final of these is the web browser—it is the point of delivery and consumption. It renders, encapsulates, and mediates the viewer’s experience of the web. More than a utility, the web browser is an aesthetic and cultural framework with implicit stylistic and functional biases. It is the white cube. It is a museum in flux, whose aesthetic paradigms have drifted over the course of twenty plus years. The web browser itself possesses inherent artifactual significance.

Tue, 08 May 2012 14:14:43 -0700
Just how big are porn sites?

"It’s probably not unrealistic to say that porn makes up 30% of the total data transferred across the internet"

It is a truth universally acknowledged, that a person in possession of a fast internet connection must be in want of some porn.

While it’s a difficult domain to penetrate — hard numbers are few and far between — we know for a fact that porn sites are some of the most trafficked parts of the internet. According to Google’s DoubleClick Ad Planner, which tracks users across the web with a cookie, dozens of adult destinations populate the top 500 websites. Xvideos, the largest porn site on the web with 4.4 billion page views per month, is three times the size of CNN or ESPN, and twice the size of Reddit. LiveJasmin isn’t much smaller. YouPorn, Tube8, and Pornhub — they’re all vast, vast sites that dwarf almost everything except the Googles and Facebooks of the internet.

Tue, 10 Apr 2012 08:06:36 -0700