MachineMachine /stream - tagged with post en-us LifePress
<![CDATA[You're wrong about how the internet fuels conspiracy theories]]>

Conspiracy theories are popular, and there is no doubt that the internet has fuelled them. From the theory that 9/11 was an inside job to the idea that reptilian humanoids rule the world, conspiracy theories have found a natural home online.

Sat, 07 Jul 2018 08:32:44 -0700
<![CDATA[Why memes matter: our best shot to talk about the world in the post-truth era - Pulsar Platform]]>

We have entered a post-truth, post-authenticity era, in which dichotomies like true/false or real/fake no longer serve us very well, especially on social media platforms.

Sat, 07 Jul 2018 08:32:43 -0700
<![CDATA[Memes Are For Tricksters: The Biology of Disinformation - Mondo 2000]]>

Back in 1990, when MONDO 2000 magazine promised Screaming Memes on its cover, it was more or less a secret argot winking at our technohip Mondoid readers.

Sun, 24 Jun 2018 02:18:30 -0700
<![CDATA[Hito Steyerl | Politics of Post-Representation «DIS Magazine]]>

From the militarization of social media to the corporatization of the art world, Hito Steyerl’s writings represent some of the most influential bodies of work in contemporary cultural criticism today.

Sat, 14 Apr 2018 05:54:20 -0700
<![CDATA[Before and After Comparisons of the Visual Effects in Mad Max: Fury Road]]>

One of the big Hollywood blockbusters to hit the silver screen this year has been Mad Max: Fury Road, which has gotten rave reviews, with many praising the insane and complex visual design of the film.

Sun, 31 May 2015 05:38:41 -0700
<![CDATA[Four Notes Towards Post-Digital Propaganda | post-digital-research]]>

“Propaganda is called upon to solve problems created by technology, to play on maladjustments and to integrate the individual into a technological world” (Ellul xvii).

How might future research into digital culture approach a purported “post-digital” age? How might this be understood?


A problem comes from the discourse of ‘the digital’ itself: a moniker which points towards units of Base-2 arbitrary configuration, impersonal architectures of code, massive extensions of modern communication and ruptures in post-modern identity. Terms are messy, and it has never been easy to establish a ‘post’ from something, when pre-discourse definitions continue to hang in the air. As Florian Cramer has articulated so well, ‘post-digital’ is something of a loose, ‘hedge your bets’ term, denoting a general tendency to criticise the digital revolution as a modern innovation (Cramer).

Perhaps it might be aligned with what some have dubbed “solutionism” (Morozov) or “computationalism” (Berry 129; Golumbia 8): the former critiquing a Silicon Valley-led ideology oriented towards solving liberalised problems through efficient computerised means. The latter establishing the notion (and critique thereof) that the mind is inherently computable, and everything associated with it. In both cases, digital technology is no longer just a business that privatises information, but the business of extending efficient, innovative logic to all corners of society and human knowledge, condemning everything else through a cultural logic of efficiency.

In fact, there is a good reason why ‘digital’ might as well be a synonym for ‘efficiency’. Before any consideration is assigned to digital media objects (i.e. platforms, operating systems, networks), consider the inception of ‘the digital’ as such: that is, information theory. If information was a loose, shabby, inefficient method of vagueness specific to various media of communication, Claude Shannon compressed all forms of communication into a universal system with absolute mathematical precision (Shannon). Once information became digital, the conceptual leap of determined symbolic logic was set into motion, and with it, the ‘digital’ became synonymous with an ideology of effectivity. No longer would miscommunication be subject to human finitude, nor to matters of distance and time, but only to the limits of entropy and the matter of automating messages through the support of alternating ‘true’ or ‘false’ relay systems.
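Shannon's compression of communication into mathematical precision can be glimpsed in the entropy formula at the heart of his 1948 paper. A minimal sketch in Python (the per-symbol entropy of a message, in bits; the function name and examples are my own, not Shannon's notation):

```python
from collections import Counter
from math import log2

def entropy_bits(message: str) -> float:
    """Shannon entropy in bits per symbol: sum of p(x) * log2(1/p(x))."""
    counts = Counter(message)
    total = len(message)
    return sum((c / total) * log2(total / c) for c in counts.values())

# A perfectly predictable message carries no information at all ...
print(entropy_bits("aaaa"))      # 0.0
# ... while a uniform two-symbol alternation needs one full bit per symbol.
print(entropy_bits("abababab"))  # 1.0
```

The entropy figure is exactly Shannon's ‘limit’ mentioned above: no lossless code can transmit a source in fewer bits per symbol, on average, than its entropy.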

However, it would be quite difficult to envisage any ‘post-computational’ break from such discourses – and with good reason: Shannon’s breakthrough was only systematically effective through the logic of computation. So the old missed encounter goes: Shannon presupposed Alan Turing’s mathematical idea of computation to transmit digital information, and Turing presupposed Shannon’s information theory to understand what his Universal Turing Machines were actually transmitting. The basic theories of both have not changed, but the materials affording greater processing power, extensive server infrastructure and larger storage space have simply increased the means for these ideas to proliferate, irrespective of what Turing and Shannon actually thought of them (some historians even speculate that Turing may have made the link between information and entropy two years before Bell Labs did) (Good).

Thus a ‘post-digital’ reference point might encompass the historical acknowledgment of Shannon’s digital efficiency and Turing’s logic but, by the same measure, open up a space for critical reflection on how such efficiencies have transformed not only work, life and culture but also artistic praxis and aesthetics. This is not to say that digital culture is reducibly predicated on efforts made in computer science, but instead fully acknowledges these structures and accounts for how ideologies propagate reactionary attitudes and beliefs within them, whilst restricting other alternatives which do not fit their ‘vision’. Hence, the post-digital ‘task’ set for us nowadays might consist in critiquing digital efficiency and how it has come to work against commonality, despite transforming the majority of Western infrastructure in its wake.

The purpose of these notes is to outline how computation has imparted an unwarranted effect of totalised efficiency, and to give this effect the label it deserves: propaganda. The fact that Shannon and Turing had multiple lunches together at Bell Labs in 1943, held conversations and exchanged ideas, but not detailed methods of cryptanalysis (Price & Shannon), provides a nice contextual allegory for how digital informatics strategies fail to be transparent.

But in saying this, I do not mean that companies only use digital networks for propagative means (although that happens), but that the very means of computing a real concrete function is constitutively propagative. In this sense, propaganda resembles a post-digital understanding of what it means to be integrated into an ecology of efficiency, and of how technical artefacts are literally enacted as propagative decisions. Digital information often deceives us into accepting its transparency, and into holding it to that account: yet in reality it does the complete opposite, with no given range of judgements available to distinguish manipulation from education, or persuasion from smear. It is the procedural act of interacting with someone else’s automated conceptual principles, embedding pre-determined decisions which not only generate but pre-determine one’s ability to make choices about such decisions, like propaganda.

This might consist in moving from ideological definitions of false consciousness, as an epistemological limit to knowing alternatives within thought, to engaging with real programmable systems which embed such limits concretely, withholding the means to transform them. In other words, propaganda incorporates how ‘decisional structures’ structure other decisions, either conceptually or systematically.


Two years before Shannon’s famous master’s thesis, Turing published what would become the theoretical basis for computation in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” The focus of the paper was to establish the idea of computation within a formal system of logic, which, when automated, would solve particular mathematical problems put into function (Turing, An Application). What is not necessarily taken into account is the mathematical context of that idea: the foundations of mathematics were already precarious well before Turing outlined anything in 1936. Contra the efficiency of the digital, this is a precariousness built into computation from its very inception: the precariousness of solving all problems in mathematics.

The key word of that paper, and its central focus, was the Entscheidungsproblem, or decision problem. Originating from David Hilbert’s mathematical school of formalism, ‘decision’ means something more rigorous than the sorts of decisions made in daily life. It really means a ‘proof theory’: how analytic problems in number theory and geometry could be formalised, and thus efficiently solved (Hilbert 3). Solving a theorem is simply finding a provable ‘winning position’ in a game. Similar to Shannon, ‘decision’ is what happens when an automated system of function is constructed in such a sufficiently complex way that an algorithm can always ‘decide’ a binary, yes or no answer to a mathematical problem, when given an arbitrary input, in a finite amount of time. It does not require ingenuity, intuition or heuristic gambles, just a combination of simple consistent formal rules and a careful avoidance of contradiction.

The two key words there are ‘always’ and ‘decide’. They mark the progressive end-game of twentieth-century mathematicians who, like Hilbert, sought a simple totalising conceptual system to decide every mathematical problem and work towards absolute knowledge. All Turing had to do was make explicit Hilbert’s implicit computational treatment of formal rules, manipulate symbol strings and automate them using an ‘effective’ or “systematic method” (Turing, Solvable and Unsolvable Problems 584) encoded into a machine. This is what Turing’s thesis meant (arrived at independently of Alonzo Church’s equivalent thesis (Church)): anything solvable by a systematic method can be computed by a Turing machine (Turing, An Application), or in Robin Gandy’s words, “[e]very effectively calculable function is a computable function” (Gandy).

Thus effective procedures decide problems, and they resolve puzzles, providing winning positions (like theorems) in the game of functional rules and formal symbols. In Turing’s words, “a systematic procedure is just a puzzle in which there is never more than one possible move in any of the positions which arise and in which some significance is attached to the final result” (Turing, Solvable and Unsolvable Problems 590). The significance, or the winning position, becomes the crux of the matter for the decision: what puzzles or problems are to be decided? This is what formalism attempted to do: encode everything through the regime of formalised efficiency, so that all mathematically inefficient problems are, in principle, ready to be solved. Programs are simply proofs: if it could be demonstrated mathematically, it could be automated.

In 1936, Turing had shown that one sufficiently complex effective procedure (the Universal Turing Machine) could simulate the functional decisions of all the other effective procedures. Ten years later, Turing and John von Neumann would independently show how physical general-purpose computers offered the same thing, and from that moment on, efficient digital decisions manifested themselves in the cultural application of physical materials. Before Shannon’s information theory offered the precision of transmitting information, Hilbert and Turing developed the structure of its transmission in the underlying regime of formal decision.
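The kind of ‘effective procedure’ at stake here can be made concrete with a toy Turing machine simulator. This is a loose sketch, not Turing's 1936 formalism: the table format, helper names and the bit-inverting example machine are my own illustrative assumptions.

```python
def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    """Minimal Turing machine: `table` maps (state, read_symbol) to
    (write_symbol, move, next_state), where move is 'L' or 'R'.
    The machine halts when no rule applies to the current configuration."""
    cells = dict(enumerate(tape))  # sparse tape, blank everywhere else
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in table:
            break  # no applicable rule: the machine halts
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# An effective procedure that inverts every bit, then halts on the blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run_turing_machine(invert, "1011"))  # 0100
```

Any rule table plugged into `run_turing_machine` is one of the ‘decisions’ the essay describes: the procedure's behaviour is fixed entirely in advance by its transition table.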

Yet there was also a non-computational importance here, for Turing was also fascinated by what decisions couldn’t compute. His thesis was quite precise, elucidating that if a mathematical problem could not be proved, a computer was not of any use. In fact, the entire focus of his 1936 paper, often neglected by Silicon Valley cohorts, was to show that Hilbert’s particular decision problem could not be solved. Unlike Hilbert, Turing was not interested in using computation to solve every problem, but in it as a curious endeavour with surprising, intuitive behaviour. Most important of all, Turing’s halting (or printing) problem was influential precisely because it was undecidable: a decision problem which couldn’t be decided.

We can all picture the halting problem, even obliquely. Picture the frustrated programmer or mathematician staring at their screen, waiting to know whether an algorithm will halt and spit out a result, or provide no answer. The computer itself has already determined the answer for us; the programmer just has to know when to give up. But this is a myth, inherited with a bias towards human knowledge, and a demented understanding of machines as infinite calculating engines rather than concrete entities of decision. For reasons there is no space to elaborate here, Turing didn’t understand the halting problem in this way: instead he understood it as a contradictory example of computational decisions failing to decide on each other, on the account that there could never be one totalising decision or effective procedure. There is no guaranteed effective procedure to decide on all the others, and any attempt to build one (or invest in a view which might help build one) either has too much investment in absolute formal reason, or ends up with ineffective procedures.
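The gap between ‘waiting at the screen’ and a genuine decision procedure can be shown concretely. The sketch below is not Turing's diagonal proof; it is a hypothetical bounded checker (my own names, Python generators standing in for machines) that ‘decides’ halting only up to a step budget, and is simply wrong about any program that outlives it:

```python
def halts_within(program, steps):
    """A bounded *approximation* of the halting decision: run `program`
    (a generator function) for at most `steps` steps and report whether
    it finished. Not a decision procedure: 'not yet' is not 'never'."""
    gen = program()
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration:
            return True  # the program halted within the budget
    return False  # inconclusive, but reported as 'does not halt'

def slow():
    # A program that provably halts, but only after 1000 steps.
    for _ in range(1000):
        yield

print(halts_within(slow, 10))    # False -- wrong: slow does halt
print(halts_within(slow, 2000))  # True
```

However large the budget, some halting program exceeds it; Turing's result is the stronger claim that no effective procedure whatsoever, bounded or not, decides halting for all programs.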

Undecidable computation might be looked at as a dystopian counterpart to the efficiency of Shannon’s ‘digital information’ theory. A base-2 binary system represents information as one of two possible states, whereby a system can communicate with one digit only in virtue of the fact that there is one other digit alternative to it. Yet the perfect transmission of that information is only possible for a system which can ‘decide’ on the digits in question, and establish a proof to calculate a success rate. If there is no mathematical proof to decide a problem, then transmitting information becomes problematic for establishing a solution.


What has become clear is that our world is no longer simply accountable to human decision alone. Decisions are no longer limited to the borders of human decisions and ‘culture’ is no longer simply guided by a collective whole of social human decisions. Nor is it reducible to one harmonious ‘natural’ collective decision which prompts and pre-empts everything else. Instead we seem to exist in an ecology of decisions: or better yet decisional ecologies. Before there was ever the networked protocol (Galloway), there was the computational decision. Decision ecologies are already set up before we enter the world, implicitly coterminous with our lives: explicitly determining a quantified or bureaucratic landscape upon which an individual has limited manoeuvrability.

Decisions are not just digital, they are continuous as computers can be: yet decisions are at their most efficient when digitally transferred. Decisions are everywhere and in everything. Look around. We are constantly told by governments and states that they are making tough decisions in the face of austerity. CEOs and Directors make tough decisions for the future of their companies, and ‘great’ leaders are revered for being ‘great decisive leaders’: not just making decisions quickly and effectively, but also settling issues and producing definite results.

Even the word ‘decide’ comes from the Latin ‘decidere’, which means to determine something by ‘cutting off.’ Algorithms in financial trading know not of value, but of decision: whether something is marked by profit or loss. Drones know not of human ambiguity, but can only decide between kill and ignore, cutting off anything in-between. Constructing a system which decides between one of two digital values, even repeatedly, means cutting off and excluding all other possible variables, leaving a final result at the end of the encoded message. Making a decision, or building a system to decide a particular ideal or judgement, must force other alternatives outside of it. Decisions are always-already embedded into the framework of digital action, always already deciding what is to be done, how it can be done or what is threatening to be done. It would make little sense to suggest that these entities ‘make decisions’ or ‘have decisions’; it would be better to say that they are decisions, and that ecologies are constitutively constructed by them.

The importance of neo-liberal digital transmissions is not that they are innovative, or worthy of a zeitgeist break: but that they demonstrably decide problems whose predominant significance is beneficial for self-individual efficiency and the accumulation of capital. Digital efficiency is simply about the expansion of automating decisions and about what sort of formalised significances must be propagated to solve social and economic problems, which creates new problems in a vicious circle.

The question can no longer simply be ‘who decides’, but now, ‘what decides?’ Is it the cafe menu board, the dinner party etiquette, the NASDAQ share price, Google PageRank, railway network delays, unmanned combat drones, the newspaper crossword, the JavaScript regular expression or the differential calculus? It’s not quite right to say that algorithms rule the world, whether in algo-trading or in data capture; it is rather the uncomfortable realisation that real entities are built to determine provable outcomes time and time again: most notably ones for accumulating profit and extracting revenue from multiple resources.

One pertinent example is George Dantzig’s simplex algorithm: this effective procedure (whose origins began in multidimensional geometry) can always decide solutions for the large-scale optimisation problems which continually affect multi-national corporations. The simplex algorithm’s proliferation and effectiveness has been critical since its first commercial application in 1952, when Abraham Charnes and William Cooper used it to decide how best to optimally blend four different petroleum products at the Gulf Oil Company (Elwes 35; Gass & Assad 79). Since then the simplex algorithm has had years of successful commercial use, deciding almost everything from bus timetables and work shift patterns to trade shares and Amazon warehouse configurations. According to the optimisation specialist Jacek Gondzio, the simplex algorithm runs at “tens, probably hundreds of thousands of calls every minute” (35), always deciding the most efficient method of extracting optimisation.
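The decision the simplex method automates can be sketched at miniature scale. The code below is not the simplex algorithm itself but a brute-force cousin that exploits the same geometric fact Dantzig did: an optimum of a linear program, if one exists, sits at a vertex of the feasible polygon. The two-product ‘blending’ numbers are invented purely for illustration:

```python
from itertools import combinations

def solve_lp_2d(objective, constraints):
    """Maximise p*x + q*y subject to constraints (a, b, c) meaning
    a*x + b*y <= c, by checking every vertex of the feasible region.
    Nonnegativity of x and y is added below. Returns (value, x, y)."""
    cons = constraints + [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]  # x >= 0, y >= 0
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no vertex here
        x = (c1 * b2 - c2 * b1) / det  # Cramer's rule for the intersection
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            value = objective[0] * x + objective[1] * y
            if best is None or value > best[0]:
                best = (value, x, y)
    return best  # None if the region is empty

# Hypothetical blend: profits 3 and 2 per barrel, 10 barrels of shared
# capacity, at most 6 of product one and 8 of product two.
print(solve_lp_2d((3, 2), [(1, 1, 10), (1, 0, 6), (0, 1, 8)]))  # (26.0, 6.0, 4.0)
```

Industrial solvers decide the same question over thousands of variables; the vertex-hopping of real simplex implementations replaces this exhaustive check with a guided walk along the polytope's edges.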

In contemporary times, nearly all decision ecologies work in this way, accompanying and facilitating neo-liberal methods of self-regulation and processing all resources through a standardised efficiency: from bureaucratic methods of formal standardisation, banal forms ready to be analysed by one central system, to big-data initiatives and simple procedural methods of measurement and calculation. The technique of decision is a propagative method of embedding knowledge, optimisation and standardisation techniques in order to solve problems, with an urge to solve even the most unsolvable ones, including us.

Google do not build into their services an option to pay for the privilege of protecting privacy: the entire point of providing a free service which purports to improve daily life is that it primarily benefits the interests of shareholders and extends commercial agendas. James Grimmelmann gave a heavily detailed exposition of Google’s own ‘net neutrality’ algorithms and how biased they happen to be. In short, PageRank does not simply decide relevant results, it decides visitor numbers. He concluded on this note:

With disturbing frequency, though, websites are not users’ friends. Sometimes they are, but often, the websites want visitors, and will be willing to do what it takes to grab them (Grimmelmann 458).
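Grimmelmann's point that PageRank decides visitor numbers rests on a concrete procedure: power iteration over the link graph. A minimal sketch of the published PageRank recurrence follows; the three-page graph and function names are invented for illustration:

```python
def pagerank(links, damping=0.85, iterations=100):
    """Power-iteration sketch of the PageRank recurrence:
    PR(p) = (1-d)/N + d * sum(PR(q) / outdegree(q) for q linking to p).
    `links` maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outs in links.items():
            for target in outs:
                new[target] += damping * rank[page] / len(outs)
        rank = new
    return rank

# Three pages: 'a' and 'c' both point at 'b', which points back to 'a'.
ranks = pagerank({"a": ["b"], "b": ["a"], "c": ["b"]})
print(max(ranks, key=ranks.get))  # b
```

Nothing in the loop consults relevance: the ranking, and hence the visitor traffic it steers, is decided entirely by the structure of who links to whom.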

If the post-digital stands for the self-criticality of digitalisation already underpinning contemporary regimes of digital consumption and production, then its saliency lies in understanding the logic of decision inherent to such regimes. The reality of the post-digital shows that machines remain curiously efficient whether we relish in cynicism or not. Such regimes of standardisation and determined results were already ‘mistakenly built in’ to the theories which developed digital methods and means, irrespective of what computers can or cannot compute.


Why then should such post-digital actors be understood as instantiations of propaganda? The familiarity of propaganda is manifestly evident in religious and political acts of ideological persuasion: brainwashing, war activity, political spin, mind control techniques, subliminal messages, political campaigns, cartoons, belief indoctrination, media bias, advertising or news reports. A definition of propaganda might follow from all of these examples: namely, the systematic social indoctrination of biased information that persuades the masses to take action on something which is neither beneficial to them, nor in their best interests: or, as Peter Kenez writes, propaganda is “the attempt to transmit social and political values in the hope of affecting people’s thinking, emotions, and thereby behaviour” (Kenez 4). Following Stanley B. Cunningham’s watered-down definition, propaganda might also denote a helpful and pragmatic “shorthand statement about the quality of information transmitted and received in the twentieth century” (Cunningham 3).

But propaganda isn’t as clear-cut as this general definition makes out: in fact, what makes propaganda studies such a provoking topic is that nearly every scholar agrees that no stable definition exists. Propaganda moves beyond simple ‘manipulation’ and ‘lies’, or the derogatory, jingoistic representation of an unsubtle mood – propaganda is as much about the paradox of constructing truth and the irrational spread of emotional pleas as it is about endorsing rational reason. As the master propagandist William J. Daugherty wrote:

It is a complete delusion to think of the brilliant propagandist as being a professional liar. The brilliant propagandist […] tells the truth, or that selection of the truth which is requisite for his purpose, and tells it in such a way that the recipient does not think that he is receiving any propaganda…. (Daugherty 39).

Propaganda, like ideology, works by being inherently implicit and social. In the same way that post-ideology apologists ignore their symptom, propaganda is also ignored. It isn’t to be taken as a shadowy fringe activity, blown apart by the democratising fairy-dust of ‘the Internet’. As many others have noted, the purported ‘decentralising’ power of online networks offers new methods for propagative techniques, or ‘spinternet’ strategies, evident in China (Brady). Iran’s recent investment in video game technology makes sense only when you discover that 70% of Iran’s population are under 30 years of age, underscoring a suitable contemporary method of dissemination. Similarly, in 2011 the New York City video game developer Kuma Games was mired in controversy when it was discovered that an alleged CIA agent, Amir Mirza Hekmati, had been recruited to make an episodic video game series intending to “change the public opinion’s mindset in the Middle East” (Tehran Times). The game in question, Kuma\War (2006 – 2011), was a free-to-play first-person shooter series, delivered in episodic chunks, the format of which attempted to simulate biased re-enactments of real-life conflicts shortly after they reached public consciousness.

Despite his unremarkable leanings towards Christian realism, Jacques Ellul famously updated propaganda’s definition as the end product of what he previously lamented as ‘technique’. Instead of viewing propaganda as a highly organised systematic strategy for extending the ideologues of peaceful warfare, he understood it as a general social phenomenon in contemporary society.

Ellul outlined two types: political and sociological propaganda. Political propaganda involves governmental and administrative techniques which intend to directly change the political beliefs of an intended audience. By contrast, sociological propaganda is the implicit unification of involuntary public behaviour which creates images, aesthetics, problems and stereotypes, the purpose of which isn’t explicitly direct, nor overtly militaristic. Ellul argues that sociological propaganda exists “in advertising, in the movies (commercial and non-political films), in technology in general, in education, in the Reader’s Digest; and in social service, case work, and settlement houses” (Ellul 64). It is linked to what Ellul called “pre” or “sub-propaganda”: that is, an imperceptible persuasion, silently operating within one’s “style of life” or permissible attitude (63). Faintly anticipating Louis Althusser’s Ideological State Apparatuses (Althusser 182) by nearly ten years, Ellul defines it as “the penetration of an ideology by means of its sociological context” (63). Sociological propaganda is inadequate for decisive action, paving the way for political propaganda – its strengthened explicit cousin – once the former’s implicitness needs to be transformed into the latter’s explicitness.

In a post-digital world, such implicitness no longer gathers wartime spirits, but instead propagates a neo-liberal way of life that is individualistic, wealth driven and opinionated. Ellul’s most powerful assertion is that ‘facts’ and ‘education’ are part and parcel of the sociological propagative effect: nearly everyone faces a compelling need to be opinionated and we are all capable of judging for ourselves what decisions should be made, without at first considering the implicit landscape from which these judgements take place. One can only think of the implicit digital landscape of Twitter: the archetype for self-promotion and snippets of opinions and arguments – all taking place within Ellul’s sub-propaganda of data collection and concealment. Such methods, he warns, will have “solved the problem of man” (xviii).

But information is of relevance here, and propaganda is only effective within a social community when it offers the means to solve problems using the communicative purview of information:

Thus, information not only provides the basis for propaganda but gives propaganda the means to operate; for information actually generates the problems that propaganda exploits and for which it pretends to offer solutions. In fact, no propaganda can work until the moment when a set of facts has become a problem in the eyes of those who constitute public opinion (114).

Wed, 11 Dec 2013 15:42:45 -0800
<![CDATA[Rhizome | Using, Using, Used]]>

Within the pages of the Digital Folklore Reader, Olia Lialina, one of the book's editors, refers to a claim by the social media researcher Danah Boyd that some American teenagers identify as Facebook and others as MySpace – preferring either a conformist and clean interface persona, or a rebellious and visually

Tue, 22 Jan 2013 15:51:00 -0800
<![CDATA[Scientist wants human woman to give birth to a Neanderthal - Crave]]>

A scientist believes he's close to perfecting the necessary technology to clone a Neanderthal - all he needs is a human woman to gestate it.

Tue, 22 Jan 2013 15:44:00 -0800
<![CDATA[The culture of the copy by James Panero - The New Criterion]]>

Technological revolutions are far less obvious than political revolutions to the generations that live through them. This is true even as new tools, for better and worse, shift human history more than new regimes do. Innovations offer silent coups. We rarely appreciate the changes they bring until th

Tue, 22 Jan 2013 15:27:00 -0800
<![CDATA[Digital Life Is A Hoax…Because There’s No Such Thing » Cyborgology]]>

I've poked fun at these lazy op-eds before and, indeed, it must be tempting to retreat into the safe conceptual territory of "The Internet is fake!" when a juicy story of lies, deception, and computers makes headlines. The Te'o case is an almost unbelievable account of a football star allegedly trick

Tue, 22 Jan 2013 15:24:00 -0800
<![CDATA[ / in print]]>

WHATEVER HAPPENED TO DIGITAL ART? Cast your mind back to the late 1990s, when we got our first e-mail accounts. Wasn’t there a pervasive sense that visual art was going to get digital, too, harnessing the new technologies that were just beginning to transform our lives? But somehow the venture never

Mon, 14 Jan 2013 17:29:00 -0800
<![CDATA[the real internet]]>

Have you watched the last few moments of Saddam’s life? Or the necrophilic videos with Gaddafi’s behind? Al Zarqawi’s internet kill rooms? Magnotta’s cat suffocation videos? Ronald Poppo’s eaten face? I will admit that I have and it is ticklish, but not in a good way. Gore videos on the internet are abundant and they certainly work up the stomach. It’s no trek to watch them, but apparently that is the point. Recently, I stumbled (as one does on the internet) on a range of gore forums and videos. All sorts of weird kinks flourish in these platforms, and they will give you a good dose of weekly shock ‘n’ awe material. In these marginalized discussion groups, a certain thought intrigued me. Gore aficionados claim that watching ‘real’ murder protests the distorted and censored imagery of world horror events, and these videos correct our vision by portraying a more realistic representation of atrocities and the macabre. Also, gore audiences are stigmatized as those who engage in snuff activi

Mon, 31 Dec 2012 07:00:00 -0800
<![CDATA[Matthew Fuller » Giffed Economy]]>

Why look at animated GIFs now? They are one of the first forms of image native to computer networks making them charmingly passé, a characteristic that gives them contradictory longevity. Animated GIFs crystallise a form of the combination of computing and the camera. As photography moves almost entirely into digital modes, the fascination with such quirky formats increases. The story of photography will be, in no small part, that of its file formats, the kinds of compression and storage it undergoes, as they in turn produce what is conjurable as an image. The Graphics Interchange Format was first developed through the computer network firm Compuserve. As an eight-bit file format it introduced the amazing spectacle of 256 colour images to be won over the thin lines of dial-up connections. Due to this, when a picture is converted to GIF, it’s likely that posterization occurs – where gradations of tone turn to patches of reduced numbers of colours. Such aliasing introduces a key part of

Mon, 31 Dec 2012 06:59:00 -0800
<![CDATA[Humanism: not an ‘impossible dream’]]>

Andrew Brown, at The Guardian‘s ‘Comment is Free’ (CIF), wrote an article a couple of weeks ago now rubbishing humanism and the British Humanist Association. I’ve responded today on the Huffington Post. Why has it taken so long? Well, I originally asked CIF if I could do a response. I was told yes, but when I sent it to them they changed their mind and said it was too positive about humanism. I went back to them and said that this wasn’t quite fair, and so they said okay, I could do a piece, but it would have to be more general and not a response as such. So, I worked on another version, but then was told that it didn’t make sense. (You can judge that for yourself – I’ve pasted it below the Huffington Post one.)

The Huffington Post one:

Andrew Brown, in his blog last week, criticised the British Humanist Association (BHA) for promoting humanism as an essentially negative approach to life defined by what it isn’t and for being on an incoherent and self-defeating mission to eliminate

Mon, 31 Dec 2012 06:57:00 -0800
<![CDATA[Darwin Among the Machines — [To the Editor of the Press, Christchurch, New Zealand, 13 June, 1863.]]]>

Sir—There are few things of which the present generation is more justly proud than of the wonderful improvements which are daily taking place in all sorts of mechanical appliances. And indeed it is matter for great congratulation on many grounds. It is unnecessary to mention these here, for they are sufficiently obvious; our present business lies with considerations which may somewhat tend to humble our pride and to make us think seriously of the future prospects of the human race. If we revert to the earliest primordial types of mechanical life, to the lever, the wedge, the inclined plane, the screw and the pulley, or (for analogy would lead us one step further) to that one primordial type from which all the mechanical kingdom has been developed, we mean to the lever itself, and if we then examine the machinery of the Great Eastern, we find ourselves almost awestruck at the vast development of the mechanical world, at the gigantic strides with which it has advanced in comparison…

Mon, 31 Dec 2012 06:54:00 -0800
<![CDATA[Neanderthals smart enough to copy humans]]>

Fossils and artefacts pulled from the Grotte du Renne cave in central France present anthropologists with a Pleistocene puzzle. Strewn among the remains of prehistoric mammals are the bones of Neanderthals, along with bladelets, bone points and body ornaments belonging to what archaeologists call the Châtelperronian culture. Such complex artefacts are often attributed to modern humans, but a new report in the Proceedings of the National Academy of Sciences suggests that Neanderthals created the objects in imitation of their Homo sapiens neighbors.

How the Grotte du Renne deposit formed has important implications for how we view our extinct sister species. If Neanderthals left the assemblage, then they were capable of a degree of symbolic behaviour thought to be unique to humans.

The remains and artefacts were found together during excavations between 1949 and 1963, but they were not necessarily deposited at the same time. In 2010, Thomas Higham, an archaeologist at the University of…

Mon, 31 Dec 2012 06:52:00 -0800
<![CDATA[Neanderthal vs. Homo sapiens: Who would win in a fight?]]>

A team of archaeologists, paleoanthropologists, and paleoartists has created a more accurate Neanderthal reconstruction, based on a nearly complete skeleton discovered in France more than 100 years ago. The La Ferrassie Neanderthal man was short but stocky. If a modern man came nose-to-nose with a Neanderthal, could he take him in a fight? Possibly. A Neanderthal would have a clear power advantage over his Homo sapiens opponent. Many of the Neanderthals archaeologists have recovered had Popeye forearms, possibly the result of a life spent stabbing wooly mammoths and straight-tusked elephants to death and dismantling their carcasses. Neanderthals also developed strong trapezius, deltoid, and tricep muscles by dragging 50 pounds of meat 30 miles home to their families. A Neanderthal had a wider pelvis and lower center of gravity than Homo sapiens, which would have made him a powerful grappler. That doesn’t mean, however, that we would be an easy kill for our extinct relative. Homo sapiens…

Mon, 31 Dec 2012 06:49:00 -0800
<![CDATA[Turing Complete User]]>

Computers are getting invisible. They shrink and hide. They lurk under the skin and dissolve in the cloud. We observe the process like an eclipse of the sun, partly scared, partly overwhelmed. We divide into camps and fight about the advantages and dangers of The Ubiquitous. But whatever side we take, we do acknowledge the significance of the moment. With the disappearance of the computer, something else is silently becoming invisible as well — the User. Users are disappearing as both phenomenon and term, and this development is either unnoticed or accepted as progress — an evolutionary step. The notion of the Invisible User is pushed by influential user interface designers, specifically by Don Norman, a guru of user-friendly design and long-time advocate of invisible computing. He can actually be called the father of Invisible Computing. Those who study interaction design read his “Why Interfaces Don’t Work”, published in 1990, in which he asked and answered his own question: “The real problem…

Mon, 31 Dec 2012 06:46:00 -0800
<![CDATA[A vested interest in palimpsest]]>

The English language contains certain meaning-rich words that command attention and stir controversy. “Paradigm,” for instance: when Thomas Kuhn used it in 1962 to describe accepted scientific theories, and gave us the phrase “paradigm shift,” he launched a thousand articles, several hundred books and quite a few careers, some only distantly related to science.

That kind of word raises curiosity and pries open the imagination, encouraging us to think about what we might otherwise ignore. My favourite is “palimpsest.” When I first noticed it in print, four decades ago, it struck me as odd, beautiful and full of promise. It’s a term that engages many writers and continues to attract new meanings, but to some readers it still seems slightly far-fetched, maybe outrageous.

Thu, 20 Dec 2012 03:45:00 -0800
<![CDATA[Devastation in Meatspace]]>

The missile rushing over your head was processed through an Instagram filter just hours previously. As you see it pass out of sight behind the apartment block opposite some young conscript is preparing for video footage of it to be compressed and uploaded to YouTube before the hour is out. By nightfall tonight that explosion which just shook your neighborhood, in one of the most densely populated areas on earth, will have been liked over 8,000 times on Facebook. Welcome to Gaza City.

Fri, 14 Dec 2012 03:04:00 -0800