NYC

It was 1978. I was new to New York. A rich acquaintance had invited me to a housewarming party, and, as my cabdriver wound his way down increasingly potholed and dingy streets, I began wondering whether he'd got the address right. Finally he stopped at the doorway of a gloomy, unwelcoming industrial building. Two winos were crumpled on the steps, oblivious. There was no other sign of life in the whole street.

"I think you may have made a mistake", I ventured.

But he hadn't. My friend's voice called "Top Floor!" when I rang the bell, and I thought - knowing her sense of humour - "Oh this is going to be some kind of joke!" I was all ready to laugh. The elevator creaked and clanked slowly upwards, and I stepped out - into a multi-million dollar palace. The contrast with the rest of the building and the street outside couldn't have been starker.

I just didn't understand. Why would anyone spend so much money building a place like that in a neighbourhood like this? Later I got into conversation with the hostess. "Do you like it here?" I asked. "It's the best place I've ever lived", she replied. "But I mean, you know, is it an interesting neighbourhood?" "Oh, the neighbourhood? Well that's outside!" she laughed.

The incident stuck in my mind. How could you live so blind to your surroundings? How could you not think of "where I live" as including at least some of the space outside your four walls, some of the bits you couldn't lock up behind you? I felt this was something particular to New York: I called it "The Small Here". I realised that, like most Europeans, I was used to living in a bigger Here.

I noticed that this very local attitude to space in New York paralleled a similarly limited attitude to time. Everything was exciting, fast, current, and temporary. Enormous buildings came and went, careers rose and crashed in weeks. You rarely got the feeling that anyone had the time to think two years ahead, let alone ten or a hundred. Everyone seemed to be passing through. It was undeniably lively, but the downside was that it seemed selfish, irresponsible and randomly dangerous. I came to think of this as "The Short Now", and this suggested the possibility of its opposite - "The Long Now".

"Now" is never just a moment. The Long Now is the recognition that the precise moment you're in grows out of the past and is a seed for the future. The longer your sense of Now, the more past and future it includes. It's ironic that, at a time when humankind is at a peak of its technical powers, able to create huge global changes that will echo down the centuries, most of our social systems seem geared to increasingly short nows. Huge industries feel pressure to plan for the bottom line and the next shareholders meeting. Politicians feel forced to perform for the next election or opinion poll. The media attract bigger audiences by spurring instant and heated reactions to human interest stories while overlooking longer-term issues - the real human interest.

Meanwhile, we struggle to negotiate our way through an atmosphere of Utopian promises and dystopian threats, a minefield studded with pots of treasure. We face a future where almost anything could happen. Will we be crippled by global warming, weapons proliferation and species depletion, or liberated by space travel, world government and molecule-sized computers? We don't even want to start thinking about it. This is our peculiar form of selfishness, a studied disregard of the future. Our astonishing success as a technical civilisation has led us to complacency, to expect that things will probably just keep getting better.

But there is no reason to believe this. We might be living in the last gilded bubble of a great civilisation about to collapse into a new Dark Age, which, given our hugely amplified and widespread destructive powers, could be very dark indeed.

If we want to contribute to some sort of tenable future, we have to reach a frame of mind where it comes to seem unacceptable - gauche, uncivilised - to act in disregard of our descendants. Such changes of social outlook are quite possible - it wasn't so long ago, for example, that we accepted slavery, an idea which most of us now find repellent. We felt no compulsion to regard slaves as fellow-humans and thus placed them outside the circle of our empathy. This changed as we began to realise - perhaps it was partly the glory of their music - that they were real people, and that it was no longer acceptable that we should cripple their lives just so that ours could be freer. It just stopped feeling right.

The same type of change happened when we stopped employing kids to work in mines, or when we began to accept that women had voices too. Today we view as fellow-humans many whom our grandparents may have regarded as savages, and even feel some compulsion to share their difficulties - aid donations by individuals to others they will never meet continue to increase. These extensions of our understanding of who qualifies for our empathy indicate that culturally, economically and emotionally we live in an increasingly Big Here, unable to lock a door behind us and pretend the rest of the world is just "outside".

We don't yet, however, live in The Long Now. Our empathy doesn't extend far forward in time. We need now to start thinking of our great-grandchildren, and their great-grandchildren, as other fellow-humans who are going to live in a real world which we are incessantly, though only semi-consciously, building. But can we accept that our actions and decisions have distant consequences, and yet still dare do anything? It was an act of complete faith to believe, in the days of slavery, that a way of life which had been materially very successful could be abandoned and replaced by another, as yet unimagined, but somehow it happened. We need to make a similar act of imagination now.

Since this act of imagination concerns our relationship to time, a Millennium is a good moment to articulate it. Can we grasp this sense of ourselves as existing in time, part of the beautiful continuum of life? Can we become inspired by the prospect of contributing to the future? Can we shame ourselves into thinking that we really do owe those who follow us some sort of consideration, just as the people of the nineteenth century shamed themselves out of slavery? Can we extend our empathy to the lives beyond ours?

I think we can. Humans are capable of a unique trick: creating realities by first imagining them, by experiencing them in their minds. When Martin Luther King said "I have a dream", he was inviting others to dream it with him. Once a dream becomes shared in that way, current reality gets measured against it and then modified towards it. As soon as we sense the possibility of a more desirable world, we begin behaving differently, as though that world is starting to come into existence, as though, in our minds at least, we're already there. The dream becomes an invisible force which pulls us forward. By this process it starts to come true. The act of imagining something makes it real.

This imaginative process can be seeded and nurtured by artists and designers, for, since the beginning of the 20th century, artists have been moving away from an idea of art as something finished, perfect, definitive and unchanging towards a view of artworks as processes or the seeds for processes - things that exist and change in time, things that are never finished. Sometimes this is quite explicit - as in Walter de Maria's "Lightning Field", a huge grid of metal poles designed to attract lightning. Many musical compositions don't have one form, but change unrepeatingly over time - many of my own pieces and Jem Finer's Artangel installation "LongPlayer" are like this. Artworks in general are increasingly regarded as seeds - seeds for processes that need a viewer's (or a whole culture's) active mind in which to develop. Increasingly working with time, culture-makers see themselves as people who start things, not finish them.

And what is possible in art becomes thinkable in life. We become our new selves first in simulacrum, through style and fashion and art, our deliberate immersions in virtual worlds. Through them we sense what it would be like to be another kind of person with other kinds of values. We rehearse new feelings and sensitivities. We imagine other ways of thinking about our world and its future.

Danny Hillis's Clock of the Long Now is a project designed to achieve such a result. It is, on the face of it, far-fetched to think that one could make a clock which will survive and work for the next 10,000 years. But the act of even trying is valuable: it puts time and the future on the agenda and encourages thinking about them. As Stewart Brand, a colleague in The Long Now Foundation, says:

“Such a clock, if sufficiently impressive and well engineered, would embody deep time for people. It should be charismatic to visit, interesting to think about, and famous enough to become iconic in the public discourse. Ideally, it would do for thinking about time what the photographs of Earth from space have done for thinking about the environment. Such icons reframe the way people think.”

The 20th Century yielded its share of icons, icons like Muhammad Ali and Madonna that inspired our attempts at self-actualisation and self-reinvention. It produced icons to our careless and misdirected power - the mushroom cloud, Auschwitz - and to our capacity for compassion - Live Aid, the Red Cross.

In this, the 21st century, we may need icons more than ever before. Our conversation about time and the future must necessarily be global, so it needs to be inspired and consolidated by images that can transcend language and geography. As artists and culture-makers begin making time, change and continuity their subject-matter, they will legitimise and make emotionally attractive a new and important conversation.


Prozac blues.

No matter how corrupt, greedy, and heartless our government, our corporations, our media, and our religious & charitable institutions may become, the music will still be wonderful.

If I should ever die, God forbid, let this be my epitaph:

THE ONLY PROOF HE NEEDED
FOR THE EXISTENCE OF GOD
WAS MUSIC

***

Back to music. It makes practically everybody fonder of life than he or she would be without it. Even military bands, although I am a pacifist, always cheer me up. And I really like Strauss and Mozart and all that, but the priceless gift that African Americans gave the whole world when they were still in slavery was a gift so great that it is now almost the only reason many foreigners still like us at least a little bit. That specific remedy for the worldwide epidemic of depression is a gift called the blues. All pop music today—jazz, swing, be-bop, Elvis Presley, the Beatles, the Stones, rock-and-roll, hip-hop, and on and on—is derived from the blues.
 
A gift to the world? One of the best rhythm-and-blues combos I ever heard was three guys and a girl from Finland playing in a club in Krakow, Poland.
 
The wonderful writer Albert Murray, who is a jazz historian and a friend of mine among other things, told me that during the era of slavery in this country—an atrocity from which we can never fully recover—the suicide rate per capita among slave owners was much higher than the suicide rate among slaves.
 
Murray says he thinks this was because slaves had a way of dealing with depression, which their white owners did not: They could shoo away Old Man Suicide by playing and singing the Blues. He says something else which also sounds right to me. He says the blues can't drive depression clear out of a house, but can drive it into the corners of any room where it's being played. 

Kurt Vonnegut (2005) A Man Without a Country

Google Street View, word edition.

This is a story about love and death in the golden land, and begins with the country. The San Bernardino Valley lies only an hour east of Los Angeles by the San Bernardino Freeway but is in certain ways an alien place: not the coastal California of the subtropical twilights and the soft westerlies off the Pacific but a harsher California, haunted by the Mojave just beyond the mountains, devastated by the hot dry Santa Ana wind that comes down through the passes at 100 miles an hour and whines through the eucalyptus windbreaks and works on the nerves. October is the bad month for the wind, the month when breathing is difficult and the hills blaze up spontaneously. There has been no rain since April. Every voice seems a scream. It is the season of suicide and divorce and prickly dread, wherever the wind blows. The Mormons settled this ominous country, and then they abandoned it, but by the time they left the first orange tree had been planted and for the next hundred years the San Bernardino Valley would draw a kind of people who imagined they might live among the talismanic fruit and prosper in the dry air, people who brought with them Midwestern ways of building and cooking and praying and who tried to graft those ways upon the land. The graft took in curious ways. This is the California where it is possible to live and die without ever eating an artichoke, without ever meeting a Catholic or a Jew. This is the California where it is easy to Dial-A-Devotion, but hard to buy a book. This is the country in which a belief in the literal interpretation of Genesis has slipped imperceptibly into a belief in the literal interpretation of Double Indemnity, the country of the teased hair and the Capris and the girls for whom all life’s promise comes down to a waltz-length white wedding dress and the birth of a Kimberly or a Sherry or a Debbi and a Tijuana divorce and a return to hairdressers’ school. “We were just crazy kids,” they say without regret, and look to the future. The future always looks good in the golden land, because no one remembers the past. Here is where the hot wind blows and the old ways do not seem relevant, where the divorce rate is double the national average and where one person in every thirty-eight lives in a trailer. Here is the last stop for all those who come from somewhere else, for all those who drifted away from the cold and the past and the old ways. Here is where they are trying to find a new life style, trying to find it in the only places they know to look: the movies and the newspapers.


Let's take it outside.

The indigenous resistance to the westering anglos as they punched their way over the Appalachian Crest for the seizure and enclosing of the continent was led by a young Shawnee warrior, Tecumseh. Invited into the governor's mansion to negotiate, Tecumseh refused: "Houses are built for you to hold council in: Indians hold theirs in the open air." He also refused to take a chair when offered one, saying that he would repose on the bosom of the earth. There is a resonance here with the spirit of Occupy, and its mode of inhabiting space.

Popular protest takes to the streets because that is the space that remains. To be sure, most of the major events in history have happened outside, in the sense that the decisions taken inside—in the chancelleries, the boardrooms, the smoke-filled backrooms, etc.—are ultimately conditioned by things that happen, or do not happen, in the open air. Tecumseh's retort to Governor Harrison points up the significance of spatial and political form, and of the different architectonics of societies rooted in common—as opposed to private—property.

The spaces of modernity are shaped and dominated by private and state interests; under modernity public space is a subordinate category, residual even, and confined to what is left once land has been commodified and parceled into private lots. What remains is the open air. Literally. The air itself is treated by economists as an "externality" (in the language of the business schools), a commons of a peculiar capitalist kind. The air becomes a sink for the waste produced during the manufacture of commodities. This amounts to the theft of a common, though it is sometimes hard to see since it doesn't happen all at once, nor everywhere. Like many other things, pollution is very uneven, and has a class geography.

(...)

As more and more of modern life moves away into a representation, as the image-world comes to dominate civil society, and as the spectacular state gets drawn further into the day-to-day management of consumer obedience, internal policing, and the prevention of riposte to its lies, so the problem of image politics and of the very possibility of making spaces for strategic discussion takes on critical importance. (...) Given the de-realization of human collectivity under conditions of spectacle, these brief interruptions—the occupations of the squares and critical mass rides—produce real manifestations of community, albeit fleeting. Yet in each case it's a move beyond the horizon of representational politics, and takes life off the screen for a moment.


White-out history.

In February 1948 the Communist leader Klement Gottwald stepped out onto the balcony of a baroque palace in Prague to address the hundreds of Czech citizens who filled Old Town Square. It was to be a great milestone in the history of Bohemia, one of those fateful moments that occur once or twice in a millennium. Gottwald was surrounded by his comrades, and right beside him stood Clementis. Snow was drifting in the air, it was cold, and Gottwald was bareheaded. Clementis, ever solicitous, took off his own fur cap and set it on Gottwald's head.

The propaganda section reproduced, in hundreds of thousands of copies, the photograph of the balcony from which Gottwald, with a fur cap on his head and surrounded by comrades, addresses the people. On that balcony began the history of Communist Bohemia. Every child knew that photograph from having seen it on posters, in textbooks and in museums.

Four years later Clementis was charged with treason and hanged. The propaganda section immediately erased him from history and, of course, from all the photographs. Ever since, Gottwald has stood alone on the balcony. Where Clementis once stood there is only the bare wall of the palace. All that remains of Clementis is the fur cap on Gottwald's head.


Animal rights vs. biblical literalism and unbridled science.

Another fundamental error of Christianity is that it has in an unnatural fashion sundered mankind from the animal world to which it essentially belongs and now considers mankind alone as of any account, regarding the animals as no more than things. This error is a consequence of creation out of nothing, after which the Creator, in the first and second chapters of Genesis, takes all the animals just as if they were things, and without so much as the recommendation of kind treatment which even a dog-seller usually adds when he parts with his dogs, hands them over to man for man to rule, that is, to do with them what he likes; subsequently, in the second chapter, the Creator goes on to appoint him the first professor of zoology by commissioning him to give the animals the names they shall thenceforth bear, which is once more only a symbol of their total dependence on him, i.e. their total lack of rights.

It can truly be said: Men are the devils of the earth, and the animals are the tormented souls. This is the consequence of that installation scene in the Garden of Eden. For the mob can be controlled only by force or by religion, and here Christianity leaves us shamefully in the lurch. I heard from a reliable source that a Protestant pastor, requested by an animal protection society to preach a sermon against cruelty to animals, replied that with the best will in the world he was unable to do so, because he could find no support in his religion. The man was honest, and he was right.

When I was studying at Göttingen, Blumenbach spoke to us very seriously about the horrors of vivisection and told us what a cruel and terrible thing it was; wherefore it should be resorted to only very seldom and for very important experiments which would bring immediate benefit, and even then it must be carried out as publicly as possible so that the cruel sacrifice on the altar of science should be of the maximum possible usefulness. Nowadays, on the contrary, every little medicine-man thinks he has the right to torment animals in the cruellest fashion in his torture chamber so as to decide problems whose answers have for long stood written in books into which he is too lazy and ignorant to stick his nose. – Special mention should be made of an abomination committed by Baron Ernst von Bibra at Nürnberg and, with incomprehensible naïveté, tanquam re bene gesta, [As if the thing were done well] narrated by him to the public in his Vergleichende Untersuchungen über das Gehirn des Menschen und der Wirbelthiere: he deliberately let two rabbits starve to death! – in order to undertake the totally idle and useless experiment of seeing whether starvation produces a proportional change in the chemical composition of the brain! For the ends of science – n'est-ce pas? Have these gentlemen of the scalpel and crucible no notion at all then that they are first and foremost men, and chemists only secondly? How can you sleep soundly knowing you have harmless animals under lock and key in order to starve them slowly to death? Don't you wake up screaming in the night?

It is obviously high time that the Jewish conception of nature, at any rate in regard to animals, should come to an end in Europe, and that the eternal being which, as it lives in us, also lives in every animal should be recognized as such, and as such treated with care and consideration. One must be blind, deaf and dumb, or completely chloroformed by the foetor judaicus, not to see that the animal is in essence absolutely the same thing that we are, and that the difference lies merely in the accident, the intellect, and not in the substance, which is the will.

The greatest benefit conferred by the railways is that they spare millions of draught-horses their miserable existence.


Arthur Schopenhauer (1788-1860) The Horrors and Absurdities of Religion

Immersive censorship.

The future of virtual reality isn't just about whether or not it causes a stampede at your local big-box retailer, ends up in bedrooms and living rooms around the world, or enables new artistic achievements—if, in short, VR becomes a mass medium. Let's get past that and ask: then what? Take a January 2014 article on Forbes' website titled “Legal Heroin: Is Virtual Reality Our Next Hard Drug”. I assume the authors left out the question mark because they'd already made up their minds. The article lurches from irresponsible comparisons of the effects of VR immersion with middle America's worst nightmares (“a rapid hit of speed, heroin, ecstasy, marijuana, and cocaine”) to singing the potential praises of VR as a learning tool.

If you're up on media history, that kind of overblown language sounds mighty familiar. Looking back at media panics—from dime novels to train robbery flicks to horror comics to videogames—it's easy to imagine how VR might be vulnerable to a similar wave of fear. Those previous panics look ridiculous in hindsight: In 1954, Fredric Wertham declared Batman and Robin secret lovers in front of a U.S. Senate subcommittee as part of his crusade against comic books. In the halcyon early '50s, when comic books overflowed newsstands, Wertham's bullying played a key part in relegating comics to their current specialty market. Even pinball machines were banned in most major U.S. cities for four decades because of their alleged deleterious effects on the nation's youth.


There's a definite cycle of mistrust, misunderstanding, and censorship by older generations of the new media embraced by youth. Dr. Dmitri Williams, a professor at the University of Southern California, calls this cycle the River City effect, referencing a tune from The Music Man. Williams describes three phases of reactions to new media used by kids: worries about the activities the media displaces, fears about health problems caused by the new tech, and opportunities to blame the new medium for larger social problems. VR has some peculiarities that make it a fine target for all three phases.

What will be displaced? The physical world, for a start. Immersion in a virtual world is exclusion from the world around you. When a teenager is playing a console game on the TV in her room, she can still turn her head to listen and look at her mom when she asks how her day has been or whether or not her homework is finished. That same mom might have a harder time accepting her offspring whiling away her free time behind goggles and headphones that completely block out the outside world. The inhuman, slightly sinister design of the headsets mocked up in prototypes and concept art isn't doing the medium any favors: Headsets that obscure most of the user's face transform a familiar daughter into a cyborg interloper occupying the living room couch. For bonus points, put a plastic gun in the player's hands, or have them swing their fists to punch.

If game creators do their jobs right, the player's mouth will be zombielike in slack-jawed ecstasy or twisted into a lunatic's rageful maw, caught up in the thrill of popping a cap into a noob or disemboweling an unfortunate goblin. Watching someone's entire body react to a world you can't see is deeply unsettling. Something about blocking out the outside world makes you forget to wear the mask of normalcy. Because VR seeks to mimic the senses by covering up the outside world with the virtual, it doesn't allow passersby to become spectators with the same ease. The fertile imagination of a fearful parent can populate what goes on beneath those goggles with their worst nightmares of debaucherous sex and violence.

Let's say Mom wants to try out VR for herself and straps the goggles on. Ten minutes later, she's staggering down the hallway to eject dinner into the toilet bowl: Professor Williams' health issues phase of the River City effect. “Simulator sickness” has been the bane of virtual world development for decades—it's something like motion sickness, but for virtual worlds, and while motion sickness usually requires long-term or extreme exposure, like a cruise or a roller-coaster ride, just being in VR makes some people sick. Maybe commercial hardware developers will be able to crack the simulator sickness problem where the military-industrial complex and its mountains of cash failed. But given that simulator sickness is caused by a variety of factors—including the design of the virtual environment and player movement—that manifest in varying ways, degrees, and populations, a certain percentage of people will likely never be able to experience VR without nausea.


Still, most people develop a higher tolerance to simulator sickness over time through gradual exposure. So an excited teenager might get past some initial wooziness over the course of a few weeks, but a parent who wants to see what all the fuss is about could plunge into a virtual environment and end up severely disoriented. This problem isn’t going to be restricted to parents: If kids today remain anything like kids have always been, they’re probably going to overdo it on their new toys with marathon VR sessions. Christmas might get messy.

Making someone hurl isn't generally considered a great first impression, and the cipher of VR is going to absorb all the perennial fears and complaints about young people. These complaints become a crisis of conscience when sharpened by links to the kind of horrific violence that has become all too routine in America, in tragedies like Columbine, Virginia Tech, and Sandy Hook. This is the final phase of Williams' River City effect, where the most troubling issues facing a society are blamed on a younger generation's embrace of new media. After the angry newspaper editorials, the outraged appearances on talk shows, and the overplayed clips of the most extreme content VR has to offer have run for days, what would the case for regulating and censoring the new medium look like?


Some precedents have been set. In 2011, the United States Supreme Court declared videogames protected speech. Unfortunately, they’ve never gotten around to giving a definition of speech that has much predictive power: Each new medium has to be explicitly approved by the justices as speech before it’s protected. The first step for would-be goggle-grabbers will be to make the case that VR is a radically new medium and assert that its greater interactivity and realism are fundamentally different and more dangerous than what came before it. If they can manage that, the four decades of case law and precedent that established videogames as part of the First Amendment will be irrelevant.

The next step is to make the argument that VR isn't speech at all; this is exactly what happened to videogames in 1982. By collapsing motion-based technology into the same category, legislators might make the argument that VR isn't speech but action, which has a much lower barrier to regulation. When something is considered speech by the courts, any legislation regulating or censoring it must pass a very high standard of proof known as “strict scrutiny”, and the videogame speech case suggested that media effects research isn't going to get there anytime soon. If it's not speech, restrictive laws have to pass only the “rational basis” test: Basically, if the lawmakers give any justification at all, that's good enough for the courts. That result could be disastrous, giving enterprising local, state, and federal politicians free rein to censor virtual reality for children or even adults. A promising new medium could be snuffed out under the onerous weight of regulation.

VR is still protean in its technical and cultural configuration: The future will bring many surprises and new wrinkles to an age-old contest between young and old. The best insurance against a future of censorship for VR is making sure a generation gap doesn't develop. The best way to do that is by showing people who wouldn't otherwise put on a pair of goggles what VR is like, and by showing politicians that voters recognize the cycle of censorship and fear-mongering surrounding the birth of new media.

Most quoted quote.

TV advertising used to work like this: you sat on your sofa while creatives were paid to throw a bucket of shit in your face. Today you're expected to sit on the bucket, fill it with your own shit, and tip it over your head while filming yourself on your mobile. Then you upload the video to the creatives. You do the work; they still get paid.

Hail the rise of "loser-generated content"; commercials assembled from footage shot by members of the public coaxed into participating with the promise of TV glory. The advantages to the advertiser are obvious: it saves cash and makes your advert feel like part of some warm, communal celebration rather than the 30-second helping of underlit YouTube dog piss it is.

Charlie Brooker (2009) Charlie Brooker's Screen burn