Monday, July 31, 2006

If they buy in, are you selling out?

While at The Long Now, I had an interesting conversation with one of my colleagues there about the curious nature of the organisation's mission, which is to encourage long-term thinking and responsibility. There aren't too many places with such a bold, and yet abstract underlying goal. The long and the short of it was, we agreed, that we were ultimately in the business of marketing a way of thinking -- about time, change, history, responsibility, and other things.

Now, consider the conventional understanding of what marketing is and does. It tends to begin with a product (a dish soap, a pair of jeans, a drink, a make of car...), and the marketing aim for the company producing it is to devise a strategy for associating with that product a way of thinking, behaving, or looking that taps into what people want, so they'll decide to buy it out of attraction to the associated image. This much is basic, even trite, knowledge in that field.

But what happens when you invert or reverse that process? When you start with a desire to promote a way of thinking, rather than a physical product? This is the situation at The Long Now Foundation (re. long-term thinking) and certainly, at the Hawaii Research Center for Futures Studies, we have a similar aim (re. pluralistic, alternative futures thinking). Here the challenge might be seen as how to embody that thinking in a product, how to make it tangible or real to people, how to give them something to buy, touch, do, experience -- which both expresses and elicits that kind of thinking. This is what the Clock of the Long Now does (as an article in today's San Francisco Chronicle explains). This is also, in a sense, what we've been doing at HRCFS through our efforts to develop experiential scenarios -- watch this space for more on that -- and futures audio tours: trying to make futures thinking compelling to people, with something they can really get their teeth into. Concretising an abstraction, if you will, rather than abstracting from the concrete; or inventing a product to promote a message, instead of the other way around.

Now, in today's New York Times Magazine, there was an interesting article entitled "The Brand Underground", by Rob Walker, focusing on the current phenomenon of sometimes highly profitable and prestigious microbrands, typically founded by trendsetting twentysomethings who regard themselves as anti-corporate, even while embracing the practice of making and selling products (notably, but not exclusively, clever t-shirts).

These enterprises can be seen as Internet-commerce-era answers to the question, phrased by Walker: "How do I turn my lifestyle into a business?" Which we could easily paraphrase as, "How can I make a living doing what I care about?" or "How can I live a life in harmony with my values?" A central question for anyone who cares to examine the life they lead.

Some quotes from the article:

One reason an underground brand sounds nonsensical is that countercultures are supposed to oppose the mainstream, and nothing is more mainstream than consumerism. But we no longer live in a world of the Mainstream and the Counterculture. We live in a world of multiple mainstreams and countless counter-, sub- and counter-sub-cultures.

[M]ost of us think of branding as a thoroughly mainstream practice: huge companies buying advertising time during the Super Bowl to shout their trademarked names at us is pretty much the opposite of authentic or edgy expression. But branding is more complicated than that. It is really a process of attaching an idea to a product. Decades ago that idea might have been strictly utilitarian: trustworthy, effective, a bargain. Over time, the ideas attached to products have become more elaborate, ambitious and even emotional. This is why, for example, current branding campaigns for beer or fast food often seem to be making some sort of statement about the nature of contemporary manhood. If a product is successfully tied to an idea, branding persuades people — consciously or not — to consume the idea by consuming the product. Even companies like Apple and Nike, while celebrated for the tangible attributes of their products, work hard to associate themselves with abstract notions of nonconformity or achievement. A potent brand becomes a form of identity in shorthand.

Of course, companies don't go into business in order to express a particular worldview and then gin up a product to make their point. Corporate branding is a function of the profit motive.

Many of [the microbranders] clearly see what they are doing as not only noncorporate but also somehow anticorporate: making statements against the materialistic mainstream — but doing it with different forms of materialism. In other words, they see products and brands as viable forms of creative expression.

For them, there is something fully legitimate about taking the traditional sense of branding and reversing it: instead of dreaming up ideas to attach to products, they are starting with ideas and then dreaming up the products to express them.

Perhaps the first lesson of the brand underground is not that savvy young people will stop buying symbols of rebellion. It is that they have figured out that they can sell those symbols, too.

This is the quintessence of the postmodern brand rebel, hopscotching the minefield of creativity and commerce, recognizing the categorization, satirizing it, embracing it and commoditizing it all at once.

When [one such entrepreneur] declares that his project is part of a "revolution against branding," what he really means is not the snuffing out of commercial expression but an elevation of it.

Even in a world where the mainstream is less than monolithic, every subculture sooner or later has to reconcile itself with the larger cultural forces around it.


The author dwells on the apparent paradox of advancing a counterculture, or at any rate a self-consciously mainstream-rejecting subculture, through so thoroughly conventional an avenue as branding. And he makes a good point, for there is surely a risk of compromising certain values the business ostensibly intends to represent. (That's got to be a risk, if you happen to regard corporate identity-building, and particularly the commodification of dissent, as problems.)

And frankly, I'm not particularly interested in or compelled by the worldviews of a bunch of t-shirt jockeys. Although some of them may have really deep messages to convey, my feeling is that, were he alive today, strangely enough the t-shirt might not have been Voltaire's chosen medium. The bigger question resonating beneath all this is, for me, where any serious challenger of the prevailing paradigm ought to begin: completely outside the box, letting the box come to you; or working for your hoped-for changes from within the existing framework, using its structures, living on its dime? Here's a fundamental dilemma for anyone interested in promoting social change, or even thought about social change. Do you try to beat them outright, or do you join them, in the hope of beating them from the inside? To use the tools of a morally bankrupt, money-driven corporate culture to comment subversively on that culture: is that knowingly postmodern ironic genius, or just selling out?

Depends on what you're trying to challenge, of course. But it's a basic dilemma if you're interested in having people entertain the possibility of deeper, longer stories of the future that go beyond corporate and capitalistic models concordant with prevailing assumptions and social arrangements. You don't have to have a single-minded prior ideological commitment to bringing down the system no matter what (I don't) in order to see the tremendous value in being capable of perceiving its dimensions and tendencies, and questioning its longevity, usefulness, and ultimate worth (I do). But who will pay you to think about problems, or solutions to problems, that don't yet exist, and might never exist? Who will pay today for you to entertain doubts about the "hard realities" of nation states, corporations, and other well established interests?

Such are the bigger questions about the commercial viability of "critical futures" that I've struggled with, and that this article touched off in my mind. For to speak of the commercial viability of an idea is another way of talking about whether you can survive while living out its values.

A good friend of mine insists that money is the best measure of someone's commitment to something. Words are cheap, he would say, but when a person pays for something -- ah, you know they really value it, you find out what it's worth to them. His reasoning in this area leads him to the conclusion that an enterprise that can't turn a profit is, by definition, not valuable, not worthwhile.

I have a problem with that. It seems to me that the "messages" of foresight and long-term thinking being "marketed", so to speak, by organisations such as The Long Now Foundation and the Hawaii Research Center for Futures Studies, are something like the advice of a wise older relative to a headstrong teen. It may not be seen as useful (may in fact be largely perceived as irritating and irrelevant) at the time, and might take a while to sink in; but eventually its value will become apparent. The fact that long-term or divergent futures thinking is not universally recognised as useful right now does not preclude it from planting seeds that may germinate and bear fruit in due course.

(Incidentally, I would have liked to make the above point without the patronising tone suggested by this old-young analogy; but I can't. I think a culture, corporate or otherwise, that rejects foresight is actually immature, and will probably have to learn the hard way that the assumptions propping up its current interests and successes are not set in stone.)

Meanwhile, we continue to search for ways to embody and concretise those messages, and to make the long-term value we're betting on, capable of being appreciated, and thus acted upon, and to some modest extent rewarded, today. Wisdom has probably always been a tough sell, but it's no less valuable because of that.

Tuesday, July 25, 2006

Towards a Sceptical Futuryst blogosophy, part III

History will be kind to me for I intend to write it.
~Winston Churchill

The future will be kind to me for I intend to create it.
~The Sceptical Futuryst

Sunday, July 23, 2006

Why the language of design must enter law and politics (part two)

/Continued from a previous post.../

In law school, the first class I ever took was called Torts and the Process of Law. We were introduced to the arcana of "legal storytelling" (my term, not theirs) in which episodes of people's lives -- usually horrific, depressing, or bizarre, or some interesting combination of those three -- were boiled down, and their legal essence extracted for infusion into the body of common law. These are anecdotes with teeth: the moral of every legal story is its ratio decidendi, whose binding authority carries on down through years, decades or centuries of later cases. The 01932 case, Donoghue v. Stevenson, is one that stands out in memory. It was a landmark in the development of the tort of negligence, about a ginger beer bottle with a snail in it, which was served to a woman in 01928 and led her to experience a nasty bout of gastroenteritis. The key legal concept established here, and applied in countless cases since, was that of "duty of care": a person will owe a duty of care not to injure those who it can be reasonably foreseen would be affected by their acts or omissions.

Now, this principle is well and good for products, like snail-containing soda bottles, which have a specific provenance, because any adverse consequences are traceable back to the source. This legal mechanism, the tort of negligence, can serve as useful negative feedback to make amends for and rectify negligently designed products, or manufacturing processes. But what about at a larger scale? What about legal or political systems themselves, and their emergent properties, their systemic consequences? There, it's difficult to hold anyone accountable, even in theory, let alone in a court of law.

Take war, for example. Trying to hold a specific government to account for the massive damage sustained in a military encounter, or the specific consequences for one person, family or even a whole group of people, is one strategy. And it's a strategy that some, understandably, seem very keen to adopt. But, whatever the merits of the argument against (say, the US administration's present entanglements in Iraq) might be; through the macroscope, this rather appears to miss the point, like the drunk who lost his keys in the dark but searched for them under the streetlight because the visibility was better. This government, like many others, is simply acting out what it is in its nature to do. (Of course, the governments themselves in any given conflict blame each other; but it's the logic of the pattern that's the problem, not so much the logic of the participants, which takes on the aura of inevitability once it's in train.) The key is not, then, ultimately to be found under the light of apparent motives, it's in the shadows -- of structure and the broad sweep of history -- where we don't usually think to look. The proximate reasons (of contemporary circumstance) are less significant than the ultimate ones (of big-picture history).

Cambridge law professor Philip Allott once gave an address in 01999, during NATO's bout with Serbia over Kosovo. (The paper, which I believe was called "Kosovo: A psycho-philosophical deduction", was unpublished, and I'm relying on memory of a handout my colleague obtained at the event, so some of the details may be off.) In any case, Allott's analysis of the situation was sobering. Looking past the details of that particular conflict, he was able to see it as "merely" another tragic entry on the extensive, ever-growing list of destructive military engagements dating back centuries, in which the interests of people as people are subjugated to those of an abstract political entity or goal. In short, he regarded Kosovo as a manifest instance, or symptom, of an international system-wide case of macropsychotic behaviour. Now, this concept does not appear to have caught on (I've checked several times, and down to today no variation of this term has ever returned a single google hit, though that really ought to change).

To illustrate: George Orwell, in a 01941 essay entitled "The Lion and the Unicorn: Socialism and the English Genius" (see this collection, p. 56), may be cited in support of Allott's view of this decidedly odd situation -- in which we humans repeatedly find ourselves -- wherein apparently rational individuals make decisions that are rational in the narrow context of their circumstances, but whose net effect is a pattern of mutual destruction that is plainly sick.

As I write, highly civilised human beings are flying overhead, trying to kill me.

They do not feel any enmity against me as an individual, nor I against them. They are "only doing their duty", as the saying goes. Most of them, I have no doubt, are kind-hearted law-abiding men who would never dream of committing murder in private life. On the other hand, if one of them succeeds in blowing me to pieces with a well-placed bomb, he will never sleep any the worse for it. He is serving his country, which has the power to absolve him from evil.

One cannot see the modern world as it is unless one recognises the overwhelming strength of patriotism, national loyalty.

Can we imagine designing a way out of this trap? It seems a worthy project to me. In my opinion, Allott's take on this is compelling, and its implication for the discussion at hand is this: macropsychosis requires macrotherapy. What would that large-scale, systemic solution look like? I don't have a definitive answer to that, but I'm pretty sure it's the right kind of starting point.

Turning to a more current example, at John Rendon's talk for The Long Now Foundation on 15 July, "Long term policy to make the war on terror short", I posed the following question: At some point in the future, the concept of America's national interest, or indeed any national interest, evaporates into absurdity. How can we govern for that timeframe? Rendon's obtuse answer suggested either that what I had asked was unclear, or that he had missed the point (or both). In any case, this to me is one of the design questions that we would do well to apply in our rethinking of the political process; not just in one country, but in any, or all of them. How ought we to govern for the long term?

It is abundantly clear that current political arrangements virtually preclude the consideration of long term consequences. Rendon called it "the tyranny of real time". His presentation (controversial as it may have been, and whatever suspicions motivated vocal interruptions from a handful of audience members) was essentially a plea from a strategic insider for the audience members themselves to explore and encourage governance and policy for the long term, a mindspace that appeared all but impossible to broach, for him and his colleagues, and others like them. Now, the fact that he offered his time, and his neck, to the audience, was admirable -- he was bold and honest enough to identify and attempt to address this structural limitation -- although he may not have done so to the satisfaction of all concerned. I think it's this limitation which encapsulates the essential challenge for redesigning governance; that is, law and politics, in our time.

Stewart Brand, who moderated that discussion with John Rendon, once wrote what can be read as one of the founding statements for the Long Now:

Civilization is revving itself into a pathologically short attention span. The trend might be coming from the acceleration of technology, the short-horizon perspective of market-driven economics, the next-election perspective of democracies, or the distractions of personal multi-tasking. All are on the increase. Some sort of balancing corrective to the short-sightedness is needed -- some mechanism or myth which encourages the long view and the taking of long-term responsibility, where 'long-term' is measured at least in centuries. Long Now proposes both a mechanism and a myth.

The Long Now's 10,000-year clock is the mechanism and myth at the heart of a mind-bogglingly ambitious attempt to correct, or at least to begin to mitigate, a problem that is admittedly vast in scope -- it's civilisation-wide. It therefore represents the beginnings of a monumental effort to provide cultural reprogramming: macrotherapy. But sooner or later, this species of inspirational culture-level intervention must, it seems to me, get down to brass tacks and lead to a tangible change in system parameters. A Clock of the Long Now is one thing. But can anyone imagine the institutional blueprint for a Politics of the Long Now?


/To be continued in part three.../

Friday, July 21, 2006

Why the language of design must enter law and politics (part one)

I'm a designer and I want to talk briefly about the concept of design itself. Design is the first signal of human intention. As we look around at the tragedies that we see in the making, we realize that we have to ask: Did we really intend for this to happen? Is this something we designed?
. . .
Once you realize that our culture has adopted strategies of tragedy, perhaps it's time to have strategies of change.

First we have to start with great humility. We don't know what to do. We have indigenous traditions we can draw from, but we don't know what to do. If anybody has any problem with the concept of design humility, reflect on the fact that it took us 5,000 years to put wheels on our luggage.

So as an assignment, let's design an industrial system for world culture that treats nature as an enemy to be evaded or controlled; that measures prosperity by how much natural capital you can cut down, dig up, bury, burn, or otherwise destroy; that measures productivity by how few people are working; that measures progress by the number of smokestacks (if you're especially proud, put your names on them). It is a system that destroys biological and cultural diversity at every turn with one-size-fits-all solutions, requires thousands of complex regulations to keep us from killing each other too quickly, and while you're at it, produces a few things so highly toxic that it will require thousands of generations to maintain constant vigilance while living in terror. Can you do this for me? Welcome to the first Industrial Revolution.

It's time for a new design assignment.

Extract from William McDonough, "Designing the Next Industrial Revolution", Timeline #58, July/August 02001.


Futures and design are, it seems to me, natural bedfellows.

1. Design is predicated on deliberate choice between options available. If there's only one way for something to be, there's no scope for design. Designs can be -- are made to be -- adapted, improved.

2. Consequently, a design approach to anything encourages the taking of responsibility for what we can change.

3. Good design has creative, imaginative elements. It has an aesthetic (what pleases) as well as a functional (what works) dimension.

We'll come back to these.

So, McDonough's central insight here is that unintended outcomes, once their tragic nature is acknowledged and understood, are as good as intentional -- not retrospectively, of course, but from that point on. Failure to respond to the consequences of bad design makes it a case of negligence by omission. Another way to put it is that if unforeseen negative consequences emerge, the fact that they were unforeseen is no good as a defence for sticking with the original strategy.

What does all that have to do with law and politics (advertised in the title)?

Well, McDonough implies that there's a systemic problem with industry which leads to the destruction (substitute "compromised integrity" if "destruction" seems too shrill) of the environment, among other ills. I think that's a sound argument. But as a student of political science, and a reformed (and repentant) one of law, I find these two domains foregrounded in my thinking as a locus of individual and collective action. Design is the first signal of human intention, says McDonough. But industry, as he uses the term, does not denote a strictly designed structure, so much as an emergent set of practices, many of them accidentally pernicious. In contrast, as I see it, at a macro-social level human intention is nowhere more clearly signalled than in the effort to shape society directly through these mechanisms: legal and political processes. They are the principal method by which we collectively design our world.

What this means is that, to whatever extent our systems of law and politics are failing us, we have not just the potential, but also the obligation to redesign them. Lest I should be misunderstood to suppose too much, I should clarify that it also means there's a concomitant responsibility to try to ascertain the extent, and the nature, of that failure, which is naturally a big and contentious debate.

Returning to the three points above, then.

1. Design is predicated on deliberate choice between options available.

Politics is also about choice. I mean this not as a shallow "three cheers for choice", but as an observation that its fundamental function is the process of negotiating what's important, who gets what, who's in charge and so on, from among competing possible outcomes.

2. A design approach to anything encourages the taking of responsibility for what we can change.

Jumping up one layer from choice within the political process, to choice about the shape of political process itself: this too can change. Cast as a design issue, it's not an ineluctable given, but a system that can be ignored, repaired or replaced at will. This mindshift is sorely needed when "politics" appears to be disparagingly thought of as simply the nasty way things are and ever will be -- as in the realist school of political philosophy, which designates the nation state as the natural unit of governance.

3. Good design has creative, imaginative elements.

The language of design helps frame political discussion in a way that appears useful for accommodating both affective and practical aspects. It is in that sense congenial to the discussion of values, which are one of the primary bases of political difference, but which somehow strike me as wearyingly dull when addressed in those terms. (Let's talk about values! Yawn. Enter stage left pretension and absolutism. Let's talk instead about what you want, what you respond to, what you love. What your community can do for you, and what you can do for your community. How your ideal would look, and how we could approach it.) At any rate, what politics as such seems to lack almost entirely is creativity and imagination. Bring in the designers. Bring on the artists. McDonough's right: it's time for a new design assignment. Any takers?

/To be continued.../

Wednesday, July 19, 2006

On the scalability of small opportunities

I have been thinking recently about the mindblowingly complex issue of how to usefully map possibility space (the hyperdimensional expanse representing all the theoretical states things could take -- the way things could be -- through which we weave an infinitesimally narrow thread of lived reality by a combination of chance events and decisions). Yes, it's a tricky one.

But an interesting insight has arisen around the way unexpected "windows of opportunity" are made and used through the confluence of emerging technological affordances and good old fashioned ingenuity.

There are two nice examples which might help firm up this rather woolly notion. Both revolve around the time-honoured obsession with getting rich.

There are, presumably, many ways to make a million bucks. I spend considerably less time than I probably should exploring what those ways are, and like most people, considerably more of my time blundering across far less lucrative terrain. (Being a malnourished student, in fact, it's arguably more a matter of progressively discovering just how broad, wide and apparently inescapable the intersecting possibility spaces of poverty, chastity, and despair can be.) But there are others who have been emboldened and empowered by the increasing, vastly interconnected pools of supply and demand online, to great effect.

So the first example is of a young man in England who seized upon the clever notion of selling one million pixels of advertising space at one dollar each, with a view to funding his undergraduate studies in business management. (As far as I can tell, this is his own original idea, and although one might imagine it would be difficult to pull off more than once, that has not apparently deterred others from borrowing the business model: google million pixels for other examples.) Aesthetically, the product of this effort comes off like a Las Vegas nightmare. Conceptually, from a business point of view, however, it's sheer brilliance. The minimum purchase is a 10x10 block, costing $100, which is certainly rather small but is enough visually to serve as a practical, iconic hyperlink to the advertiser's website. The novelty is sufficiently great that people -- such as myself at this very moment -- catch themselves propagating the story for its sheer audacity, directing a widening audience back to the site, and thereby vindicating the whole scheme on the basis of increased returns via the Internet attention economy.
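(For the numerically inclined, here is a minimal sketch in Python of the arithmetic behind the scheme, using only the figures quoted above -- the one-million-pixel grid, the dollar-a-pixel price, and the 10x10 minimum block. The variable names and the printout are purely illustrative.)

# Toy calculation of the million-pixel scheme's scale, from the figures above.
GRID_WIDTH = 1000     # pixels
GRID_HEIGHT = 1000    # pixels
PRICE_PER_PIXEL = 1   # dollars
MIN_BLOCK_SIDE = 10   # minimum purchase is a 10x10 block

total_pixels = GRID_WIDTH * GRID_HEIGHT                      # 1,000,000 pixels
max_revenue = total_pixels * PRICE_PER_PIXEL                 # $1,000,000 if every pixel sells
min_purchase_cost = (MIN_BLOCK_SIDE ** 2) * PRICE_PER_PIXEL  # $100 for the smallest block
max_advertisers = total_pixels // (MIN_BLOCK_SIDE ** 2)      # at most 10,000 minimum-size blocks

print(f"Maximum revenue if fully sold: ${max_revenue:,}")
print(f"Cost of the minimum 10x10 block: ${min_purchase_cost}")
print(f"Upper bound on advertisers (all buying minimum blocks): {max_advertisers:,}")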

The second example is of a guy from Canada who has, through a series of bartering encounters, traded up from a paperclip to -- so far -- a house in Saskatchewan, during a period of one year. This progressive scavenger-hunt up the value chain has a touch of inspired lunacy about it which appeals to me a great deal.

Both of these Internet microbusinesses depend on the playfulness and novelty of the ideas, and on the disproportionate levels of attention that they have consequently been able to garner. There's a kitsch, Warholesque "fifteen minutes of fame" aura around the tales of both these entrepreneurial individuals -- but they're getting where they want to go, and everyone seems to be enjoying it, so more power to them.

What's interesting here is the fact that from humble beginnings, these ambitious one-man business propositions, buttressed and fuelled by media attention, have been able to connect their founders to sufficiently large audiences that the former has sold a million dollars' worth of pixels to several hundred advertisers (perhaps several thousand, I didn't count), and the latter has substantially increased the return on his investment through a series of "right trades" upward along the value chain. (Lots of little payoffs: get rich slow.)

So yes, perhaps their time could have been better spent in some respects, but that's not the point: these stories are groaning with the freight of the zeitgeist of early 21st century Internet culture. Spotting the right opportunity has long been a key to wealth, fame and fortune. The echo chamber of new media opens up large-scale successes to small-scale ideas. Of course, the postmodern kicker in all this is that what they're really selling is their own unlikely success stories, and by attending to them we make them happen, like clockwork. Attention is where the money is: tell a compelling story -- the story of the story you're telling -- and your fifteen minutes of fame could reward you handsomely. Which, to me, underlines that perhaps the most powerful elements in navigating our possibility space are the (distinctly analog) stories we tell ourselves about where we've come from, where we are, and where we want to go next.

Monday, July 17, 2006

Time too short at Long Now

Today is my last day in San Francisco. After two and a half months of life in the Bay Area, on summer break from the University of Hawaii at Manoa, I'm flying back to Honolulu this afternoon, to prepare for the "Hawaii 2050" event we're planning for next month, and to resume my studies of alternative futures.

I have immensely enjoyed working with the Long Now Foundation, which during the short time I've been here has somehow turned a musty warehouse into a gleaming public exhibition space at Fort Mason; staged a group visit to the intended 10,000-year Clock site in the desert in Nevada; and hosted no fewer than three intriguing Seminars About Long-term Thinking (Chris Anderson & Will Hearst, Will Wright & Brian Eno, and, last Friday, John Rendon). Working on renovating Long Bets has been an educational and stimulating experience, and encouragingly, looks likely to bear fruit, in the form of an improved, deeper and more subtle design, before long. There are some other interesting projects in the pipeline which shall make an appearance here as their time comes. That mountain in Nevada has an uncanny track record of eliciting big ideas from visitors.

It has also been a very great pleasure to work, and socialise, with the exceptional Long Now posse led by Alexander Rose: Ben Keating, Chas Warner, Laura Buszard-Welcher, JD Ross Leahy, Simone Davalos, and others who have favoured the office with their presence from time to time. Presenting to the Board of LNF in mid-June was one of the very few official agenda-driven meetings that I have ever actually enjoyed. Their energy, alertness to connections, and quickness to alight on and explore new ideas was stunning. Some of the Board members I met in other settings, but they struck me in the same way. I look forward to interacting with these remarkable people again.

And so, while I'll continue to be involved with Long Now's work, and contribute from a distance across the Pacific, it's with mixed feelings that I leave these fine people to their 10,000-year devices. I'll be back to San Francisco at the first opportunity.

Meanwhile, thanks to all who made this the exceptional experience it has been, professionally as well as personally -- you know who you are.

Carpe Millennium, my friends.

Friday, July 14, 2006

Futures audio tours: Wrap your present in the future

I recently alluded to "future-oriented art/design/communication, a topic that has preoccupied me much of late but about which I haven't written a lot in this forum". One really interesting project along these lines has just taken a major step closer to being realised. Yesterday, Jake and I were delighted to learn that our proposal for immersive futures using audio (an experiment in what Jake likes to call "ambient foresight") was awarded a grant, enabling us to plan for a release of the first such effort early in 02007.

Below is a copy of the press release post we put on the Hawaii Futures website:

/Step into futures/

The Hawaii Research Center for Futures Studies is proud to announce that it has been awarded one of ten "Bright Ideas" grants to pursue an innovative experiment to encourage future-oriented community discussion. The successful proposal is for downloadable future audio tours, a concept developed by Jake Dunagan and Stuart Candy, graduate students at the University of Hawaii's "Manoa School" of futures in the Department of Political Science.

The "bright idea" in this case is to record guided audio tours of Honolulu's historic Chinatown, intended for playback on portable mp3 players. However, instead of the conventional emphasis on the past and present of landmarks and events in the area, these stories will highlight alternative possible futures. Participants will be able to guide themselves in an immersive experience that takes "the future" out of the abstract, and encourages foresight and responsibility toward future generations in Chinatown, Hawaii, and beyond. The hope is that, with the increasingly widespread use of portable audio devices, this creative use of the medium to provoke deeper thinking about future possibilities will spread to other communities, with tours being produced and translated into multiple languages, and other people being encouraged to generate their own "guerilla tours" of alternative futures.

Recipients of the "Bright Ideas" award, in the amount of $4000, were selected from a field of over 130 applications. The Hawaii Research Center for Futures Studies is also involved in coordinating the futures-oriented community event "Hawaii 2050" for the State Legislature on August 26 this year.

Vive les futuribles!

Wednesday, July 12, 2006

Nerve, the future issue

Nerve, published in New York, is a new magazine -- about sex, of all things -- which I encountered today for the first time. In the words of its founding editors:

We have created Nerve because we think sex is beautiful and absurd, remarkably fun and reliably trauma-inducing. In short, it is a subject in need of a fearless, intelligent forum for both genders.
. . .
Nerve intends to be more graphic, forthright, and topical than "erotica," but less blockheadedly masculine than "pornography." It's about sexual literature, art, and politics as well as about getting off — and we realize that these interests sometimes conflict.
. . .
This is why we think the subject of sex deserves a magazine of its own: less to celebrate the gymnastics of sex than to appreciate the way it humbles us, renders us blushing teenagers.

While I'm not yet familiar with the content, I do appreciate the sensibility.

The reason I bring it up is that it seems the Nerve "Future Issue" was recently released, dedicated to exploring, through short stories and the like, the social role of sex in 2033. Apparently the publication of articles online is staggered over a few months, so some of the material is already up, but further explorations in this vein are forthcoming. For example, a piece online from 21 July: "70 is the new 30" by Douglas Rushkoff... In 2033, old is the new young. A series of email messages from a 75-year-old reveals that this development comes with certain sexual and professional challenges.

This future-themed edition was sponsored by a Swedish vodka brand -- but using suitably futurised advertising ("voted #1 vodka of 2033"), which leapt out for me as a great example of future-oriented art/design/communication, a topic that has preoccupied me much of late but about which I haven't written a lot in this forum, yet. Still, I think we can expect to see a lot more of this type of playful, immersive use of future imagery as time goes on, and I'll elaborate on that with a post in the near future.

I also really like this, from the editorial introduction to the Future Issue:

If, in 1973, you'd asked a writer to envision pop culture circa 2006, would he have predicted online dating, tax-deductible $12 martinis, BlackBerry sex and Gawker? If Voltaire, the VH1 commentator of the 18th century, were alive today, would he declare that if Paris Hilton didn't exist, it would have been necessary to invent her?

If nothing else, they have certainly grasped the need to unchain imagination to come up with anything useful about the futures of culture. Extrapolate and perish!

Tuesday, July 11, 2006

"Western civilisation"? Mushroom for improvement

As part of his project of eviscerating complacently held conventions, especially those in the popular news, the late, maverick stand-up comic Bill Hicks (01961-01994) used to do a routine fantasising about what a true drug story might be like, if the media ever cared -- or dared -- to report one: "Today, a young man on acid realized that all matter is merely energy condensed to a slow vibration... that we are all one consciousness experiencing itself subjectively. There's no such thing as death, life is only a dream, and we're the imagination of ourselves. Here's Tom with the weather."

Seems improbable, doesn't it?

I was rather tickled, then, to read the following report today on so staid a newsmedia staple as the CNN website:

"Mystic mushrooms spawn magic event"
Tuesday, July 11, 2006, Posted: 12:57 a.m. EDT
CNN.com, Associated Press

A brief extract:

People who took an illegal drug made from mushrooms reported profound mystical experiences that led to behavior changes lasting for weeks -- all part of an experiment that recalls the psychedelic '60s.

Many of the 36 volunteers rated their reaction to a single dose of the drug, called psilocybin, as one of the most meaningful or spiritually significant experiences of their lives. Some compared it to the birth of a child or the death of a parent.

Such comments "just seemed unbelievable," said Roland Griffiths of the Johns Hopkins University School of Medicine in Baltimore, Maryland, the study's lead author.

Bill Hicks would surely have been as startled as I was to encounter an upbeat news report about psychedelic experience.

Now, there's an interesting tension in the fact that the highly subjective, personal, even spiritual, benefits of a substance like psilocybin have to pass muster in the scientific community in order to gain widespread recognition. No surprise, of course: the scientific method is the prevailing gold standard of intellectual credibility. But with or without science's imprimatur, those with first-hand experience of the profound power of a mushroom trip are unlikely to find these participants' responses at all difficult to believe. Indeed, since the "psychedelic '60s" a devoted subculture has persisted, or flowered if you will, for whom the potential learning, insight and pleasure offered by this and similar drugs has outweighed the possible legal penalties associated with them. The 90s rave scene, though probably more notorious for the mainstream advent of ecstasy, was also in part a locus of psychedelic revival, and though this has now gone off the boil, threads persist in culturally transformative projects such as the annual festival, Burning Man, about which I have heard much during my last two months in San Francisco. No, it will come as no shock at all to people who know something about this field that there may be positive, significant, sustainably life-enhancing possibilities inherent in the mushroom experience.

Of course, due to legal constraints, the type of research discussed in this article is very rare. A colleague of mine at the University of Hawaii, Dr Peter Miller, has undertaken a large project in this domain, without a laboratory element, but instead involving extensive surveys and interviews of people who use what he calls "psychoactive biotechnologies", which include magic mushrooms. I'm looking forward to the outcome of this research, and also to my Hawaii futures colleague Jake Dunagan's thinking on the futures of the brain ("neurofutures"), which is likely to explore, among other things, the effects on political consciousness of psychedelic substances. In contemplating cultural change, there is much of interest for futurists to consider in the possibility of a significant revival or mainstreaming of hallucinogens.

It was the 01994 book Cyberia by cultural analyst Douglas Rushkoff that gave me an introduction to the intersection of psychedelic drugs and countercultural creativity. In it he documents the intertwining of psychedelia, electronic music, and the early Internet and hacker culture, focusing on the San Francisco Bay Area. While the new golden age that this work idealistically and tantalisingly seemed to portend has not quite materialised, the connections Rushkoff makes between the hallucinogen-facilitated exploration of inner space and the outward development of promising new avenues of creativity and cultural evolution remain valid, fascinating, and important. The recent book What the Dormouse Said (I acquired a copy a couple of months back, but haven't yet read it) explores the connection between San Francisco's infamous period as the epicentre of the 60s counterculture (viz. Tom Wolfe's essential account of the adventures of Ken Kesey's Merry Pranksters, The Electric Kool-Aid Acid Test ) and the subsequent blossoming of the computing and software industries down the road in Silicon Valley. Hallucinogens such as mushrooms are -- among other things, to be sure -- catalysts for personal, and by extension cultural, change. On acid, Stewart Brand famously envisaged the power of the view of the "whole earth" from space, and thereby helped push the environmental movement into the spotlight. Dreams have long been associated with invention and discovery (Mendeleyev's design for the periodic table of chemical elements, for example); at some level the hallucinogenic experience appears to tap into a similar reservoir of subconscious wisdom and creativity.

We need not invoke anything supernatural, religious, mystical or transcendental here, although those aspects of the discussion are certainly worth close attention too -- the import of this news article is simply that certain mind-altering substances long regarded with suspicion and fear, for reasons that are not particularly well articulated, are now being found by scientists to be useful.

In the meantime, (illegal) drug use is virtually synonymous with decadence, social decay, escapism and irresponsibility. This is an ill-informed, completely unhelpful, cultural prejudice. People who use drugs (illegal or otherwise) creatively, responsibly and productively are somehow eclipsed in the popular -- perhaps I should say political -- imagination by those who use them exploitatively or carelessly. It is utterly inaccurate to equate the use of hallucinogens with social parasitism. Not everyone who drinks alcohol beats their wife, either -- but that is not an argument anyone needs to make to justify having a beer. Convention is on the beer drinker's side.

A management consultant friend of mine, one of the most fastidious and relentlessly logical people I know, regularly uses psilocybin mushrooms to "rewire", generating and assimilating new insights, both intellectual and emotional. His is a deliberate and rigorous use of the drug's self-improvement potential, the antithesis of the starry-eyed hippie stereotype. For him, the trips are like a Pollock painting; not to be "read" too literally, but a densely packed message nonetheless. He won't accept every part of the mushroom experience for what it first appears to be, but carefully teases out signal from noise during later reflection.

Other hallucinogens, too, have much to offer, as recognised by various premodern cultures in a traditional and ritual context. There are tribes in Brazil which use a dimethyltryptamine-containing brew called ayahuasca, prepared from an Amazonian vine, as a sacrament in shamanic rituals. Peyote has long been used by Native Americans as part of a spiritually intense rite of passage. Likewise for mushrooms in parts of Mexico. An episode of the riveting BBC TV documentary series called Tribe featured the intrepid Bruce Parry spending six weeks with the Babongo tribe in Gabon, Central Africa, which culminated in a gruelling three-day induction ritual in which he ingested iboga, a powerful naturally occurring hallucinogen that prompts its user to experience the consequences for other people of their own past decisions. (I thought of this when I saw the film adaptation of The Hitchhiker's Guide to the Galaxy, which features the "point of view gun", which blasts people with insights into the thoughts and feelings of the person pulling the trigger. It might take a little while before the neuroscience gets there, but won't it be interesting when that particular gadget hits the shelves...) Interestingly, iboga is the basis for ibogaine, which has been used to help recovering heroin addicts and alcoholics.

I would argue that the judicious use of substances in this way is symbolic of cultures that may be, in some important respects, wiser and more mature than our own; cultures that value fearless introspection; intimate connection to both community and place; cultivation of awareness and pursuit of non-material progress, and the improvement in relationships that results from this. The ritual context represents a culturally evolved setting that renders them psychologically safer and helps interpret the inner journey. Our own four decades of reticence regarding hallucinogens, in contrast, might be seen as a collective fear of the transformation that can result. Not to downplay the potential medical harm that can arise from psychedelic experience, but clearly, this alone is not sufficient to explain their suppression. Cars, alcohol, electricity, tall buildings, cigarettes, hamburgers -- every commonplace of technological civilisation can be deadly; we accept the risks because we have bought into the advantages. The possible advantages of wider usage of mushrooms are almost too delightful to imagine; the risks are, like any risks, simply the price of admission, and with care, can be mitigated.

So, where might this lead? It is commonly recognised, by those who pay attention to such things, that Western culture's hostility to using certain drugs -- if only for pleasure, let alone more constructive, therapeutic purposes -- has a use-by date. English comedian Ben Elton wrote a novel a few years ago called High Society which roundly criticised the British policy on illegal drugs, when so many people, including those who maintain a public stance of censure and disapproval, privately use them. Traffic, Steven Soderbergh's superb film released in 02000, highlighted the contradictions and tragic consequences of the United States' "war on drugs", the effects of which inevitably span not just across social strata within a country, but across borders to the places which serve as centres of production and distribution. Tagline: "No One Gets Away Clean".

There is a sense here that something's got to give; and it seems to me that to the extent the contemporary passion for self-improvement and the mania against hallucinogenic experience are profoundly at odds with one another, it is the latter rather than the former which must eventually give way. My own position on the "drug issue" is simple, and in no way original: decriminalise it all. Educate rather than prohibit; regulate production so people know what they're getting; tax it to pay for the regulatory apparatus, and hold people accountable for any irresponsible behaviour indulged under the influence, exactly the same way we do for alcohol. Now, I realise that to lump vastly differing chemical compounds together in discussion, and frame the debate around a broad freedom to engage in recreational drug usage, both of which sins I've committed in the last couple of paragraphs, begins to take us away from the point at hand. Heroin and crack cocaine, for instance, have nothing whatsoever to do with psilocybin mushrooms, and the fact that they are casually drafted into the same regulatory universe implies not only ignorance but also laziness on the part of legislators. I have no personal experience with LSD, but on the few occasions I've taken mushrooms, they have been entirely beneficial, and have given rise to insights and understandings of incalculable personal value. We certainly ought to be able to differentiate between different kinds of "drugs", which is rendered rather tricky when they're prohibited, so that following the rules virtually ensures having no idea what you're talking about.

There is much more that could be said, but others have said it earlier and better than I. Further reading for the uninitiated might begin with Daniel Pinchbeck's investigative memoir Breaking Open the Head, and the late hallucinevangelist Terence McKenna's The Archaic Revival. Meanwhile, however, insofar as this unassuming CNN article may be read as an early signal of growing public recognition of the value of psychedelic exploration, it is a precious example of very, very good news.

Wednesday, July 05, 2006

Learning the trade: the futurist's apprentice

I'd like to propose an idea for the futures community. The idea is that each and every professional futurist consider taking on an apprentice to mentor through to independence.

Not a new concept, I hasten to add -- as far as employment is concerned, it's one of the oldest ones there is, dating back to the late middle ages. It is not even a new idea for the futures community. There seems to be plenty of talk about "mentoring" in a broad sense, and many of the futurists I know can name one or two people that taught them "everything they know" -- or at least, most of what they knew at one time -- and with that, the inspiration and example which allowed them to go on to carve their own niche. There may even be future-oriented organisations that have a more or less established mentoring or apprenticeship system. I can't name any, but such may exist, and if this is the case, more power to them.

What I'm suggesting is that this be developed as a more habitual, and more widely known, mechanism for getting into the field -- because assuredly, at this time, getting good training in futures is not easy. And it needs to be easier, in the sense of lowering barriers and providing opportunities, if it is to catalyse the culture-changing work that needs to be done.

I've been contemplating this in light of the recent discussion of "The future of futurism" which suggested to me both that serious futurists aren't getting their message across to journalists and, by extension, the general public as effectively as they might like; and that certain people who are marketing themselves (in some ways more effectively) as futurists are simply making it up as they go along. (The predictive mode is almost antithetical to learning, because if it were really about learning, one of the first things it would have learned is that prediction doesn't work very well.) The upshot is that mechanisms to teach and learn real futures still require our close attention; which is the proximate reason for this piece.

As suggested above, then, mentorship is already a key part of accumulating practical experience in futures. To the extent they're not self-taught (the default situation), many futures practitioners can trace the steepest part of the learning curve to university courses like those at Houston, Swinburne, or Hawaii. Some win their spurs in organisations like GBN, Futuribles, or Royal Dutch/Shell. Some just attend conferences, do a lot of reading, and figure the rest out for themselves. The optimal situation for the neophyte would seem to be to experience a mixture of these. But the sine qua non of doing futures well appears to me to be lots of experience (generating, elucidating and pursuing future possibilities), and lots of conversations with people who know what they're talking about.

This gestures back to the long-standing discussion over whether futures is more an art or a science. To my mind, there's no longer much useful to say on this: it's a mixture -- try to fit it into one hole exclusively and you miss the other. Is it a profession or a trade? Again, it strikes me as a bit of both. (I'm doubtful that it's worth getting hung up on the distinctions, which can detract from actually doing anything useful. In this regard, I often think of Canadian futurist Ruben Nelson's observation at the 02005 WFSF conference in Hungary, to the effect that "future studies isn't 'the work'; improving the world is the work, and futures is a means to do that".)

But the field, to the extent that it's still not entirely consensually clear to itself what kind of enterprise it is at heart, can certainly stand to learn from comparison to more established activities. Take the practice of law, for instance. In law, the principles and content (the stuff you can learn from books) are important, but simply to master this corpus at an academic or intellectual level does not a lawyer make. There's a certain emphasis on content mastery in law -- lawyers are in large measure the gatekeepers of a parallel universe (perhaps I mean a microcosm, although it sure can feel like a parallel universe) ... one of arcane language and complex procedure. But, where lawyers are expert tour guides, interposing themselves between the client and surroundings that would otherwise be incomprehensible or even dangerous, futurists are more like expedition advisors, because they co-navigate and map with their clients terrain that no one has ever seen. Yet, if they do it well, in the process they'll impart some of their skills. It's more of a collaborative learning process. In law, the lawyer may learn, but as a content "expert" it generally behooves her to keep that ignorance a secret. Lawyers' repeat business is predicated on the unspoken notion that the law is far too complex, and the stakes too high, for regular people to be trusted to do anything much for themselves. Navigating change, while being as complex and as high-stakes as anything imaginable, is however something that, in my experience, many futurists would really like to teach the world to do. A highly desirable scenario for the future of futures may be, I think, one in which designated "futurists" are no longer necessary. (Come to that, so might one in which lawyers are no longer necessary -- but that's another story.) A wise culture, or as Slaughter puts it, a "social capacity" for foresight, would obviate most of the need for a futurist specialisation: but that's some time off yet.

Meanwhile, my point is that aspiring lawyers, for whom the emphasis falls more heavily on content mastery than it does for futurists (for whom process is at least as important), are required, in Australia and the UK at any rate, to undergo a period of traineeship or clerkship before they are licensed to practise in their own right.

Now, someone else might like to use this to make an argument for registration and regulation of the futures profession, but that's not my point. I'm suggesting this because it seems to me to be a rather crucial step in "learning the trade". If, as I would argue, futures is more about process than content, but structurally provides fewer opportunities for aspiring futurists to learn process (from example, on the job), then a major chance is being missed. On the other hand, if practising futurists as a group were to put this realisation into practice, the whole field could benefit from making that kind of semi-formal learning opportunity more common.

So why do I make this suggestion? My motivation is largely my own experience of trying to become a futurist. I first encountered the field formally at the World Futures Studies Federation biennial conference, in Brisbane in 01997. It was my final year of high school. I wound up studying for a BA (history and philosophy of science) and an LLB concurrently at the University of Melbourne. Richard Slaughter was unfortunately no longer there, and so there was no opportunity to study the field formally with an experienced academic futurist. In the final semester of my arts degree, I persuaded a lecturer with an interest in the sociology of technology to supervise me in directed reading on futures studies; however, my efforts were rewarded with the lowest grade I received throughout the whole degree. It wasn't so much that my essay was awful, although it probably was -- but the lesson I gleaned was that working within a different paradigm is like swimming upstream. It's hard work. Support is as rare as it is valuable.

I attended another WFSF conference in 02001, and the Budapest Futures Courses in 02001 and 02003, and the following year worked on two futures-related projects in England. On the first, I was a lone-ranger futurist in an education-oriented organisation; on the second, I had the privilege of working alongside Wendy Schultz, an exceptional academic and consulting futurist whose mentorship during the project was invaluable.

It was not until last year that I had a chance to start learning "the content" of futures formally, in Hawaii (which was in no small measure due to Wendy's encouragement and support). But in fact it wasn't so much the content I came for; I had already discovered that this can be learned independently. What I came to Hawaii for was the chance to build on the kind of mentorship I'd briefly experienced in that role in England -- to spend time in a community of philosophically rigorous, alert, futures-oriented people with whom I could get into deeper discussion without first having to patiently explain the basics of futures and field an hour of questions every time. (Working with the uninitiated is an indispensable part of the job, but it gets lonely.) So that's what I went to Hawaii for, and that is what, to my delight, I have found; not simply a community, but one interested in improving itself and delving more deeply into the many questions opened up by the unique Datorian perspective on the world.

At the same time, though, I have come to believe that by far the most important part of the learning process occurs in "doing" futures (rather than just reading or talking about it, though these are also essential), ideally under the supervision of someone who's been doing it for much longer. I've been fortunate to find that in Hawaii; but there are so few comparable opportunities that a broader solution is required. Hence the apprenticeship suggestion. Of course, the problems to which I've alluded are well known, but it sometimes seems that everyone's so busy trying to make a living that they don't have much time left over to address the structural reasons that make it difficult for them to make a living!

The reasons I see for a more regular futurist apprenticeship scheme are, then, in sum:

- Academic courses don't necessarily involve hands-on work, which I see as self-evidently essential to being a useful, effective consulting/community futurist. (We're scaling up our activity in Hawaii partly in recognition of the fact that this is the kind of experience students are hungry for.)

- Apprenticeship is a sensible pedagogical arrangement, given the commercial-in-confidence nature of much crucial futures work. With a junior futurist present to support the senior one, I believe "the futures field" would learn more effectively even in circumstances of confidentiality, retaining and transmitting "stories from the front" longer and better. The post-client debrief is also one of the best learning experiences for the junior futurist; without an apprentice there, the opportunity to have that vital conversation is, quite simply, passed over.

- The futures field consists of a relatively small number of practitioners, widely distributed around the world. Good-quality futures courses (academic or otherwise) are few and far between. Many countries have none at all, although most would at least have a few consulting futurists: why not capitalise on the learning opportunity that does exist, and thereby help compensate for the relative paucity of university futures programs? Apprenticeship is a sound response to this distribution problem.

- Practising futurists, especially those who don't teach, ought to find some way to at least "replace themselves"; failing that, they risk becoming an endangered species. The field needs them to do this, and so does the world in general.

- The apprentice would get the chance to absorb the philosophy and practice of a fully fledged futures practitioner and, in so doing, hone their own sense of the field, the dynamics of change, and the methodologies, while growing their own priceless reservoir of applied experience.

- The reputation, work and personal qualities of the practitioner play a large part in this line of work (as Peter Hayward, Director of the Swinburne Master's program in strategic foresight, pointed out in a futures salon given in Hawaii earlier this year). At this point in time, people don't tend to hire "a futurist" generically the way they might hire "an accountant" (nor am I implying that they should -- just observing that they don't). It's a rare enough position, and particular people are usually sought by name. Training with established individuals is a way for the field generally to leverage and broaden the "equity" in their work and reputation.

- It would help legitimate the field, and pass its learning down through time.

- From the senior futurist's perspective, an apprentice would be far more useful and rewarding than a personal assistant alone. The apprentice would need to be paid, of course, but not a vast amount, on-the-job training being the focus of the relationship.

So, who could take it forward? This is an idea that might benefit from the considered attention of the Association of Professional Futurists, but it would undoubtedly touch on the interests of people in the World Futures Studies Federation, the Institute for the Future, the World Future Society, and so forth. Again, I'm not suggesting that there aren't elements or examples of this already afoot -- there surely are, which is all the more reason to consider making it more widespread and well known; a conventional mechanism to advance an unconventional field.

Details remain to be worked out (the "indentured servitude" model, for example, may not be quite so viable today as in times past; and the traditional seven-year period seems like rather a long time), but the essential idea remains as viable as it has been for centuries. I'd be interested to know what others make of this.

Monday, July 03, 2006

"The future of futurism"

"The future of futurism: Down with the techno-utopians! Up with the techno-realists!"
By Reihan Salam
Slate, Thursday, June 29, 2006

The following is an unedited copy of my response to the above article, posted as "There's more to futures than you think" to Slate's discussion board, "The Fray". Apologies for the overlap with content of recent posts here at The Sceptical Futuryst entitled "The meming of futures" and "Prediction will eat itself". Some of these points can't be reiterated often enough, it seems...

--

I enjoyed reading this article, but as a futurist in training I find it disappointing -- and verging on extremely tiresome -- that the futures field is constantly characterised as the preserve of predictors and pundits only.

The assumption made in this article is that all futurists are interested exclusively in peddling a singular image of the future. The images may differ from person to person, but in Salam's portrayal, every futurist has an agenda to convince as many people as possible that "the future will be x". This is certainly true of some self-proclaimed futurists. But it has not at all been my experience as a graduate student in the futures program at the University of Hawaii at Manoa.

For example, Professor Jim Dator, who founded and runs the futures program, which now sits within the Department of Political Science, taught what is thought to have been the first university-level futures class back in 01967. He has also been Director of the Hawaii Research Center for Futures Studies since it was founded in 01971. Dator has built his career largely on the task of disabusing people of the mistaken impression that a predictive, singular attitude towards "The Future" is a viable and appropriate way to look at the subject. (Dator's so-called "first law of the future" is that "'the future' cannot be 'predicted' because 'the future' does not exist".) Indeed, the very reason for calling the field "futures studies", or simply "futures", is that since we don't know what will happen, we ought to consider many possibilities (hence the plural "-s"). This is a much more useful way to think about things that haven't happened, because it provides a basis for meaningful choices between genuinely different outcomes. Not all "futurists" think this way, but it's important to realise that simply because some people attempt to make a name for themselves as supposed crystal-ball-gazing gurus of the yet-to-be, not everyone with a future focus does the same. For the same reason that one doesn't cite Holocaust-denying "historian" David Irving as an exemplar of the study of history, we ought not to cite Nostradamus wannabes ("alternative-deniers"?) as if they represented futures studies as a whole. They simply don't.

Dator is not the only one who makes his living as a futurist by deepening people's appreciation of the plurality of possibilities, and helping them invent or pursue their preferred options. There are hundreds of others, attached to such organisations as the World Futures Studies Federation (which has an academic focus) and the Association of Professional Futurists (which has a consulting focus).

I'd encourage the author of this Slate piece to check the recent article by Jamais Cascio listing "Twelve Things Journalists Need To Know to be Good Futurist/Foresight Reporters". The first item on Cascio's list: "Nobody can predict the future." Journalists are indulging bad thinking and perhaps intellectual fraud by pretending otherwise.

Salam rightly argues that "we need clear-eyed futurists more than ever", but he attacks one of his targets for evincing "an unrealistic, blue-sky vision that discredits futurism". Lack of realism isn't the issue: wasn't ending slavery a "blue-sky vision" once upon a time? And how realistic was the 9/11 scenario in most people's minds, until it actually happened? No, lack of realism isn't the problem here. A lack of alternatives, rigorously thought out, is the problem -- and the best, clearest-eyed futurists can both discern and communicate multiple futures, instead of rallying converts behind a single one.

The starting point for Salam's article, about how easy it is to discredit futurists, is of course true if we consider only those who make a living out of prediction. Of course they're mostly wrong! But there's a lot more to futures than that, and the sooner and more widely this realisation dawns, the more productive all of our futures conversations can become.

Saturday, July 01, 2006

Don't Panic

Someone I regret I will never meet is the late, brilliant writer Douglas Adams, who died just over five years ago (11 May 02001). Adams was an extraordinarily original, funny and humane author, best known for his Hitchhiker's Guide to the Galaxy trilogy -- which ultimately expanded to five parts ... or was it six? Either way, it was a helluva trilogy. Adams was at least as inventive and eclectic as Neil Gaiman is, but without being the slightest bit self-important (no offence to Gaiman, also an excellent writer, but his chosen mythological subject matter does present certain hazards).

Not having read Adams at all in the last year or two, this week at a great second-hand bookstore just around the corner from Long Now, I picked up a copy of The Long Dark Tea-Time of the Soul, featuring his marvellous character Dirk Gently, the holistic detective. Then today, serendipitously, I ran across an essay by Adams from 01999 entitled "How to stop worrying and learn to love the internet". It's terrific. I'll certainly look out for more of his non-fiction. An extract:

I suppose earlier generations had to sit through all this huffing and puffing with the invention of television, the phone, cinema, radio, the car, the bicycle, printing, the wheel and so on, but you would think we would learn the way these things work, which is this:

1) everything that's already in the world when you're born is just normal;

2) anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it;

3) anything that gets invented after you're thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it's been around for about ten years when it gradually turns out to be alright really.

Apply this list to movies, rock music, word processors and mobile phones to work out how old you are.

Adams is highlighting the way that real change occurs between generations, as each successive one assimilates and builds on the inventions and achievements of the one before it. Shades of media theorist Marshall McLuhan: "We shape our tools, and thereafter our tools shape us." Thus does the history of technology compound into a shapeshifting cultural force, a medium for the human race to act upon and transform itself, and by extension the planet as a whole.

Now, I have a passing familiarity with the Strauss and Howe theory of generations, which argues that over the centuries generations cycle predictably from political conservatism to radicalism and back again, through a four-stage pattern. But we have no reliable way -- prospectively, as opposed to historically -- to determine when a sociologically relevant "generation" ends and the next begins, or to determine which significant unpredictable events may prove formative of a cohort's experience of the world, even if the character of their reaction could be forecast. As I see it, such an approach can't help but be redolent of horoscopes writ large. How, looking forward, can it afford anything useful in the way of concrete analytical application? (Not a rhetorical question; I'd welcome some responses on this.)

A more limited, but useful, approach to analysing generational change is to consider how the presence of existing and emerging technologies may form part of what's taken for granted, and built upon, by the children of today. (Further out than that, any generation-based analysis seems highly unlikely to help, because there are basically no raw materials -- read: people with experiences -- to work with, as yet.)

What is intriguing, in principle, about this transgenerational, technological baton-passing is the way that, incrementally, through the progressive naturalisation of layers of technology-driven change, society ends up in places far removed from what anyone ever imagined, let alone intended. In his 01984 book What Sort of People Should There Be?, an early and very thoughtful contribution to the debate on genetic engineering, the English ethicist Jonathan Glover points out that part of the problem with (what we'll call) compound technological change is that there are conceivable future worlds that would make folks today recoil, but which wouldn't seem at all objectionable to the people living in them, their values having adjusted by baby steps to accommodate most of the changes that got them there.

This is a fascinating philosophical point. It poses a serious problem for long-range thinking, let alone actual planning: if we won't have to live "then and there", then who are we to judge this or that scenario, and try to avoid or pursue it? (It's analogous to the ethical conundrum concerning "lives not worth living".) There's an implication here of some kind of inherent limit to the responsibility we can reasonably expect ourselves to take, perhaps mitigating what Jaron Lanier has called "karma vertigo", which unfortunately is a common affliction for long-term thinkers. (Long-term thinking itself, fortunately or not, is a far less common affliction.) And it may induce us zealous futurist types to put a little more stock in the ability of future generations to deal with their own era's problems in their own way, perhaps even accepting a zen-like element in our forward thinking that Douglas Adams surely would have endorsed: Don't Panic.

The meming of futures

Former ad-man James Othmer has published his first novel, The Futurist, with a soulless corporate bastard as its protagonist. Here's a partial synopsis from a recent book review:

"At one time, Yates was an idealist, an optimist who really believed in the futures he spun for people. But he's become morally malleable and, under the surface, it's starting to bother him."
. . .
"Working the Futureworld conference in Johannesburg, Yates drains the hotel minibar and gives a speech in which he recklessly tells the truth, admitting that he doesn't know anything -- that we don't. He declares himself the founding member of the Coalition of the Clueless."

The Coalition of the Clueless? Sign me up.

If this book has a broad impact, professional futurists may well see it as largely negative, adding to what some experience as the serious drag of constantly having to reassert their legitimacy against the taint of charlatans and frauds using the same labels. So, two responses occur to me.

First, even if that's the case, there may be a silver lining here, in that the book has apparently been optioned to be made into a film, with screenwriting genius Charlie Kaufman (Being John Malkovich; Adaptation; Eternal Sunshine of the Spotless Mind) rumoured to be considering the project. Anything with Kaufman's name on it is bound to be worthwhile, even if it inadvertently were to rain disgrace on my chosen profession.

Second -- and for real now -- it opens up the mainly interesting, only slightly arcane debate ("latent tension" may be a more accurate phrase) in this field between those who embrace the term futurist and those who shun it. For instance, Paul Saffo, with whom I discussed this as recently as yesterday, prefers the term "forecaster" for his own job title, because "futurist" to him implies advocacy of a particular outcome, which he wants to avoid. I'm sympathetic to this rationale, but have a different sense of the two terms. To me a futurist is someone engaged in thinking about futures broadly, while it's forecasters who have a special commitment to projecting "the right future".

Anyway, it's not that I don't understand or even share the strong impulse to dissociate oneself from self-appointed soothsayers and gurus, and to insist on quality and rigour in futures work. (Quite a few of the futurists I know err, if anything, on the side of paranoia about ensuring they're seen as careful thinkers.) Academic values are inseparable from how I have gone about learning the trade. Professor (of Foresight) Richard Slaughter's article concerning the three categories of depth in futures work, which are vastly different in scope and intention -- pop futures, problem-oriented futures, and critical/epistemological futures -- clearly underlines the need to look deeper than the advertised labels. Futures ain't futures.

Ultimately, the qualities ascribed to or associated with the terms must depend to a fairly high degree upon the characteristics and qualities of the people using them. That's the main point: be sure you're doing good work. We'll come back to that. Because I'm not entirely indifferent on the labelling issue. Yes, the contents matter more than the label, the steak more than the sizzle, and the book more than its cover -- but in each case the latter is not so easily separated from the former, nor should it be. I think it's not good enough for well-meaning "futurists" (whether they like that name or not) to abandon a swath of space -- words like "futures" and "futurist" -- which is prime mental real estate in the popular discourse, to cynical corporate drones or syrupy seers. The world needs those words, because the ideas they represent belong to us all. In my opinion, it's incumbent upon thoughtful practitioners and advocates of foresight to take back these memes, refusing to let them be held hostage by whoever it is they think is giving them a bad name.

I say, take back the meme "futurist", so that unless you're discussing 20th-century painting, the meaning "professional futures thinker" eclipses any association with the defunct quasi-fascistic Italian art movement. I say, take the word "futures" back from the corporate world, so it comes to be more commonly associated with the systematic exploration of possible worlds to come than with contracts to buy and sell non-existent commodities (this link is currently the top Google hit for "futures").

Other contenders in the labelling of the field? "Foresight" is apparently becoming a more popular term, particularly in Europe; but I think the privileging of the sight metaphor may be a problem. There's more to the future than "vision"; we have five senses -- and that's not including humour (which frequently trumps the others on the count of usefulness, in futures as in other things). "Futures studies" is fine, but anything "studies" sounds a bit like a hodgepodge or a plea for special interests, rather than the mature, coherent perspective in its own right which futures certainly is, if you know where to look. "Future studies"? Singular -- scratch that. It misses what to me is the main point of looking ahead: helping to make meaningful choices between alternatives. "Futuristics" and "futurology" are both so absurdly retro they're actually kind of cool. I might look for chances to engineer a comeback for them in the next few years. Meanwhile, though, my sense is that simply the term "futures" says pretty much everything that needs to be said in a name for the study, invention and design of alternative possibilities.

Now, as implied above, at one level what you call it all doesn't matter. The quality of the work -- including a distinct and deliberate lack of astrology, Nostradamus references, and overblown predictive pontification -- should speak for itself. The big challenge for those of us who would help create a wiser culture through working in this field, whatever you call it, lies partly in earning respect by doing good work. The other part of the puzzle is in lending that reputational capital to the (I trust we all agree) wrongfully and unfairly discredited enterprise of thinking ahead, by not avoiding the use of perfectly good words, like futurist. There seems to be a complex economy at work here, whereby the prestige and content of the name, and the work carried on under its auspices, are in tension. If being a "futurist" were universally regarded as vacuous and silly, then the charlatans wouldn't bother using the term. But to the extent that they do, it may deter good thinkers from referring to themselves that way. The meming of futures (whatever we call it) remains unfinished, with perhaps frustrating results at times; but paradoxically this can be taken as an encouraging sign that the field has resisted disciplinisation and ossification, and is to that extent still a conversation space worth exploring.

Zia Sardar (editor of the top scholarly journal Futures) observed at the WFSF conference in Budapest last year that futures is much like philosophy, in that anyone can do it and there's no formal patrolling of the label; yet there's evidently a world of difference between the best and the worst. You can't stop bad philosophers from using that job description, even as the geniuses and world-shaping intellectuals use it too. Language being as irreducibly treacherous as it is, the best we can do may be to ensure the quality of our work makes the case for us.

So, I look forward to a chance to read this new novel, and to the many future conversations it is sure to feature in, ready or not, if a film adaptation gets made. And if it were to wind up promoting widespread contempt for self-proclaimed futurists, well then: misconceptions, unfortunate though they may seem in one respect, always present a great learning opportunity. At least it would make futures a more prominent subject of discussion. Oscar Wilde: "There is only one thing in the world worse than being talked about, and that is not being talked about."