Showing posts with label Douglas Adams.

Wednesday, March 04, 2009

Amazing = mundane

Last October, standup comic Louis C.K. appeared on Late Night with Conan O'Brien and did an inspired four-minute rant about the paradoxical state of the world. Titled after his central observation in the bit, "Everything is amazing, nobody is happy", the clip has recently gone viral (e.g. via Vanity Fair), and its several appearances at YouTube (1, 2, 3) have together attracted over one million views so far. This ballooning popularity is due, we may surmise, to two things. 1. It's hilarious. 2. His view resonates.



The human alchemy which so rapidly transforms technological wonder into ennui has several consequences.

First, it is part of the way our technology layers over the generations: each is born to a new normal (a process described beautifully by Douglas Adams).

Second, it helps explain how things can seem to have been "going to hell for as long as anyone can remember" while in many ways improving overall in the long run (Paul Saffo).

Third, it suggests that futures work that attempts to leverage the principle of dazzling people ("Flying cars! Underwater houses! Flying houses!") may be questing in the wrong emotional register.

Here we come to the core issue, from a futurist standpoint. As Jamais Cascio wrote at Open the Future not long ago:

Changes rarely shock; more often, they startle or titillate, and very quickly get folded into the existing cultural momentum.
...
The folks in [a future] scenario don't just wake up one day to find their lives transformed; they live their lives to that point. They hear about new developments long before they encounter them, and know somebody who bought an Apple iLens or package of NuBacon before doing so themselves. The future creeps up on them, and infiltrates their lives; it becomes, for the people living there, the banal present.


So, rather than plundering the landscape of possible futures for its potential to startle, this line of thinking suggests it may be truer to our subject matter to try to convey the ordinary, quotidian quality of varied ways of being in the future (a purpose for which the already staggering variety of the past and present sets a fine precedent: there are a million different ways to be bored). But there's a real art to this. Making the extraordinary seem ordinary is an uncommon feat.

The most successful science fiction films, in a narrative or artistic sense, tend to suffuse whatever novelties they introduce with a lived-in quality that lends the texture of truth. The first work to spring to my mind in this category is Alfonso Cuarón's masterful Children of Men, about which this writer has said:

The reality of the hypothesis, or put another way, the plausibility of the scenario (the mechanism of which is never properly explained in the film) was asserted with such fluidity, confidence, and integrity of detail -- just the way we encounter the real world, which is crammed full of people accepting complete absurdities as wallpaper -- that I found myself drawn in, having to meet the story on its own terms.


The study of futures provides valuable arguments and heuristics for both "making the strange familiar, and the familiar strange". (I don't know who first suggested this provocative formula, but it has relevance for many endeavours, not least art and anthropology.) Devising and communicating what we might call "everyday futures" is an example of the former operation, and I agree with Jamais about our collective room for improvement there.

The latter, however, is no less important. Louis C.K. is looking for laughs, not social analysis, but the insight works either way. He's right: Everything is amazing. To stand back from this every day -- to discern in it not only the banal, but at one and the same time the beautiful and the bizarre -- is to stand in awe.

We are told of a Chinese curse that says "May you live in interesting times". I wonder if it makes things better or worse not even to realise when this has come true.

Related posts:
> Don't panic
> In praise of Children of Men

(via The Long Now Blog)

Thursday, February 26, 2009

Thoughts about feelies


"Feelies" from Deadline, Infocom, 01982 | Images via Infocom Gallery

Here's a rather wonderful example of creating in-story artifacts to help augment immersion in a narrative. Worldbuilders, take note.

In the 01980s, computer software publisher Infocom produced works of interactive fiction, avoiding the primitive graphics of the day in favour of a text-based interface with a relatively sophisticated parser (grammar interpretation engine), enabling users to type in more complex, speechlike instructions. Says Wikipedia, an authority on such geeky arcana, "Whereas most game developers sold their games mainly in software stores, Infocom also distributed their games via bookstores." We also learn that "Three components proved key to Infocom's success: marketing strategy, rich storytelling and feelies."

Wait a minute -- feelies?

It seems that one of the most distinctive and clever elements of Infocom's narrative strategy was to include fragments of the game world -- that is, props from within the story's universe -- inside the box with each game. A presentation by USC GamePipe Labs instructor Victor LaCour on the history of early videogames gives some examples (slide #25): these "feelies" included such objects as the protagonist's diary (Planetfall), the menu of an in-game restaurant (Ballyhoo), and a scratch-n-sniff card (Leather Goddesses of Phobos). In addition to upping the tangibility ante for the narrative world, they would also serve as an elegant form of copy protection (a disincentive for software pirates) since certain puzzles in the game could not be solved without those items.
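(For the uninitiated, here is a minimal sketch, in Python, of how those two ingredients -- a simple two-word parser and a feelie-gated puzzle -- might fit together. It is purely illustrative: Infocom's actual games were written in the company's own ZIL language and ran on the Z-machine interpreter, and the commands, item names and code below are invented for the example.)

```python
# Purely illustrative sketch: a toy two-word parser plus a "feelie" check.
# Not Infocom's actual ZIL/Z-machine implementation.

# The "feelie": a code the player can only find on the physical prop in the box.
# (Hypothetical value, invented for this example.)
FEELIE_DOSSIER_CODE = "LAB-7G"

VERBS = {"look", "take", "open", "say"}
NOUNS = {"dossier", "pills", "safe"}


def parse(command: str):
    """Reduce a typed command to a (verb, noun) pair, or None if not understood."""
    words = [w for w in command.lower().split() if w not in {"the", "a", "an"}]
    if len(words) == 2 and words[0] in VERBS and words[1] in NOUNS:
        return words[0], words[1]
    return None


def open_safe(player_input: str) -> str:
    """A puzzle gate: the safe only opens if the player types the code
    printed on the feelie that shipped in the game box."""
    if player_input.strip().upper() == FEELIE_DOSSIER_CODE:
        return "The safe clicks open."
    return "Nothing happens. Perhaps the answer is somewhere in the dossier..."


if __name__ == "__main__":
    print(parse("open the safe"))   # -> ('open', 'safe')
    print(open_safe("LAB-7G"))      # -> The safe clicks open.
```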

A detailed history of the rise and fall of Infocom was produced in 02000 by some MIT students (appropriately enough, since the company was an MIT spinoff to begin with), and although it doesn't mention feelies by that name, the paper captures a sense of the games' successful storytelling strategy in this era of early computer-based entertainment (p. 21):

The attraction of Infocom games was multi-faceted. At times, the games could bring the simple pleasure of reading a light, fast-paced novel, whose course could be affected by the reader. Other times, the games provided the intellectual satisfaction of solving a complicated logic puzzle. Without an image of the protagonist, players could identify with the main character and even imagining [sic] themselves in the role. A typical Infocom game allowed the user to feel as though he or she were living the life of a police detective, medieval hero, or space ranger.
...
Infocom’s games were extremely well written, and they provided uses [sic] with hours of enjoyment. But to claim this was the only reason for the success of their games is to tell only half of the story. The other half of the story lies in just how Infocom got people to buy their games in the first place: Infocom’s unique publishing and marketing strategies were crucial factors in the success of their games.

According to the company's marketing director, Mike Dornbrook (quoted in an Infocom FAQ -- thanks, Internet Archive -- cited by the MIT paper, pp. 23-24, FN 32-33), the story behind the feelies runs as follows:

The first exotic package was for Deadline (the third game, after Zork I and II). It was created because Marc Blank couldn't fit all the information he wanted to include into the 80K game size. Marc and the ad agency, Giardini/Russel (G/R), co-created the police dossier which included photos, interrogation reports, lab reports and pills found near the body. [See images at the top of this post.] The result was phenomenally successful, and Infocom decided to make all subsequent packages truly special (a big benefit was the reduction in piracy, which was rampant at the time).

The first 16 packages were done in collaboration with G/R.
...
We were spending a fortune on package design ($60,000 each on average in 1984 - just for design!), so we eventually decided to bring it in-house. I hired an Art Director, Carl Genatossio, a writer, a typesetting/layout person, and someone to manage all [the] "feelies" in the packages.
...
An unsung heroine of Infocom was our Production Manager, Angela Crews. She was responsible for acquiring the scratch-n-sniff cards, ancient Zorkmid coins, glow-in-the-dark stones, etc. which made the packages so distinctive. It was often an incredibly difficult task.

As for who oversaw all of this, again, there were many responsible.
...
I would estimate that each Infocom package had 1.5 man-years of effort invested in its creation.

For those curious not only to read about, but also to see, some marvellous feelies (actually feeling them online is, however, still technologically a bit of a stretch), the miraculously still-kicking Infocom Gallery (last updated in 02004) comes to the rescue with an archive of the company's releases, containing game descriptions and images of the artifacts included with each. It jumps out at me that two of Infocom's games were worked on by the brilliant British humorist Douglas Adams. The first was an adaptation of his famous Hitchhiker's Guide to the Galaxy, whose feelies include a microscopic space fleet:


...an order for the demolition of Arthur Dent's house:


...and a parallel order for the demolition of planet Earth:


The other Adams game is the intriguing Bureaucracy, which sounds like Kafka meets Monty Python:

When the bank refuses to acknowledge your change-of-address form, you'll find yourself entangled in a series of bureaucratic mishaps that take you from the feeding trough of a greedy llama to the lofty branches of a tree deep in the Zalagasan jungle.

This could be the most fun anyone will ever have with a tale of bureaucratic hell (outside of Gilliam's Brazil, perhaps). Accompanying feelies included a letter from your new employer, Happitec:


...and a copy of Popular Paranoia magazine:


Now, while the purposes may be different, I trust the creative parallels between feelies on the one hand, and "evidencing", reality prototyping, and future artifact creation on the other, are obvious. All are about concretely manifesting the paraphernalia of an otherwise imaginary, absent world, the better to inhabit it -- or at least to meet it on its own terms -- for a while.

There's much more in the Infocom back catalogue that we could explore here, but already it becomes clear how these games might have garnered a loyal following, and there's something inescapably sad in the fact that they seem to have died such a sudden death. As the Douglas Adams website puts it:

[G]raphics games came along and the computer using portion of the human race forgot all about 500,000 years of language evolution and went straight back to the electronic equivalent of banging rocks together - the point'n'click game. Infocom and most of its competitors went to the wall - signaling the arrival of the post-literate society.

(It makes me pleased to have been able to attend the Long Now's Funeral for Analog Television last week in Berkeley -- the passing of our once-cherished media should not simply go unnoticed.)

However, the Adams site goes on to note that with the Internet, "People have learned to type again and are taking an interest in interacting, via their computers, with other people and with content." Moreover, a dead genre turns out not to be entirely dead, but rather to have retired to remote corners of the Web like this (Infocom games still playable online include the Hitchhiker's Guide).

Even if the golden age of "interactive fiction" has passed, and its feelies are now the glorious preserve of only the most committed boffins, I can't shake the feeling that feelies have a future, too. It's curious that seeking antecedents to the future artifacts meme takes us down an overgrown path into the not-too-distant past, there to find that feelies -- tangible auxiliaries to a cutting-edge storytelling technology, concessions to meatspace -- may have a transreality staying power that, as a practice, seems timeless compared to the wonderfully quaint electronic games they were created merely to supplement.

Friday, February 23, 2007

Dawkins at Manoa

This week, evolutionary biologist, science advocate and atheism activist Richard Dawkins was a guest speaker at the UH-Manoa Distinguished Lecture Series. Dawkins gave presentations on two different topics: "Queerer than we can suppose: The strangeness of science" on Tuesday evening, and "Is evolution predictable?" on Wednesday afternoon. Both were oversubscribed, with standing room only in the afternoon session, and hundreds turned away in the evening.

An Oxford professor as well as a prolific author with a wide readership, Dawkins is one of a handful of popular science writers -- also including Stephen Jay Gould and Paul Davies -- whose work led me, in my mid-teens, to contemplate science journalism as a career. As it worked out, my first degree was in the history and philosophy of science (HPS). So, like many others, I owe an intellectual debt to Dawkins, whose crystal-clear use of analogical thinking helps make him as capable an explicator of complex scientific ideas as I've ever found.

Some of what he said on Tuesday reminded me of a fascinating class I took as an undergraduate, the University of Melbourne's HPS offering called "Science, Life and Mind", which dealt with the various psychological pitfalls that affect the scientist's mission of apprehending the world rationally (such as the biases and heuristics research of Tversky and Kahneman). Focusing not on the cognitive details but instead on the broad-brush evolutionary limitations hemming in human perception, Dawkins made the point that, depending on their ecological niche (particularly their scale, from microscopic life to megafauna), different animals operate different "world-representing software". As a result, the common-sense view of reality that evolution afforded us for navigating the "Middle World" (neither microscopic nor macroscopic in level) where we humans live does not necessarily correspond to the nature of things in any absolute sense. One example is the fact that the spectrum of light visible to us, which presents itself, so to speak, as everything there is to see, in fact comprises merely a fraction of the "larger rainbow". Another illusion born of our position in the scale of things is the perceived solidity of matter: at a subatomic level, "matter" turns out to be mostly empty space. In this way Dawkins, borrowing J.B.S. Haldane's turn of phrase, elaborated the idea of science being "queerer than we can suppose".

He went on to add that in Middle World, "evolution has not equipped us to handle very improbable events". Things at the "very unlikely" end of the probability spectrum are by definition rare creatures, although they are sighted from time to time. What some regard as "miracles" are, says Dawkins, nothing other than highly improbable, but nonetheless possible -- very occasionally actual -- events. The spontaneous arising of a self-replicating molecule in the universe, that is, the advent of life, illustrates the point. Now, as you may be able to tell from the above, if you didn't know already, Dawkins expends a good deal of his professional effort advocating scientific epistemology, and simultaneously debunking religious belief (see his current book, entitled The God Delusion). I haven't read that one yet, although I read and learned a lot from Sam Harris's excellent The End of Faith, another recent work which makes a similar appeal to replace religion-motivated wishful thinking with reason. We'll come back to this point.

Meanwhile, let's stay with the interesting line of thought mentioned above, that what can seem utterly outlandish within our narrow frame of reference becomes plausible in the big picture. Accordingly, it seems we might expect to discern more clearly the horizons of possibility if we take a broader look. So let's consider this idea in relation to temporality: it's not difficult to see how this notion illuminates the value of thinking very long term. To do so expands our sample of possibility space to encompass much more than just the "visible spectrum" of contemporary human experience. The practice of history (and other fields of study, for that matter) makes it plain that what we perceive immediately around us is far from an exhaustive representation of the range of possible ways to communicate, organise our societies, be human, and live life. The exploration of possible events yet unseen, and the imagining of ways of doing things yet untried, may therefore be important avenues of inquiry.

But I wonder how "scientific" they are. Let's run a thought experiment in which we rewind this universe's story back to the (undoubtedly rather dark and lonely) era before the advent of life. Surveying the inhospitable landscape of a still-cooling Earth, how foreseeable would that sudden eruption, that spilling forth of self-replicating matter into an otherwise apparently dead universe, have been? With, by definition, no precedent -- not a shred of evidence "on the record" (and no need to invoke the actual existence of such a record, nor anyone at the time to monitor it) -- I'd suggest it might not have been on the cosmic radar at all. If things can and do happen in this universe for which there is no evidence that they can or will happen, then we're in an even stranger position than Dawkins suggests. We're inhabitants of a place which -- in principle, and hence irretrievably -- must remain, in part, beyond the grasp of the best scientific thought.

Now, let's come back to the other point, concerning "The God Delusion". Here I probably need to emphasise that I subscribe to no religious view that causes me any difficulty with the thesis that God is a delusion, and I'm quick to agree that there's any amount of historical evidence to buttress an argument condemning the tragic consequences of dogmatic indulgence of that genre of belief.

But I am not convinced that putting this delusion to rest is necessarily as important, or even desirable, as Dawkins seems to think. When I asked him, following the second presentation, to describe the sort of problems that he hopes or believes might be resolved if his admonitions against religious belief were properly heeded, he took the opportunity to ridicule Creationists -- the intellectual equivalent of shooting fish in a barrel -- and then alluded to the wondrously deepened appreciation of the world that would be afforded these converts to science (my term, not his).

This is, I'm sorry to say, utterly inadequate. I'm sure there are other arguments he could have made, examining the deleterious social consequences of monotheism; he may have developed these elsewhere (I look forward to acquainting myself with them). But the superior aesthetic value of science-approved truth is deeply questionable, and given that the most perilous problems of our age are deeply entwined not only with the ill logic of religious fanatics, but also with the material products of the scientific revolution and its heirs, the epistemological basis of the latter ought to be questioned as critically as that of the former. (Coming soon -- salvation! Oddly enough, from the worldview that brought you global warming, nuclear waste, and the hydrogen bomb...)

I can't imagine what they'd be, but I think we need new myths, not a priesthood of scientists telling people what they should and should not accept as true. Even if they're "right", my point is to say we should beware an elitist and monopolistic politics of knowledge. It's unfortunate that this excellent scholar appears to be as prone to dogmatism and closed-mindedness as some of his opponents, which makes him hard to agree with on the grounds mentioned already, as well as unlikely -- practically speaking -- actually to win people over. If, as it seems, certain of their objections are beyond logic, then we can expect those objections to remain impervious to even the most astute and comprehensive rational argumentation.

Which implies, as I tried to suggest to Dawkins, a case for a more fundamental intervention in our "world-representing software". Since we didn't seem to get anywhere with that in the difficult forum of a public lecture Q&A session, I'd be interested to think with readers of this blog about what forms that intervention could, or ought to, take. To rephrase: do we need, and are we able, in some specific cultural or cognitive ways, to "become posthuman" in order to rectify the ways in which we misapprehend the world? (In my darker moments I wonder if ego consciousness as manifested in humanity isn't an evolutionary dead-end in the making. Any thoughts?)

At one point Dawkins remarked, I believe partly in tribute to Douglas Adams (to whom his latest book is dedicated) and in tune with the title of his first presentation: "I think revelling in the absurd is something that scientists must learn to do". I certainly think so too. But I don't see much of Adams's marvellous capacity for savouring life's absurdity reflected in Dawkins's demagoguery -- except perhaps performatively, in the irony that the acute intellect of Richard Dawkins is deployed in a contradiction, pointing out the limitations of scientific knowledge (evidence-based knowing) on the one hand, while hawking it as an epistemological silver bullet on the other.

The uncompromising approach he takes is all the more unfair if we consider what I understand to be his own contention, that belief in (some version of) God is an artifact of an idiosyncratic psychological arrangement, a side-effect or glitch in our world-representing software. Given that such belief has apparently afflicted a large percentage of the people who have ever lived, he seems a bit too quick to cast it as evidence of pitiable, abject stupidity, a "cockeyed" world view, rather than -- like it or not -- part of the human condition. I say this even if it's a part of our software we'd be better off "recoding". A scientific prescription for how the world ought to be apprehended needs to be careful not to fall into a hubristic trap as insidious as the supposed consequences of pre-scientific ignorance.

Here I'm invoking a pragmatic question about the observed, and potential, effects in the world of subscribing to or proselytising for various beliefs, rather than just the comparative merits of the truth claims they make. None of this should be regarded as a defence of those who ignore scientific evidence in favour of literal belief in biblical accounts of how the world was made. But there are different kinds of truth and, to my mind at least, what ought to be believed is not as self-evident as Dawkins appears to suggest. In any case, as the nascent thought-technology of a handful of inhabitants of Middle World, human science should stay humble. Reason has its limits -- just ask a gnat.

If the most important developments in the unfolding of the universe can happen without evidence, we ought indeed to learn to appreciate and enjoy the absurd; and I'd include under that rubric the unknown, the ridiculous, and the highly improbable (once again, see Dator's second law). What I'm not clear on is how a prescription for universal adoption of scientific epistemology would necessarily or conclusively help with that. Science as we know it, or at least as exemplified by Richard Dawkins, seems much more predisposed to patrolling and regimenting knowledge than to encouraging genuine exploration. If, as Dawkins suggests, scientists themselves still need to learn to revel in the absurd, then how can they hope to teach that to the rest of us?

Saturday, July 01, 2006

Don't Panic

Someone I regret that I will never meet is the late, brilliant writer Douglas Adams, who died just over five years ago (11 May 02001). Adams was an extraordinarily original, funny and humane author best known for his Hitchhiker's Guide to the Galaxy trilogy -- which ultimately expanded to five parts ... or was it six? Either way, it was a helluva trilogy. Adams was at least as inventive and eclectic as Neil Gaiman is, but without being the slightest bit self-important (no offence to Gaiman, also an excellent writer, but his chosen mythological subject matter does present certain hazards).

Not having read Adams at all in the last year or two, this week at a great second hand bookstore just around the corner from Long Now, I picked up a copy of The Long Dark Tea-time of the Soul, featuring his marvellous character Dirk Gently, the holistic detective. Then today, serendipitously, I ran across an essay by Adams from 01999 entitled "How to stop worrying and learn to love the internet". It's terrific. I'll certainly look out for more of his non-fiction. An extract:

I suppose earlier generations had to sit through all this huffing and puffing with the invention of television, the phone, cinema, radio, the car, the bicycle, printing, the wheel and so on, but you would think we would learn the way these things work, which is this:

1) everything that's already in the world when you're born is just normal;

2) anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it;

3) anything that gets invented after you're thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it's been around for about ten years when it gradually turns out to be alright really.

Apply this list to movies, rock music, word processors and mobile phones to work out how old you are.
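Adams's three rules amount to a tiny function of two inputs -- the year you were born and the year a given thing arrived. Just for fun, here is the rule paraphrased as a short Python sketch (my own rendering, not anything Adams wrote; the example years are arbitrary):

```python
def adams_reaction(birth_year: int, invention_year: int, current_year: int) -> str:
    """Douglas Adams's three rules of technology, paraphrased as code."""
    age_at_invention = invention_year - birth_year
    if age_at_invention <= 0:
        # Rule 1: already in the world when you were born.
        return "just normal"
    if age_at_invention < 30:
        # Rule 2: invented before you turned thirty.
        return "incredibly exciting -- you can probably make a career out of it"
    if current_year - invention_year >= 10:
        # Rule 3, second half: it's been around about ten years now.
        return "turned out to be alright really"
    # Rule 3, first half.
    return "against the natural order of things"


# Example (arbitrary years): someone born in 1952 judging a 1999 invention in 2006.
print(adams_reaction(birth_year=1952, invention_year=1999, current_year=2006))
```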

Adams is highlighting the way that real change occurs between generations, as each successive one assimilates and builds on the inventions and achievements of the one before it. Shades of media theorist Marshall McLuhan: "We shape our tools, and thereafter our tools shape us." Thus does the history of technology compound into a shapeshifting cultural force, a medium for the human race to act upon and transform itself, and by extension the planet as a whole.

Now, I have a passing familiarity with the Strauss and Howe theory of generations, which argues that over centuries they cycle predictably from political conservatism to radicalism and back again, through a pattern in four stages. But we have no reliable way -- prospectively, as opposed to historically -- to determine when a sociologically relevant "generation" ends and the next begins, or to determine which significant, unpredictable events may prove formative of a cohort's experience of the world, even if the character of their reaction could be forecast. As I see it, such an approach can't help but be redolent of horoscopes writ large. How, looking forward, can it afford anything useful in the way of concrete analytical application? (Not a rhetorical question; I'd welcome some responses on this.)

A more limited but useful approach to analysing generational change is to consider how the presence of existing and emerging technologies may form part of what's taken for granted, and will be built upon, by the children of today. (Further out than that, any generation-based analysis seems highly unlikely to help, because there are basically no raw materials -- read: people with experiences -- to work with, as yet.)

In principle, what is intriguing about this transgenerational, technological baton-passing is the way that, incrementally, through the progressive naturalisation of layer upon layer of technology-driven change, society ends up in places far removed from what anyone ever imagined, let alone intended. In his 01984 book What Sort of People Should There Be?, an early and very thoughtful contribution to the debate on genetic engineering, the English ethicist Jonathan Glover points out that part of the problem with (what we'll call) compound technological change is that there are conceivable future worlds that would make folks today recoil, but which wouldn't seem at all objectionable to the people living in them, their values having adjusted by baby steps to accommodate most of the changes that got them there.

This is a fascinating philosophical point. It poses a serious problem for long range thinking, let alone actual planning, which is that if we won't have to live "then and there", then who are we to judge this or that scenario, and try to avoid or pursue it? (It's analogous to the ethical conundrum concerning "lives not worth living".) There's an implication here of some kind of inherent limit to the responsibility we can reasonably expect ourselves to take, perhaps mitigating what Jaron Lanier has called "karma vertigo", which unfortunately is a common affliction for long-term thinkers. (Long-term thinking itself, fortunately or not, is a far less common affliction.) And it may induce us zealous futurist types to put a little more stock in the ability of future generations to deal with their own era's problems in their own way, perhaps even accepting a zen-like element in our forward thinking that Douglas Adams surely would have endorsed: Don't Panic.