Wednesday, February 28, 2007

A roundup of the year's most interesting ideas in The New York Times Magazine at the end of 02006 included an entry on "hyperopia: an excess of farsightedness".
The piece referenced the research of two scholars at Columbia University, whose findings appeared in the Journal of Consumer Research, in an article entitled "Repenting Hyperopia" (available as a pdf here).
Whether forgoing pleasure today (for what reasons, the study doesn't discuss) retrospectively contributes to net happiness is a very interesting question. According to these authors, it doesn't. In their analysis, which entailed quizzing subjects about a range of decisions they'd made, the guilt that can result from "myopia" (choosing indulgence today) tends to fade over time, whereas the regret associated with missing out due to "hyperopia" (here identified with ascetic choices, described as "virtuous") tends to endure. This, say the authors, contradicts the "classic literature on self-control", which "assumes that consumers regret yielding to hedonic temptations".
An important reason why this could be the case, it seems to me, is that the study contains nothing linking decisions made to consequences experienced. It mainly illuminates the pattern that guilt doesn't appear to last as long as the regret of missed opportunity, which relates to the way those two types of emotion play out (the former is "hot" and dissipates quickly, the latter "cool" and slow to leave). By contrast, it would be interesting to see a similar study of, say, the retrospective feelings of cake lovers whose repeated choices to eat cake either did (in one group) or did not (in another) lead to obesity and/or heart disease. Similarly, smokers whose daily habit culminated in chronic emphysema might express rather more indulgence remorse than did the subjects in the studies in question, whose surveyed decisions were one-offs. If the incidence rates illuminating the risks associated with "myopic" behaviours (smoking/emphysema; poor diet/obesity) were evaluated against the rate and degree of regret experienced, we might come away with a better sense of how serious a problem "hyperopia" really is.
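To make that hot/cool asymmetry concrete, here's a minimal sketch in Python of the two decay profiles. The intensities and half-lives are invented for illustration, not taken from the study:

    def feeling(initial_intensity, half_life_days, days_elapsed):
        """Exponential decay of an emotion's intensity over time."""
        return initial_intensity * 0.5 ** (days_elapsed / half_life_days)

    # Assumed parameters: guilt is "hot" (intense but fast-fading), while
    # the regret of a missed opportunity is "cool" (milder but persistent).
    for days in (1, 30, 365, 3650):
        guilt = feeling(10.0, half_life_days=30, days_elapsed=days)
        regret = feeling(4.0, half_life_days=3650, days_elapsed=days)
        print(f"day {days:>4}: guilt = {guilt:5.2f}, regret = {regret:5.2f}")

On these made-up numbers, guilt starts out stronger but drops below regret within a couple of months, which is the shape of the pattern the authors report.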
Also, to bring out explicitly an important point implied in my remarks above: one slice of chocolate cake, or one bowl of fruit salad, is hardly grounds for getting worked up. The feelings we experience over time about single decisions to eat some cake, attend a party, or spend money on winter break (for such is the subject matter of this research) are relatively trivial compared to repeat instances or patterns of behaviour. By and large, we might hypothesise, it's not the one-time indulgence you'll regret, but the lifestyle. The temporal measure is only superficially about the passage of time per se; what's actually at issue is the accumulated consequence of lots of little decisions.
Turning to the macro scale, we can see the point more clearly. Global warming is not the result of one car trip. The devastation of rainforest is not caused by eating one hamburger. But multiply these things out by decades, and by millions of people, and we all have ample reason to regret those little indulgences. Now let's turn it around. Is it hyperopic not to drive to work one day because you're concerned about global warming? How about not eating that one hamburger because of the forest?
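The arithmetic of accumulation is easy to sketch. All the figures below are round, assumed numbers (not real emissions data), but they show how a negligible per-trip effect scales:

    # Round, assumed figures -- not real emissions data.
    kg_co2_per_commute = 5        # one short car trip (assumption)
    commutes_per_year = 250       # roughly one per working day
    drivers = 1_000_000
    years = 20

    total_tonnes = kg_co2_per_commute * commutes_per_year * drivers * years / 1000
    print(f"One commute: {kg_co2_per_commute} kg of CO2")
    print(f"{drivers:,} drivers over {years} years: {total_tonnes:,.0f} tonnes")

Five kilograms is nothing; twenty-five million tonnes is not.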
Agonising over these minor decisions on a one-off basis might be a symptom of hyperopia. On the other hand, deciding to ride a bicycle to work instead of driving an SUV, or deciding to become vegetarian to avoid the resource intensiveness of remaining omnivorous -- though both evince highly farsighted modes of thought, I would argue that neither is "hyperopic" in the pejorative sense.
In any case, people seem to have some psychological difficulty accepting cumulative responsibility for the incremental negative effects of their decisions; so research that evaluated the level of their regret as it mapped onto scaled-up consequences (for their health, or that of the planet) could be very interesting. A key issue seems to be whether decision-making effort is expended according to the magnitude of the stakes in question. And that could lead to better measures of myopia and hyperopia -- the extent to which people devote too little, or too much, deliberative effort to the actual risks associated with their behaviour.
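One crude way to operationalise such a measure would be an effort-to-stakes ratio. This is only a sketch, with both quantities on arbitrary 0-10 scales and the scores below entirely assumed:

    def deliberation_index(effort, stakes):
        """Ratio of deliberative effort to a decision's stakes, both on
        arbitrary 0-10 scales (an assumption of this sketch). Well below 1
        suggests myopia (too little thought for the actual risk); well
        above 1 suggests hyperopia (agonising over trivia)."""
        return effort / stakes

    decisions = {
        "one slice of cake": (6, 1),
        "a pack-a-day smoking habit": (2, 9),
        "commuting by SUV for twenty years": (1, 8),
    }
    for name, (effort, stakes) in decisions.items():
        print(f"{name}: {deliberation_index(effort, stakes):.2f}")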
One final point, then: in light of the above, it's unfortunate that this research uses the term hyperopia (literally, "farsightedness") to denote excessive delayed gratification, because it conflates foresightful decision-making (a genus of decision made with a view to its potential long-term, including cumulative, consequences) with the problem of overly zealous asceticism (a species of largely symbolic rationale with ideological-religious undertones). Their conclusion that "myopia may be farsighted after all" seems particularly susceptible to abuse by anyone who might like to argue that we're better off not thinking ahead, a claim analogous to the notion that the market will take care of everything, so there's no need to worry about policy to prevent long-term problems emerging from our collective economic activity. I'm not disputing the research findings as such, but simply pointing out that the way they're framed seems consistent with dismissing the value of thinking ahead. That conclusion again: "myopia may be farsighted after all". But surely, by definition, myopia is as bad as -- albeit different from -- hyperopia?
So, this research shows that sometimes we're better off, subjectively speaking, indulging ourselves today. I have no problem with that; it certainly comports with my own experience in certain areas (e.g. spending money while travelling). Being able to take that very finding into account enhances our capacity for looking ahead (as an avenue to choosing wisely), which is in my view an undervalued good in the world of decision-making, and one that ought to be thoughtfully encouraged and developed. Another term for the paralysis that can result from thinking too far out -- taking too much long-range responsibility -- is "karma vertigo" (coined by Jaron Lanier), and that's certainly a syndrome we'd do well to avoid in thinking ahead. But I fear that farsightedness is currently in too fragile a state to be given a bad name by dint of semantic carelessness on the part of these researchers. Ironically, this swipe at hyperopia could really exacerbate (what I regard as) already myopic tendencies in society.
Perhaps the problem is simply that their analysis of future orientation in decision-making is missing a category, one implied but not discussed: the sweet spot between the extremes of excessive farsightedness and shortsightedness, namely foresight.
Friday, February 23, 2007
Dawkins at Manoa
This week, evolutionary biologist, science advocate and atheism activist Richard Dawkins was a guest speaker at the UH-Manoa Distinguished Lecture Series. Dawkins gave presentations on two different topics: "Queerer than we can suppose: The strangeness of science" on Tuesday evening, and "Is evolution predictable?" on Wednesday afternoon. Both were oversubscribed, with standing room only at the afternoon session, and hundreds turned away in the evening.
An Oxford professor as well as a prolific author with a wide readership, Dawkins is one of a handful of popular science writers -- also including Stephen Jay Gould and Paul Davies -- whose work led me, in my mid-teens, to contemplate science journalism as a career. As it worked out, my first degree was in the history and philosophy of science (HPS). So, like many others, I owe an intellectual debt to Dawkins, whose crystal-clear use of analogical thinking helps make him as capable an explicator of complex scientific ideas as I've ever found.
Some of what he said on Tuesday reminded me of a fascinating class I took as an undergraduate, the University of Melbourne's HPS offering "Science, Life and Mind", which dealt with the various psychological pitfalls that affect the scientist's mission of apprehending the world rationally (such as the biases and heuristics research of Tversky and Kahneman). Focusing not on the cognitive details but on the broad-brush evolutionary limitations hemming in human perception, Dawkins made the point that, depending on the ecological niche they inhabit (particularly its scale, from microscopic life to megafauna), different animals operate different "world-representing software". As a result, the common-sense view of reality that evolution afforded us for navigating the "Middle World" (neither microscopic nor macroscopic in scale) where we humans live does not necessarily correspond to the nature of things in any absolute sense. One example is the spectrum of light visible to us, which presents itself, so to speak, as everything there is to see, but in fact comprises merely a fraction of the "larger rainbow". Another illusion born of our position in the scale of things is the perceived solidity of matter, since at a subatomic level "matter" turns out to be mostly empty space. In this way Dawkins, borrowing J.B.S. Haldane's turn of phrase, elaborated the idea of science being "queerer than we can suppose".
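Incidentally, the "larger rainbow" point is easy to put numbers on. Taking round, assumed boundaries for the electromagnetic spectrum (its edges are fuzzy), the calculation looks like this:

    import math

    # Visible light spans roughly 380-750 nm: about one octave of frequency.
    visible_octaves = math.log2(750 / 380)

    # Assume the wider spectrum runs from long radio waves (~10^5 m)
    # down to gamma rays (~10^-12 m) -- round figures for fuzzy boundaries.
    full_octaves = math.log2(1e5 / 1e-12)

    print(f"Visible: {visible_octaves:.2f} of {full_octaves:.1f} octaves")
    print(f"(about {visible_octaves / full_octaves:.1%} of the 'larger rainbow')")

On that reckoning we see less than two per cent of the spectrum, even measured generously on a logarithmic scale.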
He went on to add that in Middle World, "evolution has not equipped us to handle very improbable events". Things at the "very unlikely" end of the probability spectrum are by definition rare creatures, although they are sighted from time to time. What some regard as "miracles" are, says Dawkins, nothing other than highly improbable, but nonetheless possible -- very occasionally actual -- events. The spontaneous arising of a self-replicating molecule in the universe, that is, the advent of life, illustrates the point. Now, as you may be able to tell from the above, if you didn't know already, Dawkins expends a good deal of his professional effort advocating scientific epistemology, and simultaneously debunking religious belief (see his current book, entitled The God Delusion). I haven't read that one yet, although I read and learned a lot from Sam Harris's excellent The End of Faith, another recent work which makes a similar appeal to replace religion-motivated wishful thinking with reason. We'll come back to this point.
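Before moving on, the improbability point can be made quantitative: an event vanishingly unlikely on any single trial becomes all but expected given enough trials. A quick sketch, with a purely assumed per-trial probability:

    # Chance of at least one occurrence in n independent trials: 1 - (1 - p)**n.
    p = 1e-9    # per-trial probability -- an assumed, illustrative figure
    for n in (1, 10**6, 10**9, 10**11):
        at_least_once = 1 - (1 - p) ** n
        print(f"n = {n:>15,}: P(at least once) = {at_least_once:.3g}")

Across enough trials and enough time, the "miraculous" becomes merely a matter of waiting.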
Meanwhile, let's stay with the interesting line of thought mentioned above, that what can seem utterly outlandish within our narrow frame of reference becomes plausible in the big picture. Accordingly, it seems we might expect to discern more clearly the horizons of possibility if we take a broader look. So let's consider this idea in relation to temporality: it's not difficult to see how this notion illuminates the value of thinking very long term. To do so expands our sample of possibility space to encompass much more than just the "visible spectrum" of contemporary human experience. The practice of history (and other fields of study, for that matter) makes it plain that what we perceive immediately around us is far from an exhaustive representation of the range of possible ways to communicate, organise our societies, be human, and live life. The exploration of possible events yet unseen, and the imagining of ways of doing things yet untried, may therefore be important avenues of inquiry.
But I wonder how "scientific" they are. Let's run a thought experiment in which we rewind this universe's story to the (undoubtedly rather lonely) era before the advent of life. Surveying the dark and inhospitable landscape of a still-cooling earth, how foreseeable would that sudden eruption, that spilling forth of self-replicating matter into an otherwise apparently dead universe, have been? With, by definition, no precedent -- not a shred of evidence "on the record" (and no need to invoke the actual existence of such a record, nor anyone at the time to monitor it) -- I'd suggest it might not have been on the cosmic radar at all. If things can and do happen in this universe for which there is no evidence that they can or will happen, then we're in an even stranger position than Dawkins suggests. We're inhabitants of a place which, in principle and hence irretrievably, must remain in part beyond the grasp of the best scientific thought.
Now, let's come back to the other point, concerning "The God Delusion". Here I probably need to emphasise that I subscribe to no religious view that causes me any difficulty with the thesis that God is a delusion, and I'm quick to agree that there's any amount of historical evidence to buttress an argument condemning the tragic consequences of dogmatic indulgence of that genre of belief.
But I am not convinced that putting this delusion to rest is necessarily as important, or even desirable, as Dawkins seems to think. When I asked him, following the second presentation, to describe the sort of problems that he hopes or believes might be resolved if his admonitions against religious belief were properly heeded, he took the opportunity to ridicule Creationists -- the intellectual equivalent of shooting fish in a barrel -- and then alluded to the wondrously deepened appreciation of the world that would be afforded these converts to science (my term, not his).
This is, I'm sorry to say, utterly inadequate. I'm sure there are other arguments he could have made, examining the deleterious social consequences of monotheism; he may have developed these elsewhere (I look forward to acquainting myself with them). But the superior aesthetic value of science-approved truth is deeply questionable. And given that the most perilous problems of our age are deeply entwined not only with the ill logic of religious fanatics, but also with the material products of the scientific revolution and its heirs, the epistemological basis of the latter ought to be questioned as critically as that of the former. (Coming soon -- salvation! Oddly enough, from the worldview that brought you global warming, nuclear waste, and the hydrogen bomb...)
I can't imagine what they'd be, but I think we need new myths, not a priesthood of scientists telling people what they should and should not accept as true. Even if they're "right", my point is that we should beware an elitist and monopolistic politics of knowledge. It's unfortunate that this excellent scholar appears to be as prone to dogmatism and closed-mindedness as some of his opponents, which makes him hard to agree with on the grounds mentioned already, as well as unlikely -- practically speaking -- actually to win people over. If, as it seems, certain of his opponents' objections are beyond logic, then we can expect those objections to remain impervious to even the most astute and comprehensive rational argumentation.
Which implies, as I tried to suggest to Dawkins, a case for a more fundamental intervention in our "world-representing software". Since we didn't seem to get anywhere with that in the difficult forum of a public lecture Q&A session, I'd be interested to think with readers of this blog about what forms that intervention could, or ought to, take. I'll rephrase: do we need to, and can we, in some specific cultural or cognitive ways, "become posthuman" in order to rectify the ways in which we misapprehend the world? (In my darker moments I wonder if ego consciousness as manifested in humanity isn't an evolutionary dead-end in the making. Any thoughts?)
At one point Dawkins remarked, I believe partly in tribute to Douglas Adams (to whom his latest book is dedicated) and in tune with the title of his first presentation: "I think revelling in the absurd is something that scientists must learn to do". I certainly think so too. But I don't see much of Adams's marvellous capacity for savouring life's absurdity reflected in Dawkins's demagoguery -- except perhaps performatively, in the irony that the acute intellect of Richard Dawkins is deployed in a contradiction, pointing out the limitations of scientific knowledge (evidence-based knowing) on the one hand, but hawking it as an epistemological silver bullet on the other.
The uncompromising approach he takes is all the more unfair if we consider what I understand to be his own contention: that belief in (some version of) God is an artifact of an idiosyncratic psychological arrangement, a side-effect or glitch in our world-representing software. Given that such belief has apparently afflicted a large percentage of the people who have ever lived, he seems a bit too quick to cast it as evidence of pitiable, abject stupidity, a "cockeyed" world view, rather than -- like it or not -- part of the human condition. I say this even if it's a part of our software we'd be better off "recoding". A scientific prescription for how the world ought to be apprehended needs to be careful not to fall into a hubristic trap as insidious as the supposed consequences of pre-scientific ignorance.
Here I'm invoking a pragmatic question about the observed, and potential, effects in the world of subscribing to or proselytising for various beliefs, rather than just the comparative merits of the truth claims they make. None of this should be regarded as a defence of those who ignore scientific evidence in favour of literal belief in biblical accounts of how the world was made. But there are different kinds of truth and, to my mind at least, what ought to be believed is not as self-evident as Dawkins appears to suggest. In any case, as the nascent thought-technology of a handful of inhabitants of Middle World, human science should stay humble. Reason has its limits -- just ask a gnat.
If the most important developments in the unfolding of the universe can happen without evidence, we ought indeed to learn to appreciate and enjoy the absurd; and I'd include under that rubric the unknown, the ridiculous, and the highly improbable (once again, see Dator's second law). What I'm not clear on is how a prescription for universal adoption of scientific epistemology would necessarily or conclusively help with that. Science as we know it, or at least as exemplified by Richard Dawkins, seems much more predisposed to patrolling and regimenting knowledge than to encouraging genuine exploration. If, as Dawkins suggests, scientists themselves still need to learn to revel in the absurd, then how can they hope to teach that to the rest of us?