Two Pairs of Novels (Part 2)

Commuting days until retirement: 202

The Tortoise and the Hare

I’m not sure exactly what originally put me on to the first one of this pair; but I do remember reading someone’s opinion that this was the ‘perfect novel’. Looking around, there are encomiums everywhere. Carmen Callil, the Virago publisher, republished the book in 1983 (it originally appeared in 1954), calling it ‘one of my favourite classics’. Jilly Cooper considers it ‘my best book of almost all time’. The author, Elizabeth Jenkins, died only recently, in 2010, at the age of 104. Her Guardian obituary mentions the book as ‘one of the outstanding novels of the postwar period’. The publisher has given it an introduction by none other than Hilary Mantel – a writer whose abilities I respect, having read and enjoyed the first two books in her Thomas Cromwell trilogy. She writes: ‘I have admired this exquisitely written novel for many years…’

So why, having read it, did I feel distinctly underwhelmed? Its title is The Tortoise and the Hare, and it tells the story of a somewhat troubled marriage. The husband is an eminent barrister, and the wife is – well, the wife of an eminent barrister. (We are talking about the early fifties here, after all.) And indeed, she seems like what we might nowadays call a trophy wife – the attractive and desirable spoils of her husband’s success. The novel is narrated in omniscient mode, but centred around the point of view of Imogen, the wife. She watches with concern her husband’s burgeoning friendship with Blanche Silcox, a worthy spinster of their village: no scarlet woman, but a capable and organising, but also middle-aged and tweedy one – the tortoise who threatens to snatch the prize from the hare. At the opening of the novel Imogen is with her husband in a bric-a-brac shop, tempted by a delicately decorated china mug. However he only sees where it has been damaged; he has the buying power and she feels unable to challenge him, even though she knows he would give in to her wishes if she were to insist.

So we see the ground rules established. Imogen’s inability to step outside the limits of the role forced upon her by the conventions of the time is in one way an exemplar of what is interesting about the book – as Mantel puts it, ‘its focus on a fascinating and lost social milieu’. But this also seems to me to be part of its weakness. The customs of the time overwhelm the characters to such an extent that they seem to lose any idiosyncrasy or inner volition; we are left with a sense of inert waxworks being trundled about on a stage, fixed expressions on their faces, and their limbs in stiffly adopted poses. True, there are one or two minor characters who break through the unruffled surface of stifling convention: an avant garde architect and his pretentious wife; her sister Zenobia, the ‘famous beauty’. Then there are some who seem to stretch convention almost too far, like the cerebrally challenged snob of a mother who sends her daughters to a school where they have ‘a particularly nice class of girl’. “I’m not crazy about examinations. They won’t have to earn their living, but they will have to keep their end up in society.” But the characterisation seems to me hardly any better here; they are all caricatures – another collection of waxworks, this time with crudely drawn masks fastened over their faces.

The other novel of my pair seemed to invite comparison because it is by another woman novelist writing at the same time, and for the trivial reason that she is also called Elizabeth. She wrote a dozen or so novels between 1945 and her death in 1975. Her reputation has probably suffered because it has been partly blotted out by that of the Hollywood film star of the same name, Elizabeth Taylor. In a comment on the back cover of another of her books, yet another Elizabeth-named novelist, Elizabeth Jane Howard, writes: ‘How deeply I envy any reader coming to her for the first time!’

Well I am one of these lucky readers; and I also noticed that a fellow WordPress blogger recently praised her in Vulpes Libris. And indeed, in looking this post up I also found an earlier one on this same author. This latter entry is about The Soul of Kindness, the novel I first read and was going to use for the comparison. However it struck me that it was published in 1964 – only ten years after Jenkins’ The Tortoise and the Hare, but a decade in which there was enormous social change. The Taylor novel struck me as outclassing the Jenkins one in all sorts of ways, but I felt I should try to make a fairer comparison, and so selected one which was almost contemporaneous – The Sleeping Beauty, published in 1953.

The Sleeping Beauty

Straight away we find a cast of characters which have all the idiosyncrasies, mixed motives and secrets that you’d expect in authentic, three-dimensional people. There’s the odd household consisting of two sisters – one damaged in a car accident, and one widowed, who fretfully runs the household, which includes a guest-house – and her daughter who has learning difficulties, as they wouldn’t have been called then. Alongside them is another widow, whose MP husband has recently drowned in a boating accident which their traumatised son survived, and the friend with whom she distracts herself in various random enthusiasms, including covert betting on horses. Into this circle is introduced Vinny, the slightly mysterious single man (some of whose mysteries are revealed later) and his insufferable headstrong mother. (“At least you know where you are with her,” comments one character. “But you don’t want to be there,” retorts another.)

So here we are, with real people in a real world. Jenkins’ characters seemed to me to be entirely defined by the conventions and the medium in which they move. They can only exist within it, like those floating, translucent sea creatures, which, taken out of water, collapse into nothing. Taylor’s characters, on the other hand, have a fish-like solidity; if taken out of the milieu in which they live they might gasp and flap, but would still be recognisably themselves. They are for all time, and indeed, once you are involved in the action of the novel and the interactions of the characters, the period background becomes just that – background. Only occasionally does a detail give you a sudden reminder of the time you’re in: someone is standing on a railway platform and is startled out of their thoughts by the clatter of a descending signal.

Taylor has a particular gift for sharp description and telling details. For example, here’s our first sight of that guest-house with the odd menage:

At the top of the cliff, but mostly hidden in trees, he could see a gabled Victorian house of tremendous ugliness, ivy over its dark walls and one upstairs window glinting evilly in the sunset.

This is intensely visual and wonderfully suggestive at the same time. Compare a typical descriptive passage from Jenkins, as two characters go for a walk together:

The trees were of unusual height. Against the pale blue sky their myriad leaves, now grey-green, now silver, shivered and whispered. Beneath, the river slid on, dark and clear, till it rounded over the weir in a glassy, greenish curve, then splintered into flakes, tresses, sheaves of foam that poured, thundering, to gush into the stream below.

This is all very well; evocative and elegantly written, pleasant to read, but oh, so conventional; and it doesn’t really get us anywhere. There’s nothing to lift the scene out of the ordinary and give it a purpose, or jolt us with a little shock and portend what is to come. We learn of nothing more than the pleasant surroundings in which the book’s characters live. Taylor’s descriptions, by contrast, are spare with well chosen detail, apt complements to what is happening in the novel.

On the basis of what I have read, I don’t think I’m being unfair to Jenkins. It’s true she had a Cambridge education at a time when that was still unconventional for women, and a life full of literary connections, knowing Virginia Woolf and Edith Sitwell. She wrote biographies as well as fiction. According to the Guardian piece I mentioned earlier, The Tortoise and the Hare is based on her own experience, with herself as the spurned lover – although she never married. But from what I have seen, novel writing was not her forte, while Elizabeth Taylor seems to me to have been a much greater talent. I look forward to reading more of her.

Two Pairs of Novels (Part 1)

Commuting days until retirement: 220

Why two pairs? Well, of my recent fiction reading on the train, I found that four novels fell naturally into pairs which invited comparison with one another. I was drawn to each of the novels by their reputations, and warm praise coming from various reviewers. In each case I found that the reputation of one of the pair seemed to me better deserved than the other.

I haven’t finished writing about the second pair, so rather than holding everything up I’ll publish what I’ve written about the first pair now. What was originally to be a single post seems to have fragmented, with the previous one, into three.

My first pair are both American and 20th century − but the similarities go beyond that. Both are what you would call campus novels, in that the action is centred around the life of a university. Both give you a strong idea on the first page of what is to come.

The first is Donna Tartt’s The Secret History − a debut novel that was an outstanding success in terms of sales when it first appeared in 1992. The setting is informed by Tartt’s experience as a student of classics in an Eastern American university. The second has also been popular, but in quite a different way. Its quiet, cerebral author died in 1994 and this particular book, Stoner by John Williams, attracted only modest attention on its publication in 1965, but was reissued in 2003 and proceeded to enjoy a huge boom on both sides of the Atlantic. Again it derives from the writer’s university experience, but this time as a teacher.

The Secret History’s opening doesn’t waste any time. From the first page we have:

The snow in the mountains had been melting and Bunny had been dead for several weeks before we came to understand the gravity of our situation. He’d been dead for ten days before they found him, you know. It was one of the biggest manhunts in Vermont history – state troopers, the FBI, even an army helicopter… It is difficult to believe that Henry’s modest plan could have worked so well…

The Secret History

Well, full marks for grabbing the reader’s attention. The author proceeds to go back and trace how this central event came about, and later its consequences. And trace it she does, in very great detail − sometimes, I found, in rather too much detail for me. Her account certainly has longueurs, as the movement of the characters from one encounter to another is carefully choreographed and each event constructed − at times it’s as if the stage directions are visible. And while from time to time you feel twinges of sympathy for the narrator Richard Papen, he’s hard to like. Clearly Tartt intends this, but 600 or so pages is a long time to spend with a slightly irksome companion.

On the positive side however, Tartt’s major characters − a little outlandish but for the most part just about believable − are well handled, I felt, as is particularly the way they coalesce − or not − as a group. We are introduced to them as a tightly bound coterie centred around a charismatic teacher of Greek, Julian Morrow. We are given to understand that, while employed as a teacher in a minor north-eastern American university and devoting his efforts to a very small circle of hand-picked students, this man is immensely cultured and has been on intimate terms with many major twentieth century figures from show business to high culture. He remains elusive and morally ambiguous; and for me he never seemed to escape from the page as a rounded character, but remained a rather improbable collection of attributes assembled by the author.

However the group of students at the centre of the story did achieve life of a sort in my mind. As Richard, uncomfortably conscious of his working class small-town roots in the West, slowly succeeds in working himself into the circle, what appears at first as an impenetrable, other-worldly group bound together by its eccentricity is slowly teased apart as the foibles of its individual members and the tensions between them become visible.

There’s Henry, the dominant member, perhaps also the most intellectual and serious. Improbably, as we learn at the start, he is the prime mover in the murder that takes place. Bunny, the murder victim, seems the polar opposite of Henry, raffish and unpredictable, yet they appear to have a mysteriously close, troubled – but not sexual – relationship with one another. There is a gay member, Francis, another rootless soul; and the group is made up by the twins Charles and Camilla (did that pair of names have the connotations it does now when the book was written? − an unlucky coincidence perhaps). They are inseparable for much of the novel, and a faint suggestion of incest hangs about them. Collectively, the combination of a devotion to high classical culture and the dependence of nearly all the characters on alcohol and/or drugs seemed somewhat incongruous to me. And I was a little bored at times by the endless passages in which people are shuffled between social events and each other’s rooms; it’s rather like reading a play with an excess of stage directions.

The author does nevertheless exploit well the volatile blend of character and circumstance she creates. But the problem with a spectacularly eventful plot is that you are necessarily placing your characters much closer to the dangerous cliff-edge of credibility. Tartt’s novel, it seems to me, like her unfortunate character Bunny, falls victim to this brinkmanship.

That’s not a charge you could make about Stoner. Here’s the equivalent passage on the first page that gives us our first sense of its flavour:

William Stoner entered the University of Missouri as a freshman in the year 1910, at the age of nineteen. Eight years later, during the height of World War I, he received his Doctor of Philosophy degree and accepted an instructorship at the same University, where he taught until his death in 1956. He did not rise above the rank of assistant professor, and few students remembered him with any sharpness after they had taken his courses. When he died his colleagues made a memorial contribution of a medieval manuscript to the University library. This manuscript may still be found in the Rare Books Collection, bearing the inscription: “Presented to the Library of the University of Missouri, in memory of William Stoner, Department of English. By his colleagues.”

An occasional student who comes upon the name may wonder idly who William Stoner was, but he seldom pursues his curiosity beyond a casual question. Stoner’s colleagues, who held him in no particular esteem when he was alive, speak of him rarely now; to the older ones, his name is a reminder of the end that awaits them all, and to the younger ones it is merely a sound which evokes no sense of the past and no identity with which they can associate themselves or their careers.

Stoner

Perhaps fittingly for a novel written by a professor about the life of a character whose adulthood is entirely spent teaching at a university, this introductory passage seems almost like an abstract at the top of an academic paper, in the way that it presents us with the boiled down essence of the narrative. Stoner’s life is viewed as if from a distance, so that the details can’t be made out; shrunk in this way it becomes insignificant, an impression which is heightened in the second paragraph. The tone is simultaneously flat and suggestive − inviting rather than compelling our sympathy and attention.

We then begin at the beginning and advance into the detail of Stoner’s life, and as we share his successes and failures, his moments of joy, of disappointment and of frustration, the distant and uninvolved perspective of the introduction stays in the back of our minds. I found this to be a strangely effective way of eliciting my sympathy. The plain, unshowy and dispassionate third person description seen in the introduction continues throughout the book, almost entirely from Stoner’s point of view. This sustained understatement reflects his own earnest, workmanlike character, and leaves us space to feel the emotional effects of his personal and professional high points and, rather more often, low ones. Stoner and his family, lovers and colleagues arise from the page fully formed; I was never aware of the mechanics by which they were created, or the stage directions. While, unlike Tartt’s novel, the story’s events are all too mundane and believable, I was gripped throughout by an emotional power that The Secret History never managed to invoke.

I wondered whether I was a little unfair in pitching the work of a young author against a more mature one; but checking up I find the difference is not that great: Tartt was 28 when hers was published, and Williams 43. We can perhaps point to the fact that The Secret History was a first novel, and Stoner Williams’ third. Either way, for me, Williams wins hands down in this comparison. In a word, I’d say that Stoner had soul − I felt a depth of integrity that was missing from The Secret History.

The next post will deal with my second pair – both English novels.

In Transit

Commuting days until retirement: 230

Among what would probably nowadays be called the commuting community − if it weren’t so hard to say, that is − things have changed over the years. In my previous incarnation as a commuter, before I spent a period working from home (something like 20 years ago) the range of train-bound activities was different. Most newspaper readers struggled with big broadsheets, trying vainly to avoid irritating their neighbours by obscuring their vision or tickling their ears as they turned the pages. Now they will either skim through a free tabloid (for London commuters like me there’s the Metro in the morning or the Standard on the way home) or skip nimbly through the contents of a morning daily on their tablet. If any were using mobile phones they could only at that time have been either texting or talking on them. The rumbling of the train was punctuated by the raucous piping of that horrible Nokia tune, or some other crude electronic noise. Reading on my journey now, I’m often startled by a sudden explosion of pop music, a brass band, a morning chorus of twittering birds or a concussion of breaking glass. It takes a few moments of perplexity before you come to and realise it’s just another ring tone.

Laptop computers in those earlier times were still sufficiently rare that one man I worked with avoided using one on the train because he was afraid everyone would think he was showing off. In the intervening years they multiplied rapidly for a while, only to have their entertainment functions largely replaced by tablets and smartphones. Of course there are still the laptop diehards, mostly using the train as an extension of the office. A discreet glimpse over one of their shoulders usually shows them to be working on a financial spreadsheet, or bent over some programming code. Such dedication! It may not save me from being a dull boy, but I do take care to keep my commuting time free of work − of that sort of work, anyway.

Many of course just lose themselves in music on their noise-cancelling headphones. There’s one man on my train every day, in both directions, whom I have never actually seen without a large pair of headphones on − and that includes his walk to and from the station. I often wonder whether they have been surgically implanted. And a whole new range of activities has sprung up that were not possible formerly, the most popular of all being just to play with your smartphone. That’s what I’m doing now, in writing this, if it counts as playing, which it probably does. From where I’m sitting now, on the morning train, just at my end of the carriage I can count four Metros, one Kindle, one book, one laptop, two phone-fiddlers (including me) and one handheld game console, right next to me. The iPads seem unusually rare today − although I have just spotted one up the other end of the carriage. That leaves two deep in conversation, one asleep, and two looking out of the window − earnest students of misty early morning fields, grazing cows, builders yards and back gardens ranging from the fanatically neat to the chaotically neglected.

And so the rows of heads comprising this very twenty-first century aggregation of social chit-chat, dreams, electronic jitterbugging and imaginary digital worlds sway in synchrony with the jolting of the train as it rattles towards the capital. In the meantime I am realising that this piece, intended as an introduction to a review of some of my recent fiction reading on the train, has outgrown its purpose and will have to be a post on its own. The reviewing will be coming next.

iPhobia

Commuting days until retirement: 238

If you have ever spoken at any length to someone who is suffering with a diagnosed mental illness − depression, say, or obsessive compulsive disorder − you may have come to feel that what they are experiencing differs only in degree from your own mental life, rather than being something fundamentally different (assuming, of course, that you are lucky enough not to have been similarly ill yourself). It’s as if mental illness, for the most part, is not something entirely alien to the ‘normal’ life of the mind, but just a distortion of it. Rather than the presence of a new unwelcome intruder, it’s more that the familiar elements of mental functioning have lost their usual proportion to one another. If you spoke to someone who was suffering from paranoid feelings of persecution, you might just feel an echo of them in the back of your own mind: those faint impulses that are immediately squashed by the power of your ability to draw logical common-sense conclusions from what you see about you. Or perhaps you might encounter someone who compulsively and repeatedly checks that they are safe from intrusion; but we all sometimes experience that need to reassure ourselves that a door is locked, when we know perfectly well that it is really.

That uncomfortably close affinity between true mental illness and everyday neurotic tics is nowhere more obvious than with phobias. A phobia serious enough to be clinically significant can make it impossible for the sufferer to cope with everyday situations; while on the other hand nearly every family has a member (usually female, but not always) who can’t go near the bath with a spider in it, as well as a member (usually male, but not always) who nonchalantly picks the creature up and ejects it from the house. (I remember that my own parents went against these sexual stereotypes.) But the phobias I want to focus on here are those two familiar opposites − claustrophobia and agoraphobia.

We are all phobics

In some degree, virtually all of us suffer from them, and perfectly rationally so. Anyone would fear, say, being buried alive, or, at the other extreme, being launched into some limitless space without hand or foothold, or any point of reference. And between the extremes, most of us have some degree of bias one way or the other. Especially so − and this is the central point of my post − in an intellectual sense. I want to suggest that there is such a phenomenon as an intellectual phobia: let’s call it an iphobia. My meaning is not, as the Urban Dictionary would have it, an extreme hatred of Apple products, or a morbid fear of breaking your iPhone. Rather, I want to suggest that there are two species of thinkers: iagorophobes and iclaustrophobes, if you’ll allow me such ugly words.

A typical iagorophobe will in most cases cleave to scientific orthodoxy. Not for her the wide open spaces of uncontrolled, rudderless, speculative thinking. She’s reassured by a rigid theoretical framework, comforted by predictability; any unexplained phenomenon demands to be brought into the fold of existing theory, for any other way, it seems to her, lies madness. But for the iclaustrophobe, on the other hand, it’s intolerable to be caged inside that inflexible framework. Telepathy? Precognition? Significant coincidence? Of course they exist; there is ample anecdotal evidence. If scientific orthodoxy can’t embrace them, then so much the worse for it − the incompatibility merely reflects our ignorance. To this the iagorophobe would retort that we have no logical grounds whatever for such beliefs. If we have nothing but anecdotal evidence, we have no predictability; and phenomena that can’t be predicted can’t therefore be falsified, so any such beliefs fall foul of the Popperian criterion of scientific validity. But why, asks the iclaustrophobe, do we have to be constrained by some arbitrary set of rules? These things are out there − they happen. Deal with it. And so the debate goes.

Archetypal iPhobics

Widening the arena more than somewhat, perhaps the archetypal iclaustrophobe was Plato. For him, the notion that what we see was all we would ever get was anathema – and he eloquently expressed his iclaustrophobic response to it in his parable of the cave. For him true reality was immeasurably greater than the world of our everyday existence. And of course he is often contrasted with his pupil Aristotle, for whom what we can see is, in itself, an inexhaustibly fascinating guide to the nature of our world − no further reality need be posited. And Aristotle, of course, is the progenitor of the syllogism and deductive logic. In Raphael’s famous fresco The School of Athens, the relevant detail of which you see below, Plato, on the left, indicates his world of forms beyond our immediate reality by pointing heavenward, while Aristotle’s gesture emphasises the earth, and the here and now. Raphael has them exchanging disputatious glances, which for me express the hostility that exists between the opposed iphobic world-views to this day.

School of Athens

Detail from Raphael’s School of Athens in the Vatican, Rome (Wikimedia Commons)

iPhobia today

It’s not surprising that there is such hostility; I want to suggest that we are talking not of a mere intellectual disagreement, but a situation where each side insists on a reality to which the other has a strong (i)phobic reaction. Let’s look at a specific present-day example, from within the WordPress forums. There’s a blog called Why Evolution is True, which I’d recommend as a good read. It’s written by Jerry Coyne, a distinguished American professor of biology. His title is obviously aimed principally at the flourishing belief in creationism which exists in the US − Coyne has extensively criticised the so-called Intelligent Design theory. (In my view, that controversy is not a dispute between the two iphobias I have described, but between two forms of iagorophobia. The creationists, I would contend, are locked up in an intellectual ghetto of their own making, since venturing outside it would fatally threaten their grip on their frenziedly held, narrowly based faith.)

Jerry Coyne

Jerry Coyne (Zooterkin/Wikimedia Commons)

But I want to focus on another issue highlighted in the blog, which in this case is a conflict between the two phobias. A year or so ago Coyne took issue with the fact that the maverick scientist Rupert Sheldrake was given a platform to explain his ideas in the TED forum. Note Coyne’s use of the hate word ‘woo’, often used by the orthodox in science as an insulting reference to the unorthodox. They would defend it, mostly with justification, as characterising what is mystical or wildly speculative, and without evidential basis − but I’d claim there’s more to it than that: it’s also the iagorophobe’s cry of revulsion.

Rupert Sheldrake

Rupert Sheldrake (Zereshk/Wikimedia Commons)

Coyne has strongly attacked Sheldrake on more than one occasion: is there anything that can be said in Sheldrake’s defence? As a scientist he has an impeccable pedigree, having a Cambridge doctorate and fellowship in biology. It seems that he developed his unorthodox ideas early on in his career, central among which is his notion of ‘morphic resonance’, whereby animal and human behaviour, and much else besides, is influenced by previous similar behaviour. It’s an idea that I’ve always found interesting to speculate about − but it’s obviously also a red rag to the iagorophobic bull. We can also mention that he has been careful to describe how his theories can be experimentally confirmed or falsified, thus claiming scientific status for them. He also invokes his ideas to explain aspects of the formation of organisms that, to date, haven’t been explained by the action of DNA. But increasing knowledge of the significance of what was formerly thought of as ‘junk DNA’ is going a long way to filling these explanatory gaps, so Sheldrake’s position looks particularly weak here. And in his TED talks he not only defends his own ideas, but attacks many of the accepted tenets of current scientific theory.

However, I’d like to return to the debate over whether Sheldrake should be denied his TED platform. Coyne’s comments led to a reconsideration of the matter by the TED editors, who opened a public forum for discussion on the matter. The ultimate, not unreasonable, decision was that the talks were kept available, but separately from the mainstream content. Coyne said he was surprised by the level of invective arising from the discussion; but I’d say this is because we have here a direct confrontation between iclaustrophobes and iagorophobes − not merely a polite debate, but a forum where each side taunts the other with notions for which the opponents have a visceral revulsion. And it has always been so; for me the iphobia concept explains the rampant hostility which always characterises debates of this type − as if the participants are not merely facing opposed ideas, but respective visions which invoke in each a deeply rooted fear.

I should say at this point that I don’t claim any godlike objectivity in this matter; I’m happy to come out of the closet as an iclaustrophobe myself. This doesn’t mean in my case that I take on board any amount of New Age mumbo-jumbo; I try to exercise rational scepticism where it’s called for. But as an example, let’s go back to Sheldrake: he’s written a book about the observation that housebound dogs sometimes appear to show marked  excitement at the moment that their distant owner sets off to return home, although there’s no way they could have knowledge of the owner’s actions at that moment. I have no idea whether there’s anything in this − but the fact is that if it were shown to be true nothing would give me greater pleasure. I love mystery and inexplicable facts, and for me they make the world a more intriguing and stimulating place. But of course Coyne isn’t the only commentator who has dismissed the theory out of hand as intolerable woo. I don’t expect this matter to be settled in the foreseeable future, if only because it would be career suicide for any mainstream scientist to investigate it.

Science and iPhobia

Why should such a course of action be so damaging to an investigator? Let’s start by putting the argument that it’s a desirable state of affairs that such research should be eschewed by the mainstream. The success of the scientific enterprise is largely due to the rigorous methodology it has developed; progress has resulted from successive, well-founded steps of theorising and experimental testing. If scientists were to spend their time investigating every wild theory that was proposed their efforts would become undirected and diffuse, and progress would be stalled. I can see the sense in this, and any self-respecting iagorophobe would endorse it. But against this, we can argue that progress in science often results from bold, unexpected ideas that come out of the blue (some examples in a moment). While this more restrictive outlook lends coherence to the scientific agenda, it can, just occasionally, exclude valuable insights. To explain why the restrictive approach holds sway I would look at how a person’s psychological make-up might influence their career choice. Most iagorophobes are likely to be attracted to the logical, internally consistent framework they would be working with as part of a scientific career; while those of an iclaustrophobic profile might be attracted in an artistic direction. Hence science’s inbuilt resistance to out-of-the-blue ideas.

Albert Einstein

Albert Einstein (Wikimedia Commons)

I may come from the iclaustrophobe camp, but I don’t want to claim that only people of that profile are responsible for great scientific innovations. Take Einstein, who may have had an early fantasy of riding on a light beam, but it was one which led him through rigorous mathematical steps to a vastly coherent and revolutionary conception. His essential iagorophobia is seen in his revulsion from the notion of quantum indeterminacy − his ‘God does not play dice’. Relativity, despite being wholly novel in its time, is often spoken of as a ‘classical’ theory, in the sense that it retains the mathematical precision and predictability of the Newtonian schema which preceded it.

Niels Bohr

Niels Bohr (Wikimedia Commons)

There was a long-standing debate between Einstein and Niels Bohr, the progenitor of the so-called Copenhagen interpretation of quantum theory, which held that different sub-atomic scenarios coexisted in ‘superposition’ until an observation was made and the wave function collapsed. Bohr, it seems to me, with his willingness to entertain wildly counter-intuitive ideas, was a good example of an iclaustrophobe; so it’s hardly surprising that the debate between him and Einstein was so irreconcilable − although it’s to the credit of both that their mutual respect never faltered.

Over to you

Are you an iclaustrophobe or an iagorophobe? A Plato or an Aristotle? A Sheldrake or a Coyne? A Bohr or an Einstein? Or perhaps not particularly either? I’d welcome comments from either side, or neither.

The Vault of Heaven

Commuting days until retirement: 250

Exeter Cathedral roof

The roof of Exeter Cathedral (Wanner-Laufer, Wikimedia Commons)

Thoughts are sometimes generated out of random conjunctions in time between otherwise unrelated events. Last week we were on holiday in Dorset, and depressing weather for the first couple of days drove us into the nearest city – Exeter, where we visited the cathedral. I had never seen it before and was more struck than I had expected to be. Stone and wood carvings created over the past 600 years decorate thrones, choir stalls and tombs, the latter bearing epitaphs ranging in tone from the stern to the whimsical. All this lies beneath the marvellous fifteenth century vaulted roof – the most extensive known of the period, I learnt. Looking at this, and the cathedral’s astronomical clock dating from the same century, I imagined myself seeing them as a contemporary member of the congregation would have, and tried to share the medieval conception of the universe above that roof, reflected in the dial of the clock.

Astronomical Clock

The Astronomical Clock at Exeter Cathedral (Wikimedia Commons)

The other source of these thoughts was the book I happened to have finished that day: Max Tegmark’s Our Mathematical Universe*. He’s an MIT physics professor who puts forward the view (previously also hinted at in this blog) that reality is at bottom simply a mathematical object. He admits that it’s a minority view, scoffed at by many of his colleagues – but I have long felt a strong affinity for the idea. I have reservations about some aspects of the Tegmark view of reality, but not one of its central planks – the belief that we live in one universe among a host of others. Probably to most people the thought is just a piece of science fiction fantasy – and has certainly been exploited for all it’s worth by fiction authors in recent years. But in fact it is steadily gaining traction among professional scientists and philosophers as a true description of the universe – or rather multiverse, as it’s usually called in this context.

Nowadays there is a whole raft of differing notions of a multiverse, each deriving from separate theoretical considerations. Tegmark combines four different ones in the synthesis he presents in the book. But I think I am right in saying that the first time such an idea appeared in anything like a mainstream scientific context was the PhD thesis of a 1950s student at Princeton in the USA – Hugh Everett.

The thesis appeared in 1957; its purpose was to present an alternative treatment of the quantum phenomenon known as the collapse of the wave function. A combination of theoretical and experimental results had come together to suggest that subatomic particles (or waves – the duality was a central idea here) existed as a cloud of possibilities, until interacted with, or observed. The position of an electron, for example, could be defined with a mathematical function – the wave function of Schrodinger – which assigned only a probability to each putative location. If, however, we were to put this to the test – to measure its location in practice – we would have to do this by means of some interaction, and the answer that would come back would be one specific position among the cloud of possibilities. But by carrying out such procedures repeatedly, it was shown that the probability of any specific result was given by the wave function. The approach to these results which became most widely accepted was the so-called ‘Copenhagen interpretation’ of Bohr and others, which held that all the possible locations co-existed in ‘superposition’ until the measurement was made and the wave function ‘collapsed’. Hence some of the more famous statements about the quantum world: Einstein’s dissatisfaction with the idea that ‘God plays dice’; and Schrodinger’s well-known thought experiment aimed to test the Copenhagen interpretation to destruction – the cat which is presumed to be simultaneously dead and alive until its containing box is opened and the result determined.
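(For anyone who likes to see the idea in symbols, here is a schematic gloss of my own – not taken from Everett’s thesis or Tegmark’s book. If ψ(x) is the electron’s wave function, the probability of finding it near position x is given by the Born rule, and the ‘collapse’ is the jump from that spread-out function to one concentrated at whatever position x₀ the measurement returns:

\[ P(x) = |\psi(x)|^{2}, \qquad \int_{-\infty}^{\infty} |\psi(x)|^{2}\,dx = 1 \]

\[ \psi(x) \;\longrightarrow\; \psi_{\text{after}}(x) \approx \delta(x - x_{0}) \quad \text{once a measurement yields } x_{0} \]

Everett’s proposal, described next, keeps the first line and simply denies that the second step ever happens.)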

Everett proposed that there was no such thing as the collapse of the wave function. Rather, each of the possible outcomes was represented in one real universe; it was as if the universe ‘branched’ into a number of equally real versions, and you, the observer, found yourself in just one of them. Of course, it followed that many copies of you each found themselves in slightly different circumstances, unlike the unfortunate cat which presumably only experienced those universes in which it lived. Needless to say, although Everett’s ideas were encouraged at the time by a handful of colleagues (Bryce DeWitt, John Wheeler) they were regarded for many years as a scientific curiosity and not taken further. Everett himself moved away from theoretical physics, and involved himself in practical technology, later developing an enjoyment of programming. He smoked and drank heavily and became obese, dying at the age of 51. Tegmark implies that this was at least partly a result of his neglect by the theoretical physics community – but there’s also evidence that his choices of career path and lifestyle derived from his natural inclinations.

During the last two decades of the 20th century, however, the multiverse idea began to be taken more seriously, and had some enthusiastic proponents such as the British theorist David Deutsch and indeed Tegmark himself. In his book, Tegmark cites a couple of straw polls he took among theoretical physicists attending talks he gave, in 1997 and again in 2010. In the first case, out of a response of 48, 13 endorse the Copenhagen interpretation, and 8 the multiverse idea. (The remainder are mostly undecided, with a few endorsing alternative approaches). In 2010 there are 35 respondents, of whom none at all go for Copenhagen, and 16 for the multiverse. (Undecideds remain about the same – to 16 from 18). This seems to show a decisive rise in support for multiple universes; although I do wonder whether it also reflects which physicists were prepared to attend Tegmark’s talks, his views having become more well known by 2010. It so happens that the drop in the respondent numbers – 13 – is the same as the disappearing support for the Copenhagen interpretation.

Nevertheless, it’s fair to say that the notion of a multiple universe as a reality has now entered the mainstream of theoretical science in a way that it had not done half a century ago. There’s an argument, I thought as I looked at that cathedral roof, that cosmology has been transformed even more radically in my lifetime than it had been in the preceding 500 years. The skill of the medieval stonemasons as they constructed the multiple rib vaults, and the wonder of the medieval congregation as they marvelled at the completed roof, were consciously directed to the higher vault of heaven that overarched the world of their time. Today those repeated radiating patterns might be seen as a metaphor for the multiple worlds that we are, perhaps, beginning dimly to discern.


*Tegmark, Max, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Allen Lane/Penguin, 2014

Jam Tomorrow

Commuting days until retirement: 255

George Osborne

The saintly Mr Osborne

My unlikely hero of the hour – well, of last week, anyway – is the gentleman whose title derives from a medieval tablecloth bearing a chessboard pattern: the Chancellor of the Exchequer, the Right Honourable George Gideon Oliver Osborne M.P. I’m as cynical about politicians as most people, but just occasionally one of them does something that seems so aligned with my own selfish interests that I develop something that looks fearfully like a soft spot for them.

In this case, of course, it was the recent budget. Before the budget, the number of commuting days you see at the top of this post was in danger of being at least doubled. It works like this: as you might not know if you’re very much younger than me, the money you are putting aside as a pension throughout your working life (or not as the case may be) is not taxed by the government; but the downside of this has always been that you were not allowed to do whatever you liked with it when you retired, however sensible your plans might be. Your only option was to use it to buy an annuity, which would supply you with an income in retirement. You could shop around for the best deal, but once you had signed up for one, that was it; you were in it for life.

With all the recent economic shenanigans, and the record low interest rates, annuity rates have been flatlining. I was increasingly aware that if I had retired in May 2015 at the age of 67, as I had planned, the income I’d be stuck with wouldn’t be what I had once expected. My best option was to hang on grimly for another year or two in the reasonable hope that annuity rates would look up a bit.

But since last week Mr. Osborne has unexpectedly changed all the rules. Now when people retire they will be able to grab it all and run – or perhaps set off in a Lamborghini or on a world cruise, as the media have not been slow to point out. I’m fairly hopeless with money, but not that hopeless; and at least there’s no need to wait around for things to look up before retiring. I can choose whatever investment option looks best for my income, and then update it as often as I want.

It still means halving my income – this is the time of life when you wish you had thought more about boring things like pensions when you were younger. But of course they were only relevant to old people, that curious species that had nothing to do with you. I was only guided into saving what I did by various advisers whose wisdom I belatedly recognise; otherwise I would probably have left it until it was much too late.

But for me, it’s worth it to sacrifice a comfortable income for the incalculable improvement in quality of life that consists in having the time and freedom to develop my own thoughts, rather than being harnessed much of the time in the traces of my employer’s narrow aspirations. (Necessarily so, but narrow nonetheless: you see why this blog is anonymous.) And so the figure at the top of this post is safe for now, and I’m starting to salivate at the prospect of jam tomorrow; maybe not today, but definitely sooner than the day after tomorrow. The jam in question may be rather abstract – not the sort of item to appear on an accountant’s balance sheet – but it’ll taste good to me. Mr. Osborne has obliged me by giving the top of the jar its first vigorous opening twist.

Sharp Compassion

Commuting days until retirement: 260

The wounded surgeon plies the steel
That questions the distempered part;
Beneath the bleeding hands we feel
The sharp compassion of the healer’s art
Resolving the enigma of the fever chart.
   (from T.S.Eliot – East Coker)

Do No Harm

A book review prompted this post: the book in question is Do No Harm: Stories of Life, Death and Brain Surgery, by Henry Marsh. I’ve read some more reviews since, and heard the author on the radio. I think I shall be devoting some commuting time to his book in the near future. It’s a painfully honest account of his life as a brain surgeon, a calling from which he’s about to retire. Painful, in that he is searchingly candid about failures as well as successes, and about the self-doubt which he has experienced throughout his career. In fact his first chapter begins – rather alarmingly, after a life in the profession: ‘I often have to cut into the brain and it is something I hate doing.’

Some examples: we hear what it is like to explain what has happened to a woman patient whom one of the mishaps that are always a risk in brain surgery has left paralysed down one side. He tells her he knows from experience that there is a good chance it will improve.

‘I trusted you before the operation,’ she said. ‘Why should I trust you now?’
I had no immediate reply to this and stared uncomfortably at my feet.

He describes visiting a Catholic nursing home which cares for patients with serious brain damage. He recognises some of the names on the doors, and realises them to be former patients of his own for whom things didn’t go as well as he would have hoped.

Marsh’s title Do No Harm is of course an ironic reference to the central tenet of the Hippocratic Oath; but we know that in modern medicine things aren’t as simple as that, and that every operation should only be undergone after a weighing up of the risks and likely downside against the benefits. These are never so stark as in neurosurgery, and, not surprisingly, its sheer stressfulness ranks above that of intervention in other parts of the body. Marsh rather disarmingly says, against the popular conception, that it mostly isn’t as technically complex as many other kinds of surgery – it’s just that there is so much more at stake when errors occur. He quotes an orthopaedic surgeon who attends Marsh himself for a broken leg, and hearing what he does, clearly feels much happier being in orthopaedics. “Neurosurgery,” he says, “is all doom and gloom.”

I have often imagined what it must be like to be wielding a scalpel, poised over a patient’s skin, and then to be cutting into a stomach or a leg – never mind a brain. Of course a long training, learning from the experts and eventually taking your own first steps would give you the confidence to be able to cope with this; but it’s to Marsh’s credit that he has retained such a degree of self-doubt. Indeed, he speculates on the personalities of some of the great neurosurgeons of the past, who didn’t have the technical facilities of today. Advances could often only be made by taking enormous risks, and Marsh imagines that it would sometimes have been necessary for them to insulate themselves against concern for the patient to a point that marks them as having not just overweening self-confidence, but positively psychopathic personalities.

I’m lucky not to have been under the knife myself very often, and never, thankfully, for the brain. But what I have experienced of medicine has shown me that personalities which, with their unusual degree of arrogance, verge on the psychopathic, are all too common in the higher reaches of the profession. One who sticks in my memory wasn’t actually treating me: he had been commissioned to supply case histories for a medical computer program that I was involved in producing. This was not neurosurgery, but another area of medicine, in which I was aware that he was pre-eminent. We went through these case histories, some of them very complex, which he explained. “I diagnosed that,” he would say, visibly preening himself. On the whole he was perfectly good to work with, provided you showed a decent amount of deference. But at one point he moved beyond the subject matter, to lay down how he thought the program should work. This was my area, and I could see some problems with what he was saying, and explained politely. There was an icy pause. Although never less than polite, he proceeded to make it clear that nobody, least of all a cypher like me, was authorised to challenge an opinion of his, on anything. (Later I went ahead and did it my way in any case.) This was back in the eighties, nearly 30 years ago. Thinking about him before writing this, I googled him, and sure enough, he’s still out there, working at a major hospital and with a Harley Street practice. He must indeed have had enormous ability to have been at the top of his area for so long; but as a practitioner of the healing art he was, like, I suspect, many others, definitely not cuddly.

On the radio programme I heard, Henry Marsh remarked that, if you accept praise for your successes, you must accept blame for the failures, and that there are medical practitioners who don’t follow that precept. I suspect that my friend described above would have been one of them; I have never encountered a level of self-regard that was so tangibly enormous. Perhaps we should be thankful that there are those who harness supreme technical skill to such overweening confidence – but you shudder a little at the thought of their scalpel in your brain. If this were to befall me, I’d much rather the knife was wielded by someone of Marsh’s cast of mind, rather than an equally skilful surgeon with all the humility of a bulldozer.

Read All About It (Part 2)

Commuting days until retirement: 285

You’ll remember, if you have paid me the compliment of reading my previous post, that we started with that crumbling copy of the works of Shakespeare, incongruously finding itself on the moon. I diverged from the debate that I had inherited from my brother and sister-in-law, to discuss what this suggested regarding ‘aboutness’, or intentionality. But now I’m going to get back to what their disagreement was. The specific question at issue was this: was the value – the intrinsic merit we ascribe to the contents of that book – going to be locked within it for all time and all places, or would its value perish with the human race, or indeed wither away as a result of its remote location? More broadly, is value of this sort – literary merit – something absolute and unchangeable, or a quality which exists only in relation to the opinion of certain people?

I went on to distinguish between ‘book’ as physical object in time and space, and ‘book’ regarded as a collection of ideas and their expression in language, and not therefore entirely rooted in any particular spatial or temporal location. It’s the latter, the abstract creation, which we ascribe value to. So immediately it looks as if the location of this particular object is neither here nor there, and the belief in absolutism gains support. If a work we admire is great regardless of where it is in time or space, then surely it is great for all times and all places?

But then, in looking at the quality of ‘aboutness’, or intentionality, we concluded that nothing possessed it except by virtue of being created by – or understood by – a conscious being such as a human. So, if it can derive intentionality only through the cognition of human beings, it looks as if the same is true for literary merit, and we seem to have landed up in a relativist position. On this view, to assert that something has a certain value is only to express an opinion, my opinion; if you like, it’s more a statement about me than about the work in question. Any idea of absolute literary merit dissolves away, to be replaced by a multitude of statements reflecting only the dispositions of individuals. And of course there may be as many opinions of a piece of work as readers or viewers – and perhaps more, given changes over time. Which isn’t to mention the creator herself or himself; anyone who has ever attempted to write anything with more pretensions than an email or a postcard will know how a writer’s opinion of their own work ricochets feverishly up and down from self-satisfaction to despair.

The dilemma: absolute or relative?

How do we reconcile these two opposed positions, each of which seems to flow from one of the conclusions in Part 1? I want to try and approach this question by way of a small example; I’m going to retrieve our Shakespeare from the moon and pick out a small passage. This is from near the start of Hamlet; it’s the ghost of Hamlet’s father speaking, starting to convey to his son a flavour of the evil that has been done:

I could a tale unfold whose lightest word
Would harrow up thy soul, freeze thy young blood,
Make thy two eyes, like stars, start from their spheres,
Thy knotted and combined locks to part
And each particular hair to stand on end,
Like quills upon the fretful porpentine.

This conveys a message very similar to something you’ll have heard quite often if you watch TV news:

This report contains scenes which some viewers may find upsetting.

So which of these two quotes has more literary value? Obviously a somewhat absurd example, since one is a piece of poetry that’s alive with fizzing imagery, and the other a plain statement with no poetic pretensions at all (although I would find it very gratifying if BBC newsreaders tried using the former). The point I want to make is that, in the first place, a passage will qualify as poetry through its use of the techniques we see here – imagery contributing to the subtle rhythm and shape of the passage, culminating in the completely unexpected and almost comical image of the porcupine.

Of course much poetry will try to use these techniques, and opinion will usually vary on how successful it is – on whether the poetry is good, bad or indifferent. And of course each opinion will depend on its owner’s prejudices and previous experiences; there’s a big helping of relativism here. But when it happens that a body of work, like the one I have taken my example from, becomes revered throughout a culture over a long period of time – well, it looks as if we have something like an absolute quality here. Particularly so, given that the plays have long been popular, even in translation, across many cultures.

Britain’s Royal Shakespeare Company has recently been introducing his work to primary school children from the age of five or so, and has found that they respond to it well, despite (or maybe because of) the complex language (a report here). I can vouch for this: one of the reasons I chose the passage I did was that I can remember quoting it to my son when he was around that age, and he loved it, being particularly taken with the ‘porpentine’.

So when something appeals to young, unprejudiced children, there’s certainly a case for claiming that it reflects the absolute truth about some set of qualities possessed by our race. You may object that I am missing the point of consigning Shakespeare to the moon – that it would be nothing more than a puzzle to some future civilisation, human-descended or otherwise, and therefore of only relative value. Well, in the last post I brought in the example of the forty thousand year old Spanish cave art, which I’ve reproduced again here.

Cave painting

A 40,000 year old cave painting in the El Castillo Cave in Puente Viesgo, Spain (www.spain.info)

In looking at this, we are in very much the same position as those future beings who are ignorant of Shakespeare. Here’s something whose meaning is opaque to us, and if we saw it transcribed on to paper we might dismiss it as the random doodlings of a child. But I argued before that there are reasons to suppose it was of immense significance to its creators. And if so, it may represent some absolute truth about them. It’s valuable to us as it was valuable to them – but admittedly in our case for rather different reasons. But there’s a link – we value it, I’d argue, because they did. The fact that we are ignorant of what it meant to them does not render it of purely relative value; it goes without saying that there are many absolute truths about the universe of which we are ignorant. And one of them is the significance of that painting for its creators.

We live in a disputatious age, and people are now much more likely to argue that any opinion, however widely held, is merely relative. (Although the view that any opinion is relative sounds suspiciously absolute.) The BBC has a long-running radio programme of which most people will be aware, called Desert Island Discs. After choosing the eight records they would want to have with them on a lonely desert island, guests are invited to select a single book, “apart from Shakespeare and the Bible, which are already provided”. Given this permanent provision, many people find the programme rather quaint and out of touch with the modern age. But of course when the programme began, even more people than now would have chosen one of those items if it were not provided. They have been, if you like, the sacred texts of Western culture, our myths.

A myth, as is often pointed out, is not simply an untrue story, but expresses truth on a deeper level than its surface meaning. Many of Shakespeare’s plots are derived from traditional, myth-like stories, and I don’t need to rehearse here any of what has been said about the truth content of the Bible. It will be objected, of course, that since fewer people would now want these works for their desert island, there is a strong case for believing that the sacred, or not-so-sacred, status of the works is a purely relative matter. Yes – but only to an extent. There’s no escaping their central position in the history and origins of our culture. Thinking of that crumbling book, as it nestles in the lunar dust, it seems to me that it contains – if in a rather different way – some of the absolute truths about the universe that are also to be found in the chemical composition of the dust around it. Maybe those future discoverers will be able to decode one but not the other; but that is a fact about them, and not about the Shakespeare.

(Any comments supporting either absolutism or relativism welcome.)

Read All About It (part 1)

Commuting days until retirement: 300

Imagine a book. It’s a thick, heavy, distinguished looking book, with an impressive tooled leather binding, gilt-trimmed, and it has marbled page edges. A glance at the spine shows it to be a copy of Shakespeare’s complete works. It must be like many such books to be found on the shelves of libraries or well-to-do homes around the world at the present time, although it is not well preserved. The binding is starting to crumble, and much of the gilt lettering can no longer be made out. There’s also something particularly unexpected about this book, which accounts for the deterioration. Let your mental picture zoom out, and you see, not a set of book-laden shelves, or a polished wood table bearing other books and papers, but an expanse of greyish dust, bathed in bright, harsh light. The lower cover is half buried in this dust, to a depth of an inch or so, and some is strewn across the front, as if the book had been dropped or thrown down. Zoom out some more, and you see a rocky expanse of ground, stretching away to what seems like a rather close, sharply defined horizon, separating this desolate landscape from a dark sky.

Yes, this book is on the moon, and it has been the focus of a long-standing debate between my brother and sister-in-law. I had vaguely remembered one of them mentioning this some years back, and thought it would be a way in to this piece on intentionality, a topic I have been circling around warily in previous posts. To clarify: books are about things – in fact our moon-bound book is about most of the perennial concerns of human beings. What is it that gives books this quality of ‘aboutness’ – or intentionality? When all’s said and done our book boils down to a set of inert ink marks on paper. Placing it on the moon, spatially and perhaps temporally distant from human activity, leaves us with the puzzle of how those ink marks reach out across time and space to hook themselves into that human world. And if it had been a book ‘about’, say, physics or astronomy, that reach would have been, at least in one sense, wider.

Which problem?

Well, I thought that was what my brother and sister-in-law had been debating when I first heard about it; but when I asked them it turned out that what they’d been arguing about was the question of literary merit, or more generally, intrinsic value. The book contains material that has been held in high regard by most of humanity (except perhaps GCSE students) for hundreds of years. At some distant point in space and time, perhaps after humanity has disappeared, does that value survive, contained within it, or is it entirely dependent upon who perceives and interprets it?

Two questions, then – let’s refer to them as the ‘aboutness’ question and the ‘value’ question. Although the value question wasn’t originally within the intended scope of this post, it might be worth trying to  tease out how far each question might shed light on the other.

What is a book?

First, an important consideration which I think has a bearing on both questions – and which may have occurred to you already. The term ‘book’ has at least two meanings. “Give me those books” – the speaker refers to physical objects, of the kind I began the post with. “He’s written two books” – there may of course be millions of copies of each, but these two books are abstract entities which may or may not have been published. Some years back I worked for a small media company whose director was wildly enthusiastic about the possibilities of IT (that was my function), but somehow he could never get his head around the concepts involved. When we discussed some notional project, he would ask, with an air of addressing the crucial point, “So will it be a floppy disk, or a CD-ROM?” (I said it was a long time ago.) In vain I tried to get it across to him that the physical instantiation, or the storage medium, was a very secondary matter. But he had a need to imagine himself clutching some physical object, or the idea would not fly in his mind. (I should have tried to explain by using the book example, but never thought of it at the time.)

So with this in mind, we can see that the moon-bound Shakespeare is what is sometimes called in philosophy an ‘intuition pump’ – an example intended to get us thinking in a certain way, but perhaps misleadingly so. This has particular importance for the value question, it seems to me: what we value is a set of ideas and modes of expression, not some physical object. And so its physical, or temporal, location is not really relevant. We could object that there are cases where this doesn’t apply – what about works of art? An original Rembrandt canvas is a revered object; but if it were to be lost it would live on in its reproductions, and, crucially, in people’s minds. Its loss would be sharply regretted – but so, to an extent, would the loss of a first folio edition of Shakespeare. The difference is that for the Rembrandt, direct viewing is the essence of its appreciation, while we lose nothing from Shakespeare when watching, listening or reading, if we are not in the presence of some original artefact.

Value, we might say, does not simply travel around embedded in physical objects, but depends upon the existence of appreciating minds. This gives us a route into examining the value question – but I’m going to put that aside for the moment and return to good old ‘aboutness’, since these thoughts also give us some leverage for developing our ideas there.

…and what is meaning?

So are we to conclude that our copy of Shakespeare itself, as it lies on the moon, has no intrinsic connection with anything of concern or meaning to us? Imagine that some disaster eliminated human life from the earth. Would the book’s links to the world beyond be destroyed at the same time, the print on its pages suddenly reduced to meaningless squiggles?  This is perhaps another way in which we are misled by the imaginary book.

Cave painting

A 40,000 year old cave painting in the El Castillo Cave in Puente Viesgo, Spain (www.spain.info)

Think of prehistoric cave paintings which have persisted, unseen, thousands of years after the deaths of those for whom they were particularly meaningful. Eventually they are found by modern men who rediscover some meaning in them. Many of them depict recognisable animals – perhaps a food source for the people of the time; and as representational images their central meaning is clear to us. But of course we can only make educated guesses at the cloud of associations they would have had for their creators, and their full significance in their culture. And other ancient cave wall markings have been discovered which are still harder to interpret – strange abstract patterns of dots and lines (see above). What’s interesting is that we can sense that there seems to have been some sort of purpose in their creation, without having any idea what it might have been.

Luttrell Psalter

A detail from the Luttrell Psalter (British Library)

Let’s look at a more recent example: the Luttrell Psalter, a marvellous 14th-century illuminated manuscript now in the British Library. (You can view it in wonderful detail by going to the British Library’s Turning the Pages application.) It’s a psalter, written in Latin, and so the subject matter is still accessible to us. Of more interest are the illustrations around the text – images showing a whole range of activities we can recognise, but as they were carried on in the medieval world. This of course is a wonderful primary historical source, but it’s also more than that. Alongside the depiction of these activities is a wealth of decoration, ranging from simple flourishes to all sorts of fantastical creatures and human-animal hybrids. Some may be symbols which no longer have meaning in today’s culture, and others perhaps just jeux d’esprit on the part of the artist. It’s mostly impossible now for us to distinguish between these.

Think also of the ‘authenticity’ debate in early music that I mentioned in Words and Music a couple of posts back. The full, authentic effect of a piece of music composed some hundreds of years ago, so one argument goes, could only be experienced as the composer intended if the audience were also of his time. Indeed, even today’s music, of any genre, will have different associations for, and effects on, a listener depending on their background and experience. And it’s quite common now for artists, conceptual or otherwise, to eschew any overriding purpose as to the meaning of their work, and instead to intend each person to interpret it in his or her own idiosyncratic way.

Rather too many examples, perhaps, to illustrate the somewhat obvious point that meaning is not an intrinsic property of inert symbols, such as the printed words in our lunar Shakespeare. In transmitting their sense and associations from writer to reader the symbols depend upon shared knowledge, cultural assumptions and habits of thought; something about the symbols, or images, must be recognisable by both creator and consumer. When this is not the case we are just left with a curious feeling, as when looking at that abstract cave art. We get a strong sense of meaning and intention, but the content of the thoughts behind it is entirely unknown to us. Perhaps some unthinkably different aliens will have the same feeling on finding the Voyager spacecraft, which was sent on its way with some basic information about the human race and our location in the galaxy. Looking at the cave patterns we can detect that information is present – but meaning is more than just information. Symbols can carry the latter without intrinsically containing the former; otherwise we’d be able to know what those cave patterns signified.

Physical signs can’t embody meaning of themselves, apart from the creator and the consumer, any more than a saw can cut wood without a carpenter to wield it. Tool use, indeed, in early man or advanced animals, is an indicator of intentionality – the ability to form abstract ‘what if’ concepts about what might be done, before going ahead and doing it. A certain cinematic moment comes to mind: the scene in Kubrick’s 2001: A Space Odyssey where the bone wielded as a tool by the primate creature in the distant past is thrown into the air, and cross-fades into a spaceship of the 21st century.

Here be dragons

Information theory developed during the 20th century, and is behind all the advances of the period in computing and communications. Computers are like the examples of symbols we have looked at: the states of their circuits and storage media contain symbolic information but are innocent of meaning. Which thought, it seems to me, leads us to the heart of the perplexity around the notion of aboutness, or intentionality. Brains are commonly thought of as sophisticated computers of a sort, which to some extent at least they must be. So how is it that when, in a similar sort of way, information is encoded in the neurochemical states of our brains, it is magically invested with meaning? In his well-known book A Brief History of Time, Stephen Hawking uses a compelling phrase when reflecting on the possibility of a universal theory. Such a theory would be “just a set of rules and equations”. But, he asks,

What is it that breathes fire into the equations and makes a universe for them to describe?

I think that, in a similar spirit, we have to ask: what breathes fire into our brain circuits to add meaning to their information content?

The Chinese Room

If you’re interested enough to have come this far with me, you will probably know about a famous philosophical thought experiment which serves to support the belief that my question is indeed a meaningful and legitimate one – John Searle’s ‘Chinese Room’ argument. But I’ll explain it briefly anyway; skip the next paragraph if you don’t need the explanation.

Chinese Room

A visualisation of John Searle inside the Chinese Room

Searle imagines himself cooped up in a rather bizarre room where he can only communicate with the outside world by passing and receiving notes through an aperture. Within the room he is equipped only with an enormous card filing system containing a set of Chinese characters and rules for manipulating them. He has Chinese interlocutors outside the room, who pass in pieces of paper bearing messages in Chinese. Unable to understand Chinese, he goes through a cumbersome process of matching and manipulating the Chinese symbols using his filing system. Eventually this process yields a series of characters as an answer, which are transcribed on to another piece of paper and passed back out. The people outside (if they are patient enough) get the impression that they are having a conversation with someone inside the room who understands and responds to their messages. But, as Searle says, no understanding is taking place inside the room. As he puts it, it deals with syntax, not semantics, or in the terms we have been using, symbols, not meaning. Searle’s purpose is to demolish the claims of what he calls ‘strong AI’ – the claim that a computer system with this sort of capability could truly understand what we tell it, as judged from its ability to respond and converse. The Chinese Room could be functionally identical to such a system (only much slower) but Searle is demonstrating that it is devoid of anything that we could call understanding.
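
To make the syntax-without-semantics point a little more concrete, here is a minimal sketch of my own – nothing to do with Searle’s actual text, and vastly cruder than the system his thought experiment assumes. It simply matches incoming strings of Chinese characters against a small rule table and passes back the prescribed reply; the phrases and rules are invented purely for illustration.

```python
# A toy 'Chinese Room': replies are produced purely by pattern-matching
# over symbol strings. Nothing here understands the messages; the rule
# table and the example phrases are invented for illustration only.

RULES = [
    # (pattern to look for in the incoming symbols, symbols to pass back out)
    ("你好", "你好！"),            # a greeting gets a greeting back
    ("天气", "今天天气很好。"),     # mention of the weather gets a stock remark
    ("名字", "我没有名字。"),       # a question about names gets a stock denial
]

def room_reply(message: str) -> str:
    """Match the incoming symbols against the rule table and return the
    prescribed output symbols, falling back to a stock response."""
    for pattern, response in RULES:
        if pattern in message:
            return response
    return "请再说一遍。"  # 'please say that again'

if __name__ == "__main__":
    for note in ["你好", "今天天气怎么样？"]:
        print(note, "->", room_reply(note))
```

Everything the room does is string matching; however large we made the rule table, nothing in it would thereby come to understand Chinese.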

If you have an iPhone you’ll probably have used an app called ‘Siri’ which has just this sort of capability – and there are equivalents on other types of phone. When combined with the remote server that it communicates with, it can come up with useful and intelligent answers to questions. In fact, you don’t have to try very hard to make it come up with bizarre or useless answers, or flatly fail. But that’s just a question of degree – no doubt future versions will be more sophisticated. We might loosely say that Siri ‘understands’ us – but of course it’s really just a rather more efficient Chinese Room. Needless to say, Searle’s argument has generated years of controversy. I’m not going to enter into that debate, but will just say that I find the argument convincing; I don’t think that Siri can ‘understand’ me.

So if we think of understanding as the ‘fire’ that’s breathed into our brain circuits, where does it come from? Think of the experience of reading a gripping novel. You may be physically reading the words, but you’re not aware of it. ‘Understanding’ is hardly an issue, in that it goes without saying. More than understanding, you are living the events of the novel, with a succession of vivid mental images. Another scenario: you are a parent, and your child comes home from school to tell you breathlessly about some playground encounter that day – maybe positive or negative. You are immediately captivated, visualising the scene, maybe informed by memories of your own school experiences. In both of these cases, what you are doing is not really to do with processing information – that’s just the stimulus that starts it all off. You are experiencing – the information you recognise has kicked off conscious experiences; and yes, we are back with our old friend consciousness.

Understanding and consciousness

Searle also links understanding to consciousness; his position, as I understand it, is that consciousness is a specifically biological function, not to be found in clever artefacts such as computers. But he insists that it’s purely a function of physical processes nonetheless – and I find it difficult to understand this view. If biologically evolved creatures can produce consciousness as a by-product of their physical functioning, how can he be so sure that computers cannot? He could be right, but it seems to be a mere dogmatic assertion. I agree with him that you can’t have meaning – and hence intentionality – without consciousness. In effect, then, although he denies it, he leaves open the possibility that a computer (and thus, presumably, the Chinese Room as a whole) could be conscious. But he does have going for him the immense implausibility of that idea.

Dog

How much intentionality?

So does consciousness automatically bring intentionality with it? In my last post I referred to a dog’s inability to understand or recognise a pointing gesture. We assume that dogs have consciousness of some sort – in a simpler form, they have some of the characteristics which lead us to assume that other humans like ourselves have it. But try thinking yourself for a moment into what it might be to inhabit the mind of a dog. Your experiences consist of the here and now (as ours do) but probably not a lot more. There’s no evidence that a dog’s awareness of the past consists of more than simple learned associations of a Pavlovian kind. They can recognise ‘walkies’, but it seems a mere trigger for a state of excitement, rather than a gateway to a rich store of memories. And they don’t have the brain power to anticipate the future. I know some dog owners might dispute these points – but even if a dog’s awareness extends beyond ‘is’ to ‘was’ and ‘will be’, it surely doesn’t include ‘might be’ or ‘could have been’. Add to this the dog’s inability to use offered information to infer that the mind of another individual contains a truth about the world that hitherto has not been in its own mind (i.e. the ability to understand pointing – see the previous post), and it starts to become clearer what is involved in intentionality. Mere unreflective experiencing of the present moment doesn’t lead to the notion of the objects of your thought, as distinct from the thought itself. I don’t want to offend dog-owners – maybe their pets’ abilities extend beyond that; but there are certainly other creatures – conscious ones, we assume – who have no such capacity.

So intentionality requires consciousness, but isn’t synonymous with it: in the jargon, consciousness is necessary but not sufficient for intentionality. As hinted earlier, the use of tools is perhaps the simplest indicator of what is sufficient – the ability to imagine how something could be done, and then to take action to make it a reality. And the earliest surviving evidence from prehistory of something resembling a culture is taken to be the remains of ancient graves, where objects surrounding a body indicate that thought was given to the body’s destiny – in other words, there was a concept of what may or may not happen in the future. It’s with these capabilities, we assume, that consciousness started to co-exist with the mental capacity which made intentionality possible.

So some future civilisation, alien or otherwise, finding that Shakespeare volume on the moon, will have similar thoughts to those that we would have on discovering the painted patterns in the cave. They’ll conclude that there were beings in our era who possessed the capacity for intentionality, but they won’t have the shared experience which enables them to deduce what the printed symbols are about. And, unless they have come to understand better than we do what the nature of consciousness is, they won’t have any better idea what the ultimate nature of intentionality is.

The value of what they would find is another question, which I said I would return to – and will. But this post is already long enough, and it’s too long since I last published one – so I’ll deal with that topic next time.

A Few Pointers

Commuting days until retirement: 342

Michelangelo’s finger

The act of creation, in the detail from the Sistine Chapel ceiling which provides Tallis’s title

After looking in the previous post at how certain human artistic activities map on to the world at large, let’s move our attention to something that seems much more primitive. Primitive, at any rate, in the sense that most small children become adept at it before they develop any articulate speech. This post is prompted by a characteristically original book by Raymond Tallis which I read a few years back – Michelangelo’s Finger. Tallis shows how pointing is a quintessentially human activity, depending on a whole range of capabilities that are exclusive to humans. In the first place, it could be thought of as a language in itself – Pointish, as Tallis calls it. But aren’t pointing fingers, or arrows, obvious in their meaning, and capable of only one interpretation? I’ve thought of a couple of examples to muddy the waters a little.

Pointing – but which way?

TV aerial

Which way does it point?

The first is perhaps a little trivial, even silly. Look at this picture of a TV aerial. If asked where it is pointing, you would say the TV transmitter, which will be in the direction of the thin end of the aerial. But if we turn it sideways, as I’ve done underneath, we find what we would naturally interpret as an arrow pointing in the opposite direction. It seems that our basic arrow understanding is weakened by the aerial’s appearance and overlaid by other considerations, such as a sense of how TV aerials work.

My second example is something I heard about which is far more profound and interesting, and deliciously counter-intuitive. It has to do with the stages by which a child learns language, and also with signing, as used by deaf people. Two facts are needed to explain the context. The first is that, as you may know, sign language is not a mere substitute for language, but is itself a language in every sense. This can be demonstrated in numerous ways: for example, conversing in sign has been shown to use exactly the same area of the brain as does the use of spoken language. And, in contrast to those rare and tragic cases where a child is not exposed to language in early life and consequently never develops a proper linguistic capability, young children using only sign language at this age are not similarly handicapped. Generally, for most features of spoken language, equivalents can be found in signing. (To explore this further, you could try Oliver Sacks’ book Seeing Voices.) The second fact concerns conventional language development: at a certain stage, many children, hearing themselves referred to as ‘you’, come to think of ‘you’ as a name for themselves, and start to call themselves ‘you’; I remember both my children doing this.

And so here’s the payoff: in most forms of sign language, the sign for ‘you’ is simply to point at the person one is speaking to. But children who are learning signing as a first language will make exactly the same mistake as their hearing counterparts, pointing at the person they are addressing in order to refer to themselves. We could say, perhaps, that they are still learning the vocabulary of Pointish. The aerial example didn’t seem very important, as it merely involved a pointing action that we ascribe to a physical object. Of course the object itself can’t have an intention; it’s only a human interpretation we are considering, which can work either way. This sign language example is more surprising because the action of pointing – the intention – is a human one, and in thinking of it we implicitly transfer our consciousness into the mind of the pointer, and attempt to get our head around how they can make a sign whose meaning is intuitively obvious to us, but intend it in exactly the opposite sense.

What’s involved in pointing?

Tallis teases out how pointing relies on a far more sophisticated set of mental functions than it might seem to involve at first sight. As a first stab at demonstrating this, there is the fact that pointing, either the action or the understanding of it, appears to be absent in animals – Tallis devotes a chapter to this. He describes a slightly odd-feeling experience which I have also had, when throwing a stick for a dog to retrieve. The animal is often in a high state of excitement and distraction at this point, and dogs do not have very keen sight. Consequently it often fails to notice that you have actually thrown the stick, and continues to stare at you expectantly. You point vigorously with an outstretched arm: “It’s over there!” Intuitively, you feel the dog should respond to that, but of course it just continues to watch you even more intensely, and you realise that it simply has no notion of the meaning of the gesture – no notion, in fact, of ‘meaning’ at all. You may object that there is a breed of dog called a Pointer, because it does just that – points. But let’s just examine for a moment what pointing involves.

Primarily, in most cases, the key concept is attention: you may want to draw the attention of another to something. Or maybe, if you are creating a sign with an arrow, you may be indicating by proxy where others should go, on the assumption that they have a certain objective. Attention, objective: these are mental entities which we can only ascribe to others if we first have a theory of mind – that is, if we have already achieved the sophisticated ability to infer that others have minds, and a private world, like our own. Young children will normally start to point before they have very much speech (as opposed to language – understanding develops in advance of expression). It’s significant that autistic children usually don’t show any pointing behaviour at this stage. Lack of insight into the minds of others – an under-developed theory of mind – is a defining characteristic of autism.

So, returning to the example of the dog, we can take it that for an animal to show genuine pointing behaviour, it must have a developed notion of other minds, which seems unlikely. The action of the Pointer dog looks more like instinctive behaviour, evolved through the cooperation of packs and accentuated by selective breeding. There are other examples of instinctive pointing in animal species: that of bees is particularly interesting, with the worker ‘dance’ that communicates to the hive where a food source is. This, however, can be analysed down into a sequence of instinctive automatic responses which will always take the same form in the same circumstances, showing no sign of intelligent variation. Chimpanzees can be trained to point, and show some capacity for imitating humans, but there are no known examples of their use of pointing in the wild.

But there is some recent research which suggests a counter-example to Tallis’s assertion that pointing is unknown in animals. This shows elephants responding to human pointing gestures, and it seems there is a possibility that they point spontaneously with their trunks. This rather fits with other human-like behaviour that has been observed in elephants, such as apparently grieving for their dead. Grieving, it seems to me, has something in common with pointing, in that it also implies a theory of mind; the death of another individual is not just a neutral change in the shape and pattern of your world, but the loss of another mind. It’s not surprising that, in investigating ancient remains, we take signs of burial ritual to be a potent indicator of the emergence of a sophisticated civilisation of people who are able to recognise and communicate with minds other than their own – probably the emergence of language, in fact.

Pointing in philosophy

We have looked at the emergence of pointing and language in young children; and the relation between the two has an important place in the history of philosophy. There’s a simple and intuitive notion that language is taught to a child by pointing to objects and saying the word for them – so-called ostensive definition. And it can’t be denied that this has a place. I can remember both of my children taking obvious pleasure in what was, to them, a discovery – each time they pointed to something they could elicit a name for it from their parent. In a famous passage at the start of Philosophical Investigations, Wittgenstein identifies this notion – of ostensive definition as the cornerstone of language learning – in a passage from the writings of St Augustine, and takes him to task over it. Wittgenstein goes on to show, with numerous examples, how dynamic and varied an activity the use of language is, in contrast to the monolithic and static picture suggested by Augustine (and indeed by Wittgenstein himself in his earlier incarnation). We already have our own example in the curious and unique way in which the word ‘you’ and its derivatives are used, and a sense of the stages by which children develop the ability to use it correctly.

The Boyhood of Raleigh

Perhaps the second most famous pointing finger in art: Millais’ The Boyhood of Raleigh

The passage from Augustine also suggests a notion of pointing as a primitive, primary action, needing no further explanation. However, we’ve seen how it relies on a prior set of sophisticated abilities: having the notion that oneself is distinct from the world – a world that contains other minds like one’s own, whose attention may have different contents from one’s own; grasping that it’s possible to communicate meaning by gestures which modify those contents; having an idea of how these gestures can be ‘about’ objects within the world; and recognising that there needs to be agreement on how to interpret the gestures, which aren’t always as intuitive and unambiguous as we may imagine. As Tallis rather nicely puts it, the arch of ostensive definition is constructed from these building bricks, with the pointing action as the coping stone which completes it.

The theme underlying both this and my previous post is the notion of how one thing can be ‘about’ another – the notion of intentionality. This idea is presented to us in an especially stark way when it comes to the action of pointing. In the next post I intend to approach that more general theme head-on.