iPhobia

Commuting days until retirement: 238

If you have ever spoken at any length to someone suffering from a diagnosed mental illness − depression, say, or obsessive compulsive disorder − you may have come to feel that what they are experiencing differs only in degree from your own mental life, rather than being something fundamentally different (assuming, of course, that you are lucky enough not to have been similarly ill yourself). It’s as if mental illness, for the most part, is not something entirely alien to the ‘normal’ life of the mind, but just a distortion of it. Rather than the presence of a new unwelcome intruder, it’s more that the familiar elements of mental functioning have lost their usual proportion to one another. If you spoke to someone suffering from paranoid feelings of persecution, you might just feel an echo of them in the back of your own mind: those faint impulses that are immediately squashed by the power of your ability to draw logical common-sense conclusions from what you see about you. Or perhaps you might encounter someone who compulsively and repeatedly checks that they are safe from intrusion; but we all sometimes experience that need to reassure ourselves that a door is locked, when we know perfectly well that it really is.

That uncomfortably close affinity between true mental illness and everyday neurotic tics is nowhere more obvious than with phobias. A phobia serious enough to be clinically significant can make it impossible for the sufferer to cope with everyday situations; while on the other hand nearly every family has a member (usually female, but not always) who can’t go near the bath with a spider in it, as well as a member (usually male, but not always) who nonchalantly picks the creature up and ejects it from the house. (I remember that my own parents went against these sexual stereotypes.) But the phobias I want to focus on here are those two familiar opposites − claustrophobia and agoraphobia.

We are all phobics

In some degree, virtually all of us suffer from them, and perfectly rationally so. Anyone would fear, say, being buried alive, or, at the other extreme, being launched into some limitless space without hand or foothold, or any point of reference. And between the extremes, most of us have some degree of bias one way or the other. Especially so − and this is the central point of my post − in an intellectual sense. I want to suggest that there is such a phenomenon as an intellectual phobia: let’s call it an iphobia. My meaning is not, as the Urban Dictionary would have it, an extreme hatred of Apple products, or a morbid fear of breaking your iPhone. Rather, I want to suggest that there are two species of thinkers: iagoraphobes and iclaustrophobes, if you’ll allow me such ugly words.

A typical iagoraphobe will in most cases cleave to scientific orthodoxy. Not for her the wide open spaces of uncontrolled, rudderless, speculative thinking. She’s reassured by a rigid theoretical framework, comforted by predictability; any unexplained phenomenon demands to be brought into the fold of existing theory, for any other way, it seems to her, lies madness. For the iclaustrophobe, on the other hand, it’s intolerable to be caged inside that inflexible framework. Telepathy? Precognition? Significant coincidence? Of course they exist; there is ample anecdotal evidence. If scientific orthodoxy can’t embrace them, then so much the worse for it − the incompatibility merely reflects our ignorance. To this the iagoraphobe would retort that we have no logical grounds whatever for such beliefs. If we have nothing but anecdotal evidence, we have no predictability; and phenomena that can’t be predicted can’t be falsified, so any such beliefs fall foul of the Popperian criterion of scientific validity. But why, asks the iclaustrophobe, do we have to be constrained by some arbitrary set of rules? These things are out there − they happen. Deal with it. And so the debate goes.

Archetypal iPhobics

Widening the arena more than somewhat, perhaps the archetypal iclaustrophobe was Plato. For him, the notion that what we see was all we would ever get was anathema – and he eloquently expressed his iclaustrophobic response to it in his parable of the cave. True reality, for him, was immeasurably greater than the world of our everyday existence. And of course he is often contrasted with his pupil Aristotle, for whom what we can see is, in itself, an inexhaustibly fascinating guide to the nature of our world − no further reality need be posited. Aristotle, of course, is the progenitor of the syllogism and deductive logic. In Raphael’s famous fresco The School of Athens, the relevant detail of which you see below, Plato, on the left, indicates his world of forms beyond our immediate reality by pointing heavenward, while Aristotle’s gesture emphasises the earth, and the here and now. Raphael has them exchanging disputatious glances, which for me express the hostility that exists between the opposed iphobic world-views to this day.

School of Athens

Detail from Raphael’s School of Athens in the Vatican, Rome (Wikimedia Commons)

iPhobia today

It’s not surprising that there is such hostility; I want to suggest that we are talking not of a mere intellectual disagreement, but of a situation where each side insists on a reality to which the other has a strong (i)phobic reaction. Let’s look at a specific present-day example, from within the WordPress forums. There’s a blog called Why Evolution is True, which I’d recommend as a good read. It’s written by Jerry Coyne, a distinguished American professor of biology. His title is obviously aimed principally at the flourishing belief in creationism which exists in the US − Coyne has extensively criticised the so-called Intelligent Design theory. (In my view, that controversy is not a dispute between the two iphobias I have described, but between two forms of iagoraphobia. The creationists, I would contend, are locked up in an intellectual ghetto of their own making, since venturing outside it would fatally threaten their grip on their frenziedly held, narrowly based faith.)

Jerry Coyne

Jerry Coyne (Zooterkin/Wikimedia Commons)

But I want to focus on another issue highlighted in the blog, which in this case is a conflict between the two phobias. A year or so ago Coyne took issue with the fact that the maverick scientist Rupert Sheldrake was given a platform to explain his ideas in the TED forum. Note Coyne’s use of the hate word ‘woo’, often used by the orthodox in science as an insulting reference to the unorthodox. They would defend it, mostly with justification, as characterising what is mystical or wildly speculative, and without evidential basis − but I’d claim there’s more to it than that: it’s also the iagoraphobe’s cry of revulsion.

Rupert Sheldrake

Rupert Sheldrake (Zereshk/Wikimedia Commons)

Coyne has strongly attacked Sheldrake on more than one occasion: is there anything that can be said in Sheldrake’s defence? As a scientist he has an impeccable pedigree, with a Cambridge doctorate and fellowship in biology. It seems that he developed his unorthodox ideas early in his career, central among which is his notion of ‘morphic resonance’, whereby animal and human behaviour, and much else besides, is influenced by previous similar behaviour. It’s an idea that I’ve always found interesting to speculate about − but it’s obviously also a red rag to the iagoraphobic bull. We can also mention that he has been careful to describe how his theories can be experimentally confirmed or falsified, thus claiming scientific status for them. He also invokes his ideas to explain aspects of the formation of organisms that, to date, haven’t been explained by the action of DNA. But increasing knowledge of the significance of what was formerly thought of as ‘junk DNA’ is going a long way towards filling these explanatory gaps, so Sheldrake’s position looks particularly weak here. And in his TED talks he not only defends his own ideas, but attacks many of the accepted tenets of current scientific theory.

However, I’d like to return to the debate over whether Sheldrake should be denied his TED platform. Coyne’s comments led to a reconsideration of the matter by the TED editors, who opened a public forum for discussion. The ultimate, not unreasonable, decision was to keep the talks available, but separately from the mainstream content. Coyne said he was surprised by the level of invective arising from the discussion; but I’d say this is because we have here a direct confrontation between iclaustrophobes and iagoraphobes − not merely a polite debate, but a forum where each side taunts the other with notions for which its opponents have a visceral revulsion. And it has always been so; for me the iphobia concept explains the rampant hostility which always characterises debates of this type − as if the participants are not merely facing opposed ideas, but visions which evoke in each a deeply rooted fear.

I should say at this point that I don’t claim any godlike objectivity in this matter; I’m happy to come out of the closet as an iclaustrophobe myself. This doesn’t mean in my case that I take on board any amount of New Age mumbo-jumbo; I try to exercise rational scepticism where it’s called for. But as an example, let’s go back to Sheldrake: he’s written a book about the observation that housebound dogs sometimes appear to show marked excitement at the moment their distant owner sets off to return home, although there’s no way they could have knowledge of the owner’s actions at that moment. I have no idea whether there’s anything in this − but the fact is that if it were shown to be true nothing would give me greater pleasure. I love mystery and inexplicable facts; for me they make the world a more intriguing and stimulating place. But of course Coyne isn’t the only commentator who has dismissed the theory out of hand as intolerable woo. I don’t expect this matter to be settled in the foreseeable future, if only because it would be career suicide for any mainstream scientist to investigate it.

Science and iPhobia

Why should such a course of action be so damaging to an investigator? Let’s start by putting the argument that it is desirable for such research to be eschewed by the mainstream. The success of the scientific enterprise is largely due to the rigorous methodology it has developed; progress has resulted from successive, well-founded steps of theorising and experimental testing. If scientists were to spend their time investigating every wild theory that was proposed, their efforts would become undirected and diffuse, and progress would stall. I can see the sense in this, and any self-respecting iagoraphobe would endorse it. But against this, we can argue that progress in science often results from bold, unexpected ideas that come out of the blue (some examples in a moment). While the more restrictive outlook lends coherence to the scientific agenda, it can, just occasionally, exclude valuable insights. To explain why the restrictive approach holds sway I would look at how a person’s psychological make-up might influence their career choice. Most iagoraphobes are likely to be attracted to the logical, internally consistent framework they would be working with in a scientific career, while those of an iclaustrophobic profile might be drawn in an artistic direction. Hence science’s inbuilt resistance to out-of-the-blue ideas.

Albert Einstein

Albert Einstein (Wikimedia Commons)

I may come from the iclaustrophobe camp, but I don’t want to claim that only people of that profile are responsible for great scientific innovations. Take Einstein: he may have begun with an early fantasy of riding on a light beam, but it was one which led him through rigorous mathematical steps to a vastly coherent and revolutionary conception. His essential iagoraphobia is seen in his revulsion at the notion of quantum indeterminacy − his ‘God does not play dice’. Relativity, despite being wholly novel in its time, is often spoken of as a ‘classical’ theory, in the sense that it retains the mathematical precision and predictability of the Newtonian schema which preceded it.

Niels Bohr

Niels Bohr (Wikimedia Commons)

There was a long-standing debate between Einstein and Niels Bohr, the progenitor of the so-called Copenhagen interpretation of quantum theory, which held that different sub-atomic scenarios coexisted in ‘superposition’ until an observation was made and the wave function collapsed. Bohr, it seems to me, with his willingness to entertain wildly counter-intuitive ideas, was a good example of an iclaustrophobe; so it’s hardly surprising that the debate between him and Einstein was so irreconcilable − although it’s to the credit of both that their mutual respect never faltered.

Over to you

Are you an iclaustrophobe or an iagoraphobe? A Plato or an Aristotle? A Sheldrake or a Coyne? A Bohr or an Einstein? Or perhaps not particularly either? I’d welcome comments from either side, or neither.

The Vault of Heaven

Commuting days until retirement: 250

Exeter Cathedral roof

The roof of Exeter Cathedral (Wanner-Laufer, Wikimedia Commons)

Thoughts are sometimes generated out of random conjunctions in time between otherwise unrelated events. Last week we were on holiday in Dorset, and depressing weather for the first couple of days drove us into the nearest city – Exeter, where we visited the cathedral. I had never seen it before and was more struck than I had expected to be. Stone and wood carvings created over the past 600 years decorate thrones, choir stalls and tombs, the latter bearing epitaphs ranging in tone from the stern to the whimsical. All this lies beneath the marvellous fifteenth century vaulted roof – the most extensive known of the period, I learnt. Looking at this, and the cathedral’s astronomical clock dating from the same century, I imagined myself seeing them as a contemporary member of the congregation would have, and tried to share the medieval conception of the universe above that roof, reflected in the dial of the clock.

Astronomical Clock

The Astronomical Clock at Exeter Cathedral (Wikimedia Commons)

The other source of these thoughts was the book I happened to have finished that day: Max Tegmark’s Our Mathematical Universe*. He’s an MIT physics professor who puts forward the view (previously also hinted at in this blog) that reality is at bottom simply a mathematical object. He admits that it’s a minority view, scoffed at by many of his colleagues – but I have long felt a strong affinity for the idea. I have reservations about some aspects of the Tegmark view of reality, but not about one of its central planks – the belief that we live in one universe among a host of others. Probably to most people the thought is just a piece of science fiction fantasy – and it has certainly been exploited for all it’s worth by fiction authors in recent years. But in fact it is steadily gaining traction among professional scientists and philosophers as a true description of the universe – or rather the multiverse, as it’s usually called in this context.

Nowadays there is a whole raft of differing notions of a multiverse, each deriving from separate theoretical considerations. Tegmark combines four different ones in the synthesis he presents in the book. But I think I am right in saying that the first time such an idea appeared in anything like a mainstream scientific context was in the PhD thesis of a 1950s student at Princeton in the USA – Hugh Everett.

The thesis appeared in 1957; its purpose was to present an alternative treatment of the quantum phenomenon known as the collapse of the wave function. A combination of theoretical and experimental results had come together to suggest that subatomic particles (or waves – the duality was a central idea here) existed as a cloud of possibilities, until interacted with, or observed. The position of an electron, for example, could be defined with a mathematical function – the wave function of Schrödinger – which assigned only a probability to each putative location. If, however, we were to put this to the test – to measure its location in practice – we would have to do so by means of some interaction, and the answer that came back would be one specific position among the cloud of possibilities. By carrying out such procedures repeatedly, it was shown that the probability of any specific result was given by the wave function. The approach to these results which became most widely accepted was the so-called ‘Copenhagen interpretation’ of Bohr and others, which held that all the possible locations co-existed in ‘superposition’ until the measurement was made and the wave function ‘collapsed’. Hence some of the more famous statements about the quantum world: Einstein’s dissatisfaction with the idea that ‘God plays dice’; and Schrödinger’s well-known thought experiment designed to test the Copenhagen interpretation to destruction – the cat which is presumed to be simultaneously dead and alive until its containing box is opened and the result determined.
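For anyone who likes to see the bones of such a description, the standard textbook formalism can be put in a couple of lines – my own addition here, not a quotation from Tegmark or Everett. A quantum state is a weighted sum of possible outcomes, and the Born rule turns the weights into the probabilities that repeated measurements reveal:

```latex
% A state |psi> is a superposition of possible outcomes |x_i>,
% each weighted by a complex amplitude c_i:
\[
\lvert\psi\rangle = \sum_i c_i \,\lvert x_i\rangle ,
\qquad
\sum_i \lvert c_i \rvert^2 = 1
\]
% The Born rule: the probability of observing outcome x_i
% is the squared magnitude of its amplitude:
\[
P(x_i) = \lvert c_i \rvert^2
\]
```

On the Copenhagen view, a measurement ‘collapses’ the sum to a single term; on Everett’s view, as we’re about to see, no term is ever discarded.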

Everett proposed that there was no such thing as the collapse of the wave function. Rather, each of the possible outcomes was represented in one real universe; it was as if the universe ‘branched’ into a number of equally real versions, and you, the observer, found yourself in just one of them. Of course, it followed that many copies of you each found themselves in slightly different circumstances, unlike the unfortunate cat, which presumably experienced only those universes in which it lived. Needless to say, although Everett’s ideas were encouraged at the time by a handful of colleagues (Bryce DeWitt, John Wheeler), they were regarded for many years as a scientific curiosity and not taken further. Everett himself moved away from theoretical physics and into practical technology, later developing an enjoyment of programming. He smoked and drank heavily and became obese, dying at the age of 51. Tegmark implies that this was at least partly a result of his neglect by the theoretical physics community – but there’s also evidence that his choices of career path and lifestyle derived from his natural inclinations.

During the last two decades of the 20th century, however, the multiverse idea began to be taken more seriously, and had some enthusiastic proponents such as the British theorist David Deutsch and indeed Tegmark himself. In his book, Tegmark cites a couple of straw polls he took among theoretical physicists attending talks he gave, in 1997 and again in 2010. In the first case, out of 48 responses, 13 endorsed the Copenhagen interpretation, and 8 the multiverse idea. (The remainder were mostly undecided, with a few endorsing alternative approaches.) In 2010 there were 35 respondents, of whom none at all went for Copenhagen, and 16 for the multiverse. (Undecideds remained about the same – from 18 to 16.) This seems to show a decisive rise in support for multiple universes; although I do wonder whether it also reflects which physicists were prepared to attend Tegmark’s talks, his views having become better known by 2010. It so happens that the drop in respondent numbers – 13 – is the same as the disappearing support for the Copenhagen interpretation.

Nevertheless, it’s fair to say that the notion of a multiple universe as a reality has now entered the mainstream of theoretical science in a way that it had not done half a century ago. There’s an argument, I thought as I looked at that cathedral roof, that cosmology has been transformed even more radically in my lifetime than it had been in the preceding 500 years. The skill of the medieval stonemasons as they constructed the multiple rib vaults, and the wonder of the medieval congregation as they marvelled at the completed roof, were consciously directed to the higher vault of heaven that overarched the world of their time. Today those repeated radiating patterns might be seen as a metaphor for the multiple worlds that we are, perhaps, beginning dimly to discern.


*Tegmark, Max, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Allen Lane/Penguin, 2014

Read All About It (Part 2)

Commuting days until retirement: 285

You’ll remember, if you have paid me the compliment of reading my previous post, that we started with that crumbling copy of the works of Shakespeare, incongruously finding itself on the moon. I diverged from the debate that I had inherited from my brother and sister-in-law, to discuss what this suggested regarding ‘aboutness’, or intentionality. But now I’m going to get back to what their disagreement was. The specific question at issue was this: was the value – the intrinsic merit we ascribe to the contents of that book – going to be locked within it for all time and all places, or would its value perish with the human race, or indeed wither away as a result of its remote location? More broadly, is value of this sort – literary merit – something absolute and unchangeable, or a quality which exists only in relation to the opinion of certain people?

I went on to distinguish between ‘book’ as physical object in time and space, and ‘book’ regarded as a collection of ideas and their expression in language, and not therefore entirely rooted in any particular spatial or temporal location. It’s the latter, the abstract creation, to which we ascribe value. So immediately it looks as if the location of this particular object is neither here nor there, and the belief in absolutism gains support. If a work we admire is great regardless of where it is in time or space, then surely it is great for all times and all places?

But then, in looking at the quality of ‘aboutness’, or intentionality, we concluded that nothing possessed it except by virtue of being created by – or understood by – a conscious being such as a human. So, if a work can derive intentionality only through the cognition of human beings, it looks as if the same is true for literary merit, and we seem to have landed in a relativist position. On this view, to assert that something has a certain value is only to express an opinion, my opinion; if you like, it’s more a statement about me than about the work in question. Any idea of absolute literary merit dissolves away, to be replaced by a multitude of statements reflecting only the dispositions of individuals. And of course there may be as many opinions of a piece of work as readers or viewers – and perhaps more, given changes over time. Which isn’t to mention the creator herself or himself; anyone who has ever attempted to write anything with more pretensions than an email or a postcard will know how a writer’s opinion of their own work ricochets feverishly between self-satisfaction and despair.

The dilemma: absolute or relative?

How do we reconcile these two opposed positions, each of which seems to flow from one of the conclusions in Part 1? I want to try and approach this question by way of a small example; I’m going to retrieve our Shakespeare from the moon and pick out a small passage. This is from near the start of Hamlet; it’s the ghost of Hamlet’s father speaking, starting to convey to his son a flavour of the evil that has been done:

I could a tale unfold whose lightest word
Would harrow up thy soul, freeze thy young blood,
Make thy two eyes, like stars, start from their spheres,
Thy knotted and combined locks to part
And each particular hair to stand on end,
Like quills upon the fretful porpentine.

This conveys a message very similar to something you’ll have heard quite often if you watch TV news:

This report contains scenes which some viewers may find upsetting.

So which of these two quotes has more literary value? Obviously a somewhat absurd example, since one is a piece of poetry that’s alive with fizzing imagery, and the other a plain statement with no poetic pretensions at all (although I would find it very gratifying if BBC newsreaders tried using the former). The point I want to make is that, in the first place, a passage will qualify as poetry through its use of the techniques we see here – imagery contributing to the subtle rhythm and shape of the passage, culminating in the completely unexpected and almost comical image of the porcupine.

Of course much poetry will try to use these techniques, and opinion will usually vary on how successful it is – on whether the poetry is good, bad or indifferent. And of course each opinion will depend on its owner’s prejudices and previous experiences; there’s a big helping of relativism here. But when it happens that a body of work, like the one I have taken my example from, becomes revered throughout a culture over a long period of time – well, it looks as if we have something like an absolute quality here. Particularly so, given that the plays have long been popular, even in translation, across many cultures.

Britain’s Royal Shakespeare Company has recently been introducing his work to primary school children from the age of five or so, and has found that they respond to it well, despite (or maybe because of) the complex language (a report here). I can vouch for this: one of the reasons I chose the passage I did was that I can remember quoting it to my son when he was around that age, and he loved it, being particularly taken with the ‘porpentine’.

So when something appeals to young, unprejudiced children, there’s certainly a case for claiming that it reflects the absolute truth about some set of qualities possessed by our race. You may object that I am missing the point of consigning Shakespeare to the moon – that it would be nothing more than a puzzle to some future civilisation, human-descended or otherwise, and therefore of only relative value. Well, in the last post I brought in the example of the forty thousand year old Spanish cave art, which I’ve reproduced again here.

Cave painting

A 40,000 year old cave painting in the El Castillo Cave in Puente Viesgo, Spain (www.spain.info)

In looking at this, we are in very much the same position as those future beings who are ignorant of Shakespeare. Here’s something whose meaning is opaque to us, and if we saw it transcribed on to paper we might dismiss it as the random doodlings of a child. But I argued before that there are reasons to suppose it was of immense significance to its creators. And if so, it may represent some absolute truth about them. It’s valuable to us as it was valuable to them – though admittedly in our case for rather different reasons. But there’s a link – we value it, I’d argue, because they did. The fact that we are ignorant of what it meant to them does not render it of purely relative value; it goes without saying that there are many absolute truths about the universe of which we are ignorant. And one of them is the significance of that painting for its creators.

We live in a disputatious age, and people are now much more likely to argue that any opinion, however widely held, is merely relative. (Although the view that any opinion is relative sounds suspiciously absolute.) The BBC has a long-running radio programme of which most people will be aware, called Desert Island Discs. After choosing the eight records they would want to have with them on a lonely desert island, guests are invited to select a single book, “apart from Shakespeare and the Bible, which are already provided”. Given this permanent provision, many people find the programme rather quaint and out of touch with the modern age. But of course when the programme began, even more people than now would have chosen one of those items if it were not provided. They have been, if you like, the sacred texts of Western culture, our myths.

A myth, as is often pointed out, is not simply an untrue story, but expresses truth on a deeper level than its surface meaning. Many of Shakespeare’s plots are derived from traditional, myth-like stories, and I don’t need to rehearse here any of what has been said about the truth content of the Bible. It will be objected, of course, that since fewer people would now want these works for their desert island, there is a strong case for believing that the sacred, or not-so-sacred, status of the works is a purely relative matter. Yes – but only to an extent. There’s no escaping their central position in the history and origins of our culture. Thinking of that crumbling book, as it nestles in the lunar dust, it seems to me that it contains – if in a rather different way – some of the absolute truths about the universe that are also to be found in the chemical composition of the dust around it. Maybe those future discoverers will be able to decode one but not the other; but that is a fact about them, and not about the Shakespeare.

(Any comments supporting either absolutism or relativism welcome.)

Read All About It (part 1)

Commuting days until retirement: 300

Imagine a book. It’s a thick, heavy, distinguished looking book, with an impressive tooled leather binding, gilt-trimmed, and it has marbled page edges. A glance at the spine shows it to be a copy of Shakespeare’s complete works. It must be like many such books to be found on the shelves of libraries or well-to-do homes around the world at the present time, although it is not well preserved. The binding is starting to crumble, and much of the gilt lettering can no longer be made out. There’s also something particularly unexpected about this book, which accounts for the deterioration. Let your mental picture zoom out, and you see, not a set of book-laden shelves, or a polished wood table bearing other books and papers, but an expanse of greyish dust, bathed in bright, harsh light. The lower cover is half buried in this dust, to a depth of an inch or so, and some is strewn across the front, as if the book had been dropped or thrown down. Zoom out some more, and you see a rocky expanse of ground, stretching away to what seems like a rather close, sharply defined horizon, separating this desolate landscape from a dark sky.

Yes, this book is on the moon, and it has been the focus of a long-standing debate between my brother and sister-in-law. I had vaguely remembered one of them mentioning this some years back, and thought it would be a way in to this piece on intentionality, a topic I have been circling around warily in previous posts. To clarify: books are about things – in fact our moon-bound book is about most of the perennial concerns of human beings. What is it that gives books this quality of ‘aboutness’ – or intentionality? When all’s said and done our book boils down to a set of inert ink marks on paper. Placing it on the moon, spatially, and perhaps temporally, distant from human activity, leaves us with the puzzle of how those ink marks reach out across time and space to hook themselves into that human world. And if it had been a book ‘about’, say, physics or astronomy, that reach would have been, at least in one sense, wider.

Which problem?

Well, I thought that was what my brother and sister-in-law had been debating when I first heard about it; but when I asked them it turned out that what they’d been arguing about was the question of literary merit, or more generally, intrinsic value. The book contains material that has been held in high regard by most of humanity (except perhaps GCSE students) for hundreds of years. At some distant point in space and time, perhaps after humanity has disappeared, does that value survive, contained within it, or is it entirely dependent upon who perceives and interprets it?

Two questions, then – let’s refer to them as the ‘aboutness’ question and the ‘value’ question. Although the value question wasn’t originally within the intended scope of this post, it might be worth trying to tease out how far each question might shed light on the other.

What is a book?

First, an important consideration which I think has a bearing on both questions – and which may have occurred to you already. The term ‘book’ has at least two meanings. “Give me those books” – the speaker refers to physical objects, of the kind I began the post with. “He’s written two books” – there may of course be millions of copies of each, but these two books are abstract entities which may or may not have been published. Some years back I worked for a small media company whose director was wildly enthusiastic about the possibilities of IT (that was my function), but somehow he could never get his head around the concepts involved. When we discussed some notional project, he would ask, with an air of addressing the crucial point, “So will it be a floppy disk, or a CD-ROM?” (I said it was a long time ago.) In vain I tried to get it across to him that the physical instantiation, or the storage medium, was a very secondary matter. But he had a need to imagine himself clutching some physical object, or the idea would not fly in his mind. (I should have tried to explain by using the book example, but never thought of it at the time.)

So with this in mind, we can see that the moon-bound Shakespeare is what is sometimes called in philosophy an ‘intuition pump’ – an example intended to get us thinking in a certain way, but perhaps misleadingly so. This has particular importance for the value question, it seems to me: what we value is a set of ideas and modes of expression, not some object. And so its physical, or temporal, location is not really relevant. We could object that there are cases where this doesn’t apply – what about works of art? An original Rembrandt canvas is a revered object; but if it were to be lost it would live on in its reproductions, and, crucially, in people’s minds. Its loss would be sharply regretted – but so, to an extent, would the loss of a first folio edition of Shakespeare. The difference is that for the Rembrandt, direct viewing is the essence of its appreciation, while we lose nothing from Shakespeare when watching, listening or reading, if we are not in the presence of some original artefact.

Value, we might say, does not simply travel around embedded in physical objects, but depends upon the existence of appreciating minds. This gives us a route into examination of the value question – but I’m going to put that aside for the moment and return to good old ‘aboutness’ – since these thoughts also give us some leverage for developing our ideas there.

…and what is meaning?

So are we to conclude that our copy of Shakespeare itself, as it lies on the moon, has no intrinsic connection with anything of concern or meaning to us? Imagine that some disaster eliminated human life from the earth. Would the book’s links to the world beyond be destroyed at the same time, the print on its pages suddenly reduced to meaningless squiggles?  This is perhaps another way in which we are misled by the imaginary book.

Cave painting

A 40,000 year old cave painting in the El Castillo Cave in Puente Viesgo, Spain (www.spain.info)

Think of prehistoric cave paintings which have persisted, unseen, thousands of years after the deaths of those for whom they were particularly meaningful. Eventually they are found by modern men who rediscover some meaning in them. Many of them depict recognisable animals – perhaps a food source for the people of the time; and as representational images their central meaning is clear to us. But of course we can only make educated guesses at the cloud of associations they would have had for their creators, and their full significance in their culture. And other ancient cave wall markings have been discovered which are still harder to interpret – strange abstract patterns of dots and lines (see above). What’s interesting is that we can sense that there seems to have been some sort of purpose in their creation, without having any idea what it might have been.

Luttrell Psalter

A detail from the Luttrell Psalter (British Library)

Let’s look at a more recent example: the marvellous Luttrell Psalter, a 14th century illuminated manuscript now in the British Library. (You can view it in wonderful detail by going to the British Library’s Turning the Pages application.) It’s a psalter, written in Latin, and so the subject matter is still accessible to us. Of more interest are the illustrations around the text – images showing a whole range of activities we can recognise, but as they were carried on in the medieval world. This of course is a wonderful primary historical source, but it’s also more than that. Alongside the depiction of these activities is a wealth of decoration, ranging from simple flourishes to all sorts of fantastical creatures and human-animal hybrids. Some may be symbols which no longer have meaning in today’s culture, and others perhaps just jeux d’esprit on the part of the artist. It’s mostly impossible now for us to distinguish between these.

Think also of the ‘authenticity’ debate in early music that I mentioned in Words and Music a couple of posts back. The full, authentic effect of a piece of music composed some hundreds of years ago, so one argument goes, could only be experienced as the composer intended if the audience were also of his time. Indeed, even today’s music, of any genre, will have different associations for, and effects on, a listener depending on their background and experience. And it’s quite common now for artists, conceptual or otherwise, to eschew any overriding purpose as to the meaning of their work, and instead to intend each person to interpret it in his or her own idiosyncratic way.

Rather too many examples, perhaps, to illustrate the somewhat obvious point that meaning is not an intrinsic property of inert symbols, such as the printed words in our lunar Shakespeare. In transmitting their sense and associations from writer to reader the symbols depend upon shared knowledge, cultural assumptions and habits of thought; something about the symbols, or images, must be recognisable by both creator and consumer. When this is not the case we are just left with a curious feeling, as when looking at that abstract cave art. We get a strong sense of meaning and intention, but the content of the thoughts behind it is entirely unknown to us. Perhaps some unthinkably different aliens will have the same feeling on finding the Voyager robot spacecraft, which was sent on its way with some basic information about the human race and our location in the galaxy. Looking at the cave patterns we can detect that information is present – but meaning is more than just information. Symbols comprise the latter without intrinsically containing the former; otherwise we’d be able to know what those cave patterns signified.

Physical signs can’t embody meaning of themselves, apart from the creator and the consumer, any more than a saw can cut wood without a carpenter to wield it. Tool use, indeed, in early man or advanced animals, is an indicator of intentionality – the ability to form abstract ‘what if’ concepts about what might be done, before going ahead and doing it. A certain cinematic moment comes to mind: in Kubrick’s 2001: A Space Odyssey, the bone wielded as a tool by the primate creature in the distant past is thrown into the air, and cross-fades into a spaceship of the 21st century.

Here be dragons

Information theory developed during the 20th century, and is behind all the advances of the period in computing and communications. Computers are like the examples of symbols we have looked at: the states of their circuits and storage media contain symbolic information but are innocent of meaning. Which thought, it seems to me, leads us to the heart of the perplexity around the notion of aboutness, or intentionality. Brains are commonly thought of as sophisticated computers of a sort, which to some extent at least they must be. So how come that when, in a similar sort of way, information is encoded in the neurochemical states of our brains, it is magically invested with meaning? In his well-known book A Brief History of Time, Stephen Hawking uses a compelling phrase when reflecting on the possibility of a universal theory. Such a theory would be “just a set of rules and equations”. But, he asks,

What is it that breathes fire into the equations and makes a universe for them to describe?

I think that, in a similar spirit, we have to ask: what breathes fire into our brain circuits to add meaning to their information content?

The Chinese Room

If you’re interested enough to have come this far with me, you will probably know about a famous philosophical thought experiment which serves to support the belief that my question is indeed a meaningful and legitimate one – John Searle’s ‘Chinese Room’ argument. But I’ll explain it briefly anyway; skip the next paragraph if you don’t need the explanation.

Chinese Room

A visualisation of John Searle inside the Chinese Room

Searle imagines himself cooped up in a rather bizarre room where he can only communicate with the outside world by passing and receiving notes through an aperture. Within the room he is equipped only with an enormous card filing system containing a set of Chinese characters and rules for manipulating them. He has Chinese interlocutors outside the room, who pass in pieces of paper bearing messages in Chinese. Unable to understand Chinese, he goes through a cumbersome process of matching and manipulating the Chinese symbols using his filing system. Eventually this process yields a series of characters as an answer, which are transcribed on to another piece of paper and passed back out. The people outside (if they are patient enough) get the impression that they are having a conversation with someone inside the room who understands and responds to their messages. But, as Searle says, no understanding is taking place inside the room. As he puts it, it deals with syntax, not semantics, or in the terms we have been using, symbols, not meaning. Searle’s purpose is to demolish the claims of what he calls ‘strong AI’ – the claim that a computer system with this sort of capability could truly understand what we tell it, as judged from its ability to respond and converse. The Chinese Room could be functionally identical to such a system (only much slower) but Searle is demonstrating that it is devoid of anything that we could call understanding.
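The point can be made concrete with a deliberately trivial sketch – my own toy example, not anything of Searle’s, with a made-up rule book a few entries long where his room would need millions. The little program below holds up its end of a very limited conversation by pure pattern-matching; there is plainly nothing in it that could be said to understand:

```python
# A toy 'Chinese Room': pure symbol manipulation, no semantics.
# The rule book pairs input patterns with canned responses; the
# program 'knows' nothing about what any of the symbols mean.

RULE_BOOK = {
    "how are you?": "Very well, thank you.",
    "what is your name?": "I'd rather not say.",
    "is it raining?": "I can't see outside from in here.",
}

def room_reply(message: str) -> str:
    """Match the incoming note against the rule book and pass back
    the corresponding response - syntax only, in Searle's terms."""
    return RULE_BOOK.get(message.strip().lower(),
                         "Please rephrase the question.")

if __name__ == "__main__":
    for note in ["How are you?", "Is it raining?", "Why is the sky blue?"]:
        print(note, "->", room_reply(note))
```

Scale the rule book up far enough and, from the outside, the replies might pass for conversation; but nothing would have been added except more rules.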

If you have an iPhone you’ll probably have used an app called ‘Siri’ which has just this sort of capability – and there are equivalents on other types of phone. When combined with the remote server that it communicates with, it can come up with useful and intelligent answers to questions. In fact, you don’t have to try very hard to make it come up with bizarre or useless answers, or flatly fail. But that’s just a question of degree – no doubt future versions will be more sophisticated. We might loosely say that Siri ‘understands’ us – but of course it’s really just a rather more efficient Chinese Room. Needless to say, Searle’s argument has generated years of controversy. I’m not going to enter into that debate, but will just say that I find the argument convincing; I don’t think that Siri can ‘understand’ me.

So if we think of understanding as the ‘fire’ that’s breathed into our brain circuits, where does it come from? Think of the experience of reading a gripping novel. You may be physically reading the words, but you’re not aware of it. ‘Understanding’ is hardly an issue, in that it goes without saying. More than understanding, you are living the events of the novel, with a succession of vivid mental images. Another scenario: you are a parent, and your child comes home from school to tell you breathlessly about some playground encounter that day – maybe positive or negative. You are immediately captivated, visualising the scene, maybe informed by memories of your own school experiences. In both of these cases, what you are doing is not really to do with processing information – that’s just the stimulus that starts it all off. You are experiencing – the information you recognise has kicked off conscious experiences; and yes, we are back with our old friend consciousness.

Understanding and consciousness

Searle also links understanding to consciousness; his position, as I understand it, is that consciousness is a specifically biological function, not to be found in clever artefacts such as computers. But he insists that it’s purely a function of physical processes nonetheless – and I find it difficult to understand this view. If biologically evolved creatures can produce consciousness as a by-product of their physical functioning, how can he be so sure that computers cannot? He could be right, but it seems to be a mere dogmatic assertion. I agree with him that you can’t have meaning – and hence intentionality – without consciousness. And although he denies it, he leaves open the possibility that a computer (and thus, presumably, the Chinese Room as a whole) could be conscious. But he does have going for him the immense implausibility of that idea.

Dog

How much intentionality?

So does consciousness automatically bring intentionality with it? In my last post I referred to a dog’s inability to understand or recognise a pointing gesture. We assume that dogs have consciousness of some sort – in a simpler form, they have some of the characteristics which lead us to assume that other humans like ourselves have it. But try thinking yourself for a moment into what it might be like to inhabit the mind of a dog. Your experiences consist of the here and now (as ours do) but probably not a lot more. There’s no evidence that a dog’s awareness of the past consists of more than simple learned associations of a Pavlovian kind. They can recognise ‘walkies’, but it seems a mere trigger for a state of excitement, rather than a gateway to a rich store of memories. And they don’t have the brain power to anticipate the future. I know some dog owners might dispute these points – but even if a dog’s awareness extends beyond ‘is’ to ‘was’ and ‘will be’, it surely doesn’t include ‘might be’ or ‘could have been’. Add to this the dog’s inability to use offered information to infer that the mind of another individual contains a truth about the world that hitherto has not been in its own mind (i.e. the ability to understand pointing – see the previous post) and it starts to become clearer what is involved in intentionality. Mere unreflective experiencing of the present moment doesn’t lead to the notion of the objects of your thought, as distinct from the thought itself. I don’t want to offend dog-owners – maybe their pets’ abilities extend beyond that; but there are certainly other creatures – conscious ones, we assume – who have no such capacity.

So intentionality requires consciousness, but isn’t synonymous with it: in the jargon, consciousness is necessary but not sufficient for intentionality. As hinted earlier, the use of tools is perhaps the simplest indicator of what is sufficient – the ability to imagine how something could be done, and then to take action to make it a reality. And the earliest surviving evidence from prehistory of something resembling a culture is taken to be the remains of ancient graves, where objects surrounding a body indicate that thought was given to the body’s destiny – in other words, there was a concept of what may or may not happen in the future. It’s with these capabilities, we assume, that consciousness started to co-exist with the mental capacity which made intentionality possible.

So some future civilisation, alien or otherwise, finding that Shakespeare volume on the moon, will have similar thoughts to those that we would have on discovering the painted patterns in the cave. They’ll conclude that there were beings in our era who possessed the capacity for intentionality, but they won’t have the shared experience which would enable them to deduce what the printed symbols are about. And, unless they have come to understand better than we do what the nature of consciousness is, they won’t have any better idea what the ultimate nature of intentionality is.

The value of what they would find is another question, which I said I would return to – and will. But this post is already long enough, and it’s too long since I last published one – so I’ll deal with that topic next time.

A Few Pointers

Commuting days until retirement: 342

Michelangelo's finger

The act of creation, in the detail from the Sistine Chapel ceiling which provides Tallis’s title

After looking in the previous post at how certain human artistic activities map on to the world at large, let’s move our attention to something that seems much more primitive. Primitive, at any rate, in the sense that most small children become adept at it before they develop any articulate speech. This post is prompted by a characteristically original book by Raymond Tallis I read a few years back – Michelangelo’s Finger. Tallis shows how pointing is a quintessentially human activity, depending on a whole range of capabilities that are exclusive to humans. In the first place, it could be thought of as a language in itself – Pointish, as Tallis calls it. But aren’t pointing fingers, or arrows, obvious in their meaning, and capable of only one interpretation? I’ve thought of a couple of examples to muddy the waters a little.

Pointing – but which way?

TV aerial

Which way does it point?

The first is perhaps a little trivial, even silly. Look at this picture of a TV aerial. If asked where it is pointing, you would say the TV transmitter, which will be in the direction of the thin end of the aerial. But if we turn it sideways, as I’ve done underneath, we find what we would naturally interpret as an arrow pointing in the opposite direction. It seems that our basic arrow understanding is weakened by the aerial’s appearance and overlaid by other considerations, such as a sense of how TV aerials work.

My second example is something I heard about which is far more profound and interesting, and deliciously counter-intuitive. It has to do with the stages by which a child learns language, and also with signing, as used by deaf people. Two facts are needed to explain the context. The first is that, as you may know, sign language is not a mere substitute for spoken language, but is itself a language in every sense. This can be demonstrated in numerous ways: for example, conversing in sign has been shown to use exactly the same area of the brain as does the use of spoken language. And, unlike those rare and tragic cases where a child is not exposed to language in early life, and consequently never develops a proper linguistic capability, young children exposed only to sign language at this age are not similarly handicapped. Generally, for most features of spoken language, equivalents can be found in signing. (To explore this further, you could try Oliver Sacks’ book Seeing Voices.) The second fact concerns conventional language development: at a certain stage, many children, hearing themselves referred to as ‘you’, come to think of ‘you’ as a name for themselves, and start to call themselves ‘you’; I remember both my children doing this.

And so here’s the payoff: in most forms of sign language, the word for ‘you’ is to simply point at the person one is speaking to. But children who are learning signing as a first language will make exactly the same mistake as their hearing counterparts, pointing at the person they are addressing in order to refer to themselves. We could say, perhaps, that they are still learning the vocabulary of Pointish. The aerial example didn’t seem very important, as it merely involved a pointing action that we ascribe to a physical object. Of course the object itself can’t have an intention; it’s only a human interpretation we are considering, which can work either way. This sign language example is more surprising because the action of pointing – the intention – is a human one, and in thinking of it we implicitly transfer our consciousness into the mind of the pointer, and attempt to get our head around how they can make a sign whose meaning is intuitively obvious to us, but intend it in exactly the opposite sense.

What’s involved in pointing?

Tallis teases out how pointing relies on a far more sophisticated set of mental functions than it might seem to involve at first sight. As a first stab at demonstrating this, there is the fact that pointing, either the action or the understanding of it, appears to be absent in animals – Tallis devotes a chapter to this. He describes a slightly odd-feeling experience which I have also had, when throwing a stick for a dog to retrieve. The animal is often in a high state of excitement and distraction at this point, and dogs do not have very keen sight. Consequently it often fails to notice that you have actually thrown the stick, and continues to stare at you expectantly. You point vigorously with an outstretched arm: “It’s over there!” Intuitively, you feel the dog should respond to that, but of course it just continues to watch you even more intensely, and you realise that it simply has no notion of the meaning of the gesture – no notion, in fact, of ‘meaning’ at all. You may object that there is a breed of dog called a Pointer, because it does just that – points. But let’s just examine for a moment what pointing involves.

Primarily, in most cases, the key concept is attention: you may want to draw the attention of another to something. Or maybe, if you are creating a sign with an arrow, you may be indicating by proxy where others should go, on the assumption that they have a certain objective. Attention, objective: these are mental entities which we can only ascribe to others if we first have a theory of mind – that is, if we have already achieved the sophisticated ability to infer that others have minds, and a private world, like our own. Young children will normally start to point before they have very much speech (as opposed to language – understanding develops in advance of expression). It’s significant that autistic children usually don’t show any pointing behaviour at this stage. Lack of insight into the minds of others – an under-developed theory of mind – is a defining characteristic of autism.

So, returning to the example of the dog, we can take it that for an animal to show genuine pointing behaviour, it must have a developed notion of other minds, which seems unlikely. The action of the Pointer dog looks more like instinctive behaviour, evolved through the cooperation of packs and accentuated by selective breeding. There are other examples of instinctive pointing in animal species: that of bees is particularly interesting, with the worker ‘dance’ that communicates to the hive where a food source is. This, however, can be analysed down into a sequence of instinctive automatic responses which will always take the same form in the same circumstances, showing no sign of intelligent variation. Chimpanzees can be trained to point, and show some capacity for imitating humans, but there are no known examples of their use of pointing in the wild.

But there is some recent research which suggests a counter-example to Tallis’s assertion that pointing is unknown in animals. This shows elephants responding to human pointing gestures, and it seems there is a possibility that they point spontaneously with their trunks. This rather fits with other human-like behaviour that has been observed in elephants, such as apparently grieving for their dead. Grieving, it seems to me, has something in common with pointing, in that it also implies a theory of mind; the death of another individual is not just a neutral change in the shape and pattern of your world, but the loss of another mind. It’s not surprising that, in investigating ancient remains, we take signs of burial ritual to be a potent indicator of the emergence of a sophisticated civilisation of people who are able to recognise and communicate with minds other than their own – probably the emergence of language, in fact.

Pointing in philosophy

We have looked at the emergence of pointing and language in young children; and the relation between the two has an important place in the history of philosophy. There’s a simple and intuitive notion that language is taught to a child by pointing to objects and saying the word for them – so-called ostensive definition. And it can’t be denied that this has a place. I can remember both of my children taking obvious pleasure in what was, to them, a discovery – that each time they pointed to something they could elicit a name for it from their parent. In a famous passage at the start of Philosophical Investigations, Wittgenstein identifies this notion – of ostensive definition as the cornerstone of language learning – in a passage from the writings of St Augustine, and takes him to task over it. Wittgenstein goes on to show, with numerous examples, how dynamic and varied an activity the use of language is, in contrast to the monolithic and static picture suggested by Augustine (and indeed by Wittgenstein himself in his earlier incarnation). We already have our own example in the curious and unique way in which the word ‘you’ and its derivatives are used, and a sense of the stages by which children develop the ability to use it correctly.

The Boyhood of Raleigh

Perhaps the second most famous pointing finger in art: Millais’ The Boyhood of Raleigh

The passage from Augustine also suggests a notion of pointing as a primitive, primary action, needing no further explanation. However, we’ve seen how it relies on a prior set of sophisticated abilities: having the notion that oneself is distinct from the world – a world that contains other minds like one’s own, whose attention may have different contents from one’s own; that it’s possible to communicate meaning by gestures to modify those contents; an idea of how these gestures can be ‘about’ objects within the world; and that there needs to be agreement on how to interpret the gestures, which aren’t always as intuitive and unambiguous as we may imagine. As Tallis rather nicely puts it, the arch of ostensive definition is constructed from these building bricks, with the pointing action as the coping stone which completes it.

The theme underlying both this and my previous post is the notion of how one thing can be ‘about’ another – the notion of intentionality. This idea is presented to us in an especially stark way when it comes to the action of pointing. In the next post I intend to approach that more general theme head-on.

Words and Music

Commuting days until retirement: 360

This piece was kicked off by some comments I heard from the poet Sean O’Brien on a recent radio programme. Speaking on the BBC’s Private Passions, where guests choose favourite music, and talking about Debussy, he said:

Poetry is always envious of music because that’s what poetry wants to be, whereas music has no need to be poetry. So [poets] are always following in the wake of music – but the two come quite close in sensibility here, it seems to me.

Discuss. Well, for me this immediately brought to mind a comment I once heard attributed to the composer Mendelssohn, to the effect that ‘music is not too vague for words, but too precise.’ So I went and did some Googling, and here’s the original quote:

People often complain that music is too ambiguous, that what they should think when they hear it is so unclear, whereas everyone understands words. With me, it is exactly the opposite, and not only with regard to an entire speech but also with individual words. These, too, seem to me so ambiguous, so vague, so easily misunderstood in comparison to genuine music, which fills the soul with a thousand things better than words. The thoughts which are expressed to me by music that I love are not too indefinite to be put into words, but on the contrary, too definite.

(Source: Wikiquote, where you can also see the original German.)

These two statements seem superficially similar – are they saying the same thing, from different standpoints? One difference, of course, is that O’Brien is specifically talking of poetry rather than words in general. In comparison with prose, poetry shares more of the characteristics of music: it employs rhythm, cadence, phrasing and repetition, and is frequently performed rather than read from the page; and music of course is virtually exclusively a performance art. So that’s a first thought about how poetry departs from the prosaic and finds itself approaching the territory inhabited by music.

But what is the precision which Mendelssohn insists is music’s preserve? It’s true that music is entirely bound up with the mathematics of sound: the frequency ratios in the intervals between notes. It was this that gave Pythagoras and others in antiquity their obsession with the mystique of numbers. A musician need not be grounded deeply in mathematical theory, but he or she will always be intensely aware of the differing characters of musical intervals – as is anyone who enjoys music at all, if perhaps more indirectly.
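To make those ratios concrete – this is a standard bit of music theory, not anything from Mendelssohn’s letter – the intervals we hear as consonant correspond to small whole-number frequency ratios, and the arithmetic famously refuses to close up:

\[
\frac{f_{\text{octave}}}{f} = \frac{2}{1}, \qquad
\frac{f_{\text{fifth}}}{f} = \frac{3}{2}, \qquad
\frac{(3/2)^{12}}{2^{7}} = \frac{531441}{524288} \approx 1.0136
\]

Twelve perfect fifths overshoot seven octaves by that last small factor, the Pythagorean comma – the reason no tuning system can make every interval pure at once, and exactly the sort of numerical mystery that gripped the Pythagoreans.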

I haven’t read a lot of theorising on this topic, but it seems to me that there is a strong link here with everyday speech. In my own language, as I should imagine in most others, pitch and phrasing indicate the emotional register of what you are saying, and hence an important element of the meaning. One can imagine this evolving from the less articulate cries of our ancestors, with an awareness of pitch intervals developing as a method of communication, fostering group cohesion and hence conferring Darwinian survival value. Indeed, in some languages, such as Chinese, intonation can be integral to the meaning of a word. And there’s scientific evidence of the closeness of music and language: here’s an example.

So, going back to Mendelssohn, it’s as if music has developed by abstracting certain elements from speech, leaving direct, referential meaning behind and evolving a vocabulary from pitch, timbre and the free-floating emotional states associated with them. Think for a moment of film music. It can be an interesting exercise, in the middle of a film or TV drama, to make yourself directly aware of the background music, and then imagine how your perception of what is happening would differ if it were absent. You come to realise that the music is often instructing you what to feel about a scene or a character, and it often connects with your emotions so directly that it doesn’t consciously occur to you that the feelings you experience are not your own spontaneous ones. And if you add to this the complex structures formed from key relationships and temporal development, which a professional musician would be particularly aware of, you can start to see what Mendelssohn was talking about.

The musical piece O’Brien was introducing was the ‘Dialogue between the wind and the sea’ from Debussy’s La Mer. In other words, a passage which seeks to evoke a visual and auditory scene, rather than simply exploring musical ideas and the emotions that arise directly from them. By contrast, we could imagine a description of such a scene in prose: to be effective the writer needs to choose the words whose meanings and associations come together in such a way that readers can recreate the sensory impressions, and the subjective impact of the scene, in their own minds. The music, on the other hand, can combine its direct emotional access with an auditory picture, in a highly effective way.

Rain Steam and Speed

Rain, Steam and Speed – J.M.W. Turner (National Gallery, London)

I was trying to think of an equivalent in visual art, and the painting that came to mind was this one, with its emphasis on the raw sensory feelings evoked by the scene, rather than a faithful (prosaic) portrayal. In his use of this technique, Turner was of course controversial in his time, and is now seen as a forerunner of the impressionist movement. Interestingly, what I didn’t know before Googling the painting was that he is also known to have influenced Debussy, who mentions him in his letters. Debussy was also sometimes spoken of as impressionist, but he hated this term being applied to his work, and here is a quote from one of those letters:

I am trying to do something different – an effect of reality… what the imbeciles call impressionism, a term which is as poorly used as possible, particularly by the critics, since they do not hesitate to apply it to Turner, the finest creator of mysterious effects in all the world of art.

I like to think that perhaps Debussy is also to some extent lining up with Mendelssohn here, and, besides his reference to Turner’s painting, maybe has in mind the unique form of access to our consciousness which music has, as opposed to other art forms. A portrayal in poetry would perhaps come somewhere between prose and music, given that poetry, as we’ve seen, borrows some of music’s tricks. I looked around for an example of a storm at sea in poetry: here’s a snippet from Swinburne, around the turn of the 20th century, describing a storm in the English Channel.

As a wild steed ramps in rebellion, and rears till it swerves from a backward fall,
The strong ship struggled and reared, and her deck was upright as a sheer cliff’s wall.

In two lines here we have – besides two similes with one of them extended into a metaphor – alliteration, repetition and rhyme, all couched in an irregular, bucking rhythm which suggests the movement of the ship with the sea and wind. Much in common here, then, with a musical evocation of a storm. This I take to be part of what O’Brien means by poetry ‘wanting’ to be music, and being ‘close in sensibility’ in the example he was talking about.

But I don’t see how all this implies that we should somehow demote poetry to an inferior role. Yes, it’s true that words don’t so often trigger emotions as directly, by their sound alone, as does music – except perhaps in individual cases where someone has become sensitised to a word through experience. But the Swinburne passage is an example of poetry flexing the muscles which it alone possesses, in pursuit of its goal. And even when its direct purpose is not the evocation of a specific scene, the addition of imagery to the auditory effects it commands can create a very compelling kind of ‘music’. A couple of instances that occur to me: first T. S. Eliot in Burnt Norton, from Four Quartets.

Garlic and sapphires in the mud
Clot the bedded axle-tree.
The trilling wire in the blood
Sings below inveterate scars
Appeasing long-forgotten wars.

I’m vaguely aware that there are all sorts of allusions in the poet’s mind which are beyond my awareness, but just at the level of the sound of the words combined with the immediate images they conjure, there is for me a magic about this which fastens itself in my mind, making it intensely enjoyable to repeat the words to myself, just as a snatch of music can stick in the consciousness. Another example comes from Dylan Thomas, known for his wayward and idiosyncratic use of language. This is the second stanza from Especially When the October Wind:

Shut, too, in a tower of words, I mark
On the horizon walking like the trees
The wordy shapes of women, and the rows
Of the star-gestured children in the park.
Some let me make you of the vowelled beeches,
Some of the oaken voices, from the roots
Of many a thorny shire tell you notes,
Some let me make you of the water’s speeches.

Again, I find pure pleasure in the play of sounds and images here, before ever considering what further meaning there may be. But what I also love about the whole poem (here is the poet reading it) is the way it is self-referential: while exploiting the power of words, it explicitly links language-words to images – ‘vowelled beeches’ rhymed with ‘water’s speeches’. The poet himself is ‘shut in a tower of words’.

But, returning to the comparison with music, there’s one obvious superficial difference between it and language, namely that language is generally ‘about’ something, while music is not. But of course there are plenty of exceptions: music can be, at least in part, deliberately descriptive, as we saw with Debussy, while poetry often does away with literal meaning to transform itself into a sort of word-music, as I’ve tried to show above. And another obvious point I haven’t made is that words and music are often – perhaps more often than not – yoked together in song. The voice itself can simultaneously be a musical instrument and a purveyor of meaning. And it may be that the music is recruited to add emotional resonance to the language – think opera or musical drama – or that the words serve to give the music a further dimension and personality, as often in popular music. In the light of his statement above, it’s interesting that Mendelssohn is famous particularly for his piano pieces entitled Songs Without Words. The oxymoron is a suggestive one, and he strongly resisted an attempt by a friend to put words to them.

But the question of what music itself is ‘about’ is a perplexing, and perhaps profound, one. I am intending this post to be one approach to the philosophical question of how it’s possible for one thing to be ‘about’ another: the topic known as intentionality. In an interesting paper, Music and meaning, ambiguity and evolution, Ian Cross of the Cambridge Faculty of Music explores this question (1), referring to the fact that the associations and effects of a piece of music may differ between the performer and a listener, or between different listeners. Music has, as he puts it, ‘floating intentionality’. This reminds me of a debate that has taken place about the performance of early music. For authenticity in, say, 16th-century music, some claim, it’s essential that it is played on the instruments of the time. Their opponents retort that you won’t achieve that authenticity unless you have a 16th-century audience as well.

Some might claim that most music is not ‘about’ anything but itself, or perhaps about the emotions it generates. I am not intending to come to any conclusion on that particular topic, but just to raise some questions in a fascinating area. In the next post I intend to approach this topic of intentionality from a completely different direction.


1. Cross, Ian: ‘Music and meaning, ambiguity and evolution’, in D. Miell, R. MacDonald & D. Hargreaves (eds), Musical Communication, OUP, 2004.
You can read part of it here.

Accident of Birth

Commuting days until retirement: 390

My commuter train reading in recent weeks has been provided by Hilary Mantel’s two Man Booker Prize-winning historical novels, Wolf Hall and Bring up the Bodies. If you don’t know, they are the first two of what is promised to be a trilogy covering the life of Thomas Cromwell, who rose to be Henry VIII’s right-hand man. He’s a controversial figure in history: you may have seen Robert Bolt’s play A Man for All Seasons (or the film of it), where he is portrayed as King Henry’s evil arch-fixer, who engineers the execution of the man of the title, Sir Thomas More. He is also known to have had a big part in the downfall and death of Anne Boleyn.

The unique approach of Mantel’s account is to narrate exclusively from Cromwell’s own point of view. At the opening of the first book he is being violently assaulted by the drunken, irresponsible blacksmith father whom he subsequently escapes, seeking a fortune abroad as a very young man and living on his very considerable wits. On his return to England, having gained wide experience and the command of several languages, he progresses quickly within the establishment, becoming a close advisor to Cardinal Wolsey, and later, of course, Henry VIII. I won’t create spoilers for the books by going into further detail – although if you are familiar with the relevant history you will already know some of the details. I’ll just mention that in Mantel’s portrayal he emerges as phenomenally quick-witted, but loyal to those he serves. She shows him as an essentially unassuming man, well aware of his own abilities, and stoical whenever he suffers reverses or tragedies. These qualities give him a resilience which aids his rise to some of the highest offices in the England of his time. In the books we are privy to his dreams, and to his relationships with his family – although he might appear to some as cold-blooded, he is also a man of natural feelings and passions.

Thomas Cromwell and the Duke of Norfolk

Thomas Cromwell (left) and Thomas Howard, Duke of Norfolk – both as portrayed by Hans Holbein

But the theme that kicked off my thoughts for this post was that of Cromwell’s humble origin. It’s necessarily central to the books, given that it was rare then for someone without nobility or inherited title to achieve the rank that he did. What Mantel brings out so well is the instinctive assumption that an individual’s value is entirely dependent on his or her inheritance – unquestioned in that time, as throughout most of history until the modern era. As the blacksmith’s son from Putney, Cromwell is belittled by his enemies and teased by his friends. But at the same time we watch him, with his realistic and perceptive awareness of his own position, often running rings around various blundering earls and dukes, and even subtly manipulating the thinking of the King. My illustrations show Cromwell himself and Thomas Howard, Duke of Norfolk, a jealous opponent. By all accounts Norfolk was a rather simple, plain-speaking man, and certainly without Cromwell’s intellectual gifts. So today we would perhaps see Cromwell as better qualified for the high office that both men held. But seen through 16th century eyes, Cromwell would be the anomaly, and Norfolk, with his royal lineage, the more natural holder of a seat in the Privy Council.

Throughout history there have of course been persistent outbreaks of protest from those disempowered by accident of birth. But the fundamental issues have often been obscured by the chaos and competition for privilege which result. We can most obviously point to the 18th century, with the convulsion of the French revolution, which resulted in few immediate benefits; and the foundation of a nation – America – on the ideals of equality and freedom, followed however by its enthusiastic maintenance of slavery for many years. Perhaps it wasn’t until the 19th century, and the steady, inexorable rise of the middle class, that fundamental change began. As this was happening, Darwin came along to ram home the point that any intrinsic superiority on the basis of your inheritance was illusory. Everyone’s origins were ultimately the same; what counted was how well adapted you were to the external conditions you were born into. But was this the same for human beings as for animals? The ability to thrive in the environment in which you found yourself was certainly a measure of utilitarian, or economic, value. But is this the scale on which we should value humans? It’s a question that I’ll try to show there’s still much confusion about today. Meanwhile Karl Marx was analysing human society in terms of class and mass movements, moving the emphasis away from the value of individuals – a perspective which had momentous consequences in the century to come.

But fundamental attitudes weren’t going to change quickly. In England the old class system was fairly steady on its feet until well into the 20th century. My own grandmother told me about the time that her father applied to enrol her brothers at a public school (i.e. a private school, if you’re not used to British terminology). This would have been, I estimate, between about 1905 and 1910. The headmaster of the school arrived at their house in a horse and trap to look the place over and assess their suitability. My great-grandfather had a large family, with a correspondingly large house, and all the servants one would then have had to keep the place running. He was a director of a successful wholesale grocery company – and hearing this, the headmaster politely explained that, being “in trade”, he didn’t qualify as a father of sons who could be admitted. Had he been maybe a lawyer, or a clergyman, there would have been no problem.

Let’s move on fifty years or so, to the start of the TV age. It’s very instructive to watch British television programmes from this era – or indeed films and newsreels. Presenters and commentators all have cut-glass accents that today, just 60 or so years on, appear to us impossibly affected and artificial. The working class don’t get much of a look-in at all: in the large numbers of black-and-white B-movies that were turned out at this time the principal actors have the accents of the ruling class, while working class characters appear either as unprincipled gangster types, or as lovable ‘cheeky chappies’ showing proper deference to their masters.

By this time, staying with Britain, we had the 1944 Education Act, which had the laudable motive of making a suitable education available to all, regardless of birth. But how to determine what sort of education would be right for each child? We had the infamous eleven plus exam, where in a day or two of assessment the direction of your future would be set. While it looked forward to a future of greater equality of opportunity, the conception seemed simultaneously mired in the class stratification of the past, where each child had a predetermined role and status, which no one, least of all the child himself or herself, could change. Of course this was a great step up for bright working class children who might otherwise have been neglected, and who instead received a fitting education at grammar schools. Thomas Cromwell, in a different age, could have been the archetypal grammar school boy.

But given the rigid stratification of the system, it’s not surprising that within 20 years left-wing administrations started to change things again. While the reforming Labour government of 1945-51 had many other things to concentrate on, the next one, achieving office in 1964, made education a priority, abolishing the 11 plus and introducing comprehensive schools. This established the framework which is only now starting to be seriously challenged by the policies of the current coalition government. Was the comprehensive project successful, and does it need challenging now? I’d argue that it does need challenging.

R A Butler

R A “Rab” Butler
(izquotes.com)

To return to basics, it seems to me that what’s at stake is, again, how you value an individual human being. In Cromwell’s time, as we’ve seen, no one doubted that it was all to do with the status of your forebears. But by 1944 the ambitious middle class had long been a reality, showing that you could prove your value and rise to prosperity regardless of your origins. This was now a mass phenomenon, not confined to very unusual and lucky individuals, as it had been with Cromwell. And so education realigned itself around the new social structure. But with the education minister of the time, R.A. Butler, being a patrician (if liberal-minded) Tory, perhaps it was inevitable that something of the rigidity of the old class structure would be carried over into the new education system.

So if an exam at the age of eleven effectively determines your place in society, how are we now valuing human beings? It’s their intellectual ability, and their consequent economic value, which is the determining factor. If you succeed you go to a grammar school to be primed for university, while if not, you may be given a condescending pat on the head and steered towards a less intellectually demanding trade. We would all agree that there is a more fundamental yardstick against which we measure individuals – an intrinsic, or moral, value. We’d rate the honest low-achiever over the clever crook. But somehow the system, with its rigid and merciless classification, is sweeping the more important criterion aside.

Anthony Crosland

Anthony Crosland
(stpancrasstory.org)

And so the reforming zeal of the 1960s Labour government was to remove those class-defining barriers and provide the same education for all. The education minister of that time was a noted intellectual – private school and Oxford educated – Anthony Crosland. His reported remark, supposedly made to his wife, serves to demonstrate the passion of the project: “If it’s the last thing I do, I’m going to destroy every fucking grammar school in England. And Wales and Northern Ireland”. (In Northern Ireland, it should be noted, he was less successful than elsewhere). But the remark also suggests a fixity of purpose which spread to the educational establishment for many years to come. If it was illegitimate to value children unequally, then in no circumstances should this be done.

You may or may not agree with me that the justified indignation of the time was leading to a fatal confusion between the two yardsticks I distinguished – the economic one and the moral one. And so, by the lights of Labour at that time, if we are allocating different resources to children according to their aptitudes – well, we shouldn’t. All must be equal. Yes – in the moral sense. But in the economic one? Even Karl Marx made that distinction – remember his famous slogan, “From each according to his ability, to each according to his need”? All that the reformists needed to do, in my opinion, was to take the rigidity out of the system – to let anyone aspire to a new calling that he or she can achieve, at whatever age, and under whatever circumstances their need arises.

Back to personal experience. I can remember when we were looking over primary schools for our first child – this would be in the early 90s. One particular headmaster bridled when my wife asked about provision for children of different abilities. The A-word was clearly not to be used. Yet as he talked on, there were several times that he visibly recognised that he himself was about to use it, spotted the elephant trap at the last moment, and awkwardly stepped around it. This confused man was in thrall to the educational establishment’s fixed, if unconscious, assumption that differing ability equals unequal value. (We didn’t send our children to that school.)

Over the years, these attitudes have led to a frequent refusal to make any provision for higher ability pupils, with the consequence that talent which might previously have been nurtured has been ignored. If you can afford it, of course, you can buy your way out of the system and opt for a private education. Private school pupils have consistently had the lion’s share of places at the top universities, and so the architects and supporters of the state system ideology have called for the universities to be forced to admit more applicants from that system, and to restrict those from the private sector. Is this right? I’d argue that the solution to failure in the state schools is not to try to extend the same failed ideology to the universities, but to address what is wrong in the schools. A confusion between our economic and moral valuations of individuals threatens to lead to consequences which are damaging, it seems to me, in both an economic and a moral sense.

The plans of the present UK education minister, Michael Gove, have come in for a lot of criticism. It would be outside the scope of this piece – and indeed my competence – to go into that in detail, but it does seem to me that he is making a principled and well-intentioned attempt to restore the proper distinction between those economic and moral criteria – making good use of individual ability where it can be found, without being condescending to those who are not so academic, or making the distinctions between them too rigid. And of course I haven’t addressed the issue of whether the existence of a separate private education sector is desirable – again outside the scope of this post.

Martin Luther King

Martin Luther King
(Nobel Foundation)

What, at least, all now agree on is that the original criterion of individual value we looked at – birth status – is no longer relevant. Well, almost all. Racist ideologies, of course, persist in the old attitude. A recent anniversary has reminded us of one of the defining speeches of the 20th century, that of Martin Luther King, who laid bare the failure of the USA to uphold the principles of its constitution, and famously looked forward to a time when people would be “judged not by the color of their skin but by the content of their character”. The USA, whose segregationist policies in some states he was addressing, has certainly made progress since then. But beyond the issues I have described, there are many further problems around the distinction between moral and economic values. In most societies there are those whose contribution is valued far more in the moral sense than the economic one: nurses, teachers. What, if anything, should we do about that? I don’t claim to know any easy answers.

I kicked off from the themes in Hilary Mantel’s books and embarked on a topic which I soon realised was a rather unmanageably vast one for a simple blog post. Along the way I have been deliberately contentious – please feel free to agree or disagree in the comments below. But what got me going was the way in which Mantel’s study of Cromwell takes us into the collective mind of an age when the instinctive ways of evaluating individuals were entirely different. What I don’t think anyone can reasonably disagree with is the importance of history in throwing the prejudices of our own age into a fresh and revealing perspective.