Read All About It (part 1)

Commuting days until retirement: 300

Imagine a book. It’s a thick, heavy, distinguished-looking book, with an impressive tooled leather binding, gilt-trimmed, and it has marbled page edges. A glance at the spine shows it to be a copy of Shakespeare’s complete works. It must be like many such books to be found on the shelves of libraries or well-to-do homes around the world at the present time, although it is not well preserved. The binding is starting to crumble, and much of the gilt lettering can no longer be made out. There’s also something particularly unexpected about this book, which accounts for the deterioration. Let your mental picture zoom out, and you see, not a set of book-laden shelves, or a polished wood table bearing other books and papers, but an expanse of greyish dust, bathed in bright, harsh light. The lower cover is half buried in this dust, to a depth of an inch or so, and some is strewn across the front, as if the book had been dropped or thrown down. Zoom out some more, and you see a rocky expanse of ground, stretching away to what seems like a rather close, sharply defined horizon, separating this desolate landscape from a dark sky.

Yes, this book is on the moon, and it has been the focus of a long-standing debate between my brother and sister-in-law. I had vaguely remembered one of them mentioning this some years back, and thought it would be a way in to this piece on intentionality, a topic I have been circling around warily in previous posts. To clarify: books are about things – in fact our moon-bound book is about most of the perennial concerns of human beings. What is it that gives books this quality of ‘aboutness’ – or intentionality? When all’s said and done our book boils down to a set of inert ink marks on paper. Placing it on the moon, spatially distant, and perhaps temporally distant, from human activity, leaves us with the puzzle as to how those ink marks reach out across time and space to hook themselves into that human world. And if it had been a book ‘about’, say, physics or astronomy, that reach would have been, at least in one sense, wider.

Which problem?

Well, I thought that was what my brother and sister-in-law had been debating when I first heard about it; but when I asked them it turned out that what they’d been arguing about was the question of literary merit, or more generally, intrinsic value. The book contains material that has been held in high regard by most of humanity (except perhaps GCSE students) for hundreds of years. At some distant point in space and time, perhaps after humanity has disappeared, does that value survive, contained within it, or is it entirely dependent upon who perceives and interprets it?

Two questions, then – let’s refer to them as the ‘aboutness’ question and the ‘value’ question. Although the value question wasn’t originally within the intended scope of this post, it might be worth trying to tease out how far each question might shed light on the other.

What is a book?

First, an important consideration which I think has a bearing on both questions – and which may have occurred to you already. The term ‘book’ has at least two meanings. “Give me those books” – the speaker refers to physical objects, of the kind I began the post with. “He’s written two books” – there may of course be millions of copies of each, but these two books are abstract entities which may or may not have been published. Some years back I worked for a small media company whose director was wildly enthusiastic about the possibilities of IT (that was my function), but somehow he could never get his head around the concepts involved. When we discussed some notional project, he would ask, with an air of addressing the crucial point, “So will it be a floppy disk, or a CD-ROM?” (I said it was a long time ago.) In vain I tried to get it across to him that the physical instantiation, or the storage medium, was a very secondary matter. But he had a need to imagine himself clutching some physical object, or the idea would not fly in his mind. (I should have tried to explain by using the book example, but never thought of it at the time.)

So with this in mind, we can see that the moon-bound Shakespeare is what is sometimes called in philosophy an ‘intuition pump’ – an example intended to get us thinking in a certain way, but perhaps misleadingly so. This has particular importance for the value question, it seems to me: what we value is a set of ideas and modes of expression, not some object. And so its physical, or temporal, location is not really relevant. We could object that there are cases where this doesn’t apply – what about works of art? An original Rembrandt canvas is a revered object; but if it were to be lost it would live on in its reproductions, and, crucially, in people’s minds. Its loss would be sharply regretted – but so, to an extent, would the loss of a first folio edition of Shakespeare. The difference is that for the Rembrandt, direct viewing is the essence of its appreciation, while we lose nothing from Shakespeare when watching, listening or reading, if we are not in the presence of some original artefact.

Value, we might say, does not simply travel around embedded in physical objects, but depends upon the existence of appreciating minds. This gives us a route into examining the value question – but I’m going to put that aside for the moment and return to good old ‘aboutness’ – since these thoughts also give us some leverage for developing our ideas there.

…and what is meaning?

So are we to conclude that our copy of Shakespeare itself, as it lies on the moon, has no intrinsic connection with anything of concern or meaning to us? Imagine that some disaster eliminated human life from the earth. Would the book’s links to the world beyond be destroyed at the same time, the print on its pages suddenly reduced to meaningless squiggles? This is perhaps another way in which we are misled by the imaginary book.

Cave painting

A 40,000-year-old cave painting in the El Castillo Cave in Puente Viesgo, Spain (www.spain.info)

Think of prehistoric cave paintings which have persisted, unseen, thousands of years after the deaths of those for whom they were particularly meaningful. Eventually they are found by modern men who rediscover some meaning in them. Many of them depict recognisable animals – perhaps a food source for the people of the time; and as representational images their central meaning is clear to us. But of course we can only make educated guesses at the cloud of associations they would have had for their creators, and their full significance in their culture. And other ancient cave wall markings have been discovered which are still harder to interpret – strange abstract patterns of dots and lines (see above). What’s interesting is that we can sense that there seems to have been some sort of purpose in their creation, without having any idea what it might have been.

Luttrell Psalter

A detail from the Luttrell Psalter (British Library)

Let’s look at a more recent example: the marvellous Luttrell Psalter, a 14th-century illuminated manuscript now in the British Library. (You can view it in wonderful detail by going to the British Library’s Turning the Pages application.) It’s a psalter, written in Latin, and so the subject matter is still accessible to us. Of more interest are the illustrations around the text – images showing a whole range of activities we can recognise, but as they were carried on in the medieval world. This of course is a wonderful primary historical source, but it’s also more than that. Alongside the depiction of these activities is a wealth of decoration, ranging from simple flourishes to all sorts of fantastical creatures and human-animal hybrids. Some may be symbols which no longer have meaning in today’s culture, and others perhaps just jeux d’esprit on the part of the artist. It’s mostly impossible now for us to distinguish between these.

Think also of the ‘authenticity’ debate in early music that I mentioned in Words and Music a couple of posts back. The full, authentic effect of a piece of music composed some hundreds of years ago, so one argument goes, could only be felt as the composer intended if the audience were also of his time. Indeed, even today’s music, of any genre, will have different associations for, and effects on, a listener depending on their background and experience. And indeed, it’s quite common now for artists, conceptual or otherwise, to eschew any overriding purpose as to the meaning of their work, intending instead that each person interpret it in his or her own idiosyncratic way.

Rather too many examples, perhaps, to illustrate the somewhat obvious point that meaning is not an intrinsic property of inert symbols, such as the printed words in our lunar Shakespeare. In transmitting their sense and associations from writer to reader the symbols depend upon shared knowledge, cultural assumptions and habits of thought; something about the symbols, or images, must be recognisable by both creator and consumer. When this is not the case we are just left with a curious feeling, as when looking at that abstract cave art. We get a strong sense of meaning and intention, but the content of the thoughts behind it is entirely unknown to us. Perhaps some unthinkably different aliens will have the same feeling on finding the Voyager robot spacecraft, which was sent on its way with some basic information about the human race and our location in the galaxy. Looking at the cave patterns we can detect that information is present – but meaning is more than just information. Symbols can carry the latter without intrinsically containing the former; otherwise we’d be able to know what those cave patterns signified.
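That distinction between information and meaning can be made concrete. Shannon’s measure of information – the basis of the information theory discussed below – quantifies a stream of symbols purely from their frequencies, with no access to what, if anything, they signify. Here is a minimal Python sketch (the example strings are my own):

```python
import math
from collections import Counter

def shannon_entropy(symbols: str) -> float:
    """Estimate information content in bits per symbol,
    computed from symbol frequencies alone - never from meaning."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# The measure is indifferent to whether the text signifies anything:
print(shannon_entropy("to be or not to be"))  # meaningful English
print(shannon_entropy("eb ot ton ro eb ot"))  # same symbols, reversed
```

The two strings score identically, since their symbol frequencies are the same – yet one is Shakespeare and the other is gibberish. The information is all there; the meaning is not in the symbols.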

Physical signs can’t embody meaning of themselves, apart from the creator and the consumer, any more than a saw can cut wood without a carpenter to wield it. Tool use, indeed, in early man or advanced animals, is an indicator of intentionality – the ability to form abstract ‘what if’ concepts about what might be done, before going ahead and doing it. A certain cinematic moment comes to mind: the one in Kubrick’s 2001: A Space Odyssey where the bone wielded as a tool by the primate creature in the distant past is thrown into the air, and cross-fades into a spaceship of the 21st century.

Here be dragons

Information theory developed during the 20th century, and is behind all of that period’s advances in computing and communications. Computers are like the examples of symbols we have looked at: the states of their circuits and storage media contain symbolic information but are innocent of meaning. Which thought, it seems to me, leads us to the heart of the perplexity around the notion of aboutness, or intentionality. Brains are commonly thought of as sophisticated computers of a sort, which to some extent at least they must be. So how is it that when, in a similar sort of way, information is encoded in the neurochemical states of our brains, it is magically invested with meaning? In his well-known book A Brief History of Time, Stephen Hawking uses a compelling phrase when reflecting on the possibility of a universal theory. Such a theory would be “just a set of rules and equations”. But, he asks,

What is it that breathes fire into the equations and makes a universe for them to describe?

I think that, in a similar spirit, we have to ask: what breathes fire into our brain circuits to add meaning to their information content?

The Chinese Room

If you’re interested enough to have come this far with me, you will probably know about a famous philosophical thought experiment which serves to support the belief that my question is indeed a meaningful and legitimate one – John Searle’s ‘Chinese Room’ argument. But I’ll explain it briefly anyway; skip the next paragraph if you don’t need the explanation.

Chinese Room

A visualisation of John Searle inside the Chinese Room

Searle imagines himself cooped up in a rather bizarre room where he can only communicate with the outside world by passing and receiving notes through an aperture. Within the room he is equipped only with an enormous card filing system containing a set of Chinese characters and rules for manipulating them. He has Chinese interlocutors outside the room, who pass in pieces of paper bearing messages in Chinese. Unable to understand Chinese, he goes through a cumbersome process of matching and manipulating the Chinese symbols using his filing system. Eventually this process yields a series of characters as an answer, which are transcribed on to another piece of paper and passed back out. The people outside (if they are patient enough) get the impression that they are having a conversation with someone inside the room who understands and responds to their messages. But, as Searle says, no understanding is taking place inside the room. As he puts it, the room deals with syntax, not semantics – or in the terms we have been using, symbols, not meaning. Searle’s purpose is to demolish the claims of what he calls ‘strong AI’ – the claim that a computer system with this sort of capability could truly understand what we tell it, as judged from its ability to respond and converse. The Chinese Room could be functionally identical to such a system (only much slower) but Searle is demonstrating that it is devoid of anything that we could call understanding.
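In computational terms, the room is nothing more than a lookup-and-transform procedure. A toy sketch in Python makes the point – the rule table and the phrases in it are invented for illustration, not anything from Searle:

```python
# A toy 'Chinese Room': the program matches incoming symbols against
# a rule table and emits the prescribed reply. Nothing in the loop
# understands the tokens it shuffles.
RULES = {
    "你好吗": "我很好",          # "How are you?" -> "I'm fine"
    "你叫什么名字": "我叫约翰",   # "What's your name?" -> "I'm John"
}

def room(message: str) -> str:
    # Pure syntax: look the symbols up; if no rule matches,
    # hand back a stock string ("Please say that again").
    return RULES.get(message, "请再说一遍")

print(room("你好吗"))  # from outside, it looks like conversation
```

Scale the table up enormously, add rules that combine and transform symbols, and you have something with the outward behaviour of a speaker of Chinese – which is exactly Searle’s scenario, and exactly where, he argues, no understanding resides.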

If you have an iPhone you’ll probably have used an app called ‘Siri’ which has just this sort of capability – and there are equivalents on other types of phone. When combined with the remote server that it communicates with, it can come up with useful and intelligent answers to questions. In fact, you don’t have to try very hard to make it come up with bizarre or useless answers, or flatly fail. But that’s just a question of degree – no doubt future versions will be more sophisticated. We might loosely say that Siri ‘understands’ us – but of course it’s really just a rather more efficient Chinese Room. Needless to say, Searle’s argument has generated years of controversy. I’m not going to enter into that debate, but will just say that I find the argument convincing; I don’t think that Siri can ‘understand’ me.

So if we think of understanding as the ‘fire’ that’s breathed into our brain circuits, where does it come from? Think of the experience of reading a gripping novel. You may be physically reading the words, but you’re not aware of it. ‘Understanding’ is hardly an issue, in that it goes without saying. More than understanding, you are living the events of the novel, with a succession of vivid mental images. Another scenario: you are a parent, and your child comes home from school to tell you breathlessly about some playground encounter that day – maybe positive or negative. You are immediately captivated, visualising the scene, maybe informed by memories of your own school experiences. In both of these cases, what you are doing is not really to do with processing information – that’s just the stimulus that starts it all off. You are experiencing – the information you recognise has kicked off conscious experiences; and yes, we are back with our old friend consciousness.

Understanding and consciousness

Searle also links understanding to consciousness; his position, as I understand it, is that consciousness is a specifically biological function, not to be found in clever artefacts such as computers. But he insists that it’s purely a function of physical processes nonetheless – and I find it difficult to understand this view. If biologically evolved creatures can produce consciousness as a by-product of their physical functioning, how can he be so sure that computers cannot? He could be right, but it seems to be a mere dogmatic assertion. I agree with him that you can’t have meaning – and hence intentionality – without consciousness. To be sure, although he denies it, he leaves open the possibility that a computer (and thus, presumably, the Chinese Room as a whole) could be conscious. But he does have going for him the immense implausibility of that idea.

Dog

How much intentionality?

So does consciousness automatically bring intentionality with it? In my last post I referred to a dog’s inability to understand or recognise a pointing gesture. We assume that dogs have consciousness of some sort – in a simpler form, they have some of the characteristics which lead us to assume that other humans like ourselves have it. But try thinking yourself for a moment into what it might be to inhabit the mind of a dog. Your experiences consist of the here and now (as ours do) but probably not a lot more. There’s no evidence that a dog’s awareness of the past consists of more than simple learned associations of a Pavlovian kind. They can recognise ‘walkies’, but it seems a mere trigger for a state of excitement, rather than a gateway to a rich store of memories. And they don’t have the brain power to anticipate the future. I know some dog owners might dispute these points – but even if a dog’s awareness extends beyond ‘is’ to ‘was’ and ‘will be’, it surely doesn’t include ‘might be’ or ‘could have been’. Add to this the dog’s inability to use offered information to infer that the mind of another individual contains a truth about the world that hitherto has not been in your own mind (i.e. the ability to understand pointing – see the previous post) and it starts to become clearer what is involved in intentionality. Mere unreflective experiencing of the present moment doesn’t lead to the notion of the objects of your thought, as distinct from the thought itself. I don’t want to offend dog-owners – maybe their pets’ abilities extend beyond that; but there are certainly other creatures – conscious ones, we assume – who have no such capacity.

So intentionality requires consciousness, but isn’t synonymous with it: in the jargon, consciousness is necessary but not sufficient for intentionality. As hinted earlier, the use of tools is perhaps the simplest indicator of what is sufficient – the ability to imagine how something could be done, and then to take action to make it a reality. And the earliest surviving evidence from prehistory of something resembling a culture is taken to be the remains of ancient graves, where objects surrounding a body indicate that thought was given to the body’s destiny – in other words, there was a concept of what may or may not happen in the future. It’s with these capabilities, we assume, that consciousness started to co-exist with the mental capacity which made intentionality possible.

So some future civilisation, alien or otherwise, finding that Shakespeare volume on the moon, will have similar thoughts to those that we would have on discovering the painted patterns in the cave. They’ll conclude that there were beings in our era who possessed the capacity for intentionality, but they won’t have the shared experience which enables them to deduce what the printed symbols are about. And, unless they have come to understand better than we do what the nature of consciousness is, they won’t have any better idea what the ultimate nature of intentionality is.

The value of what they would find is another question, which I said I would return to – and will. But this post is already long enough, and it’s too long since I last published one – so I’ll deal with that topic next time.

A Few Pointers

Commuting days until retirement: 342

Michelangelo's finger

The act of creation, in the detail from the Sistine Chapel ceiling which provides Tallis’s title

After looking in the previous post at how certain human artistic activities map on to the world at large, let’s move our attention to something that seems much more primitive. Primitive, at any rate, in the sense that most small children become adept at it before they develop any articulate speech. This post is prompted by a characteristically original book by Raymond Tallis I read a few years back – Michelangelo’s Finger. Tallis shows how pointing is a quintessentially human activity, depending on a whole range of capabilities that are exclusive to humans. In the first place, it could be thought of as a language in itself – Pointish, as Tallis calls it. But aren’t pointing fingers, or arrows, obvious in their meaning, and capable of only one interpretation? I’ve thought of a couple of examples to muddy the waters a little.

Pointing – but which way?

TV aerial

Which way does it point?

The first is perhaps a little trivial, even silly. Look at this picture of a TV aerial. If asked where it is pointing, you would say the TV transmitter, which will be in the direction of the thin end of the aerial. But if we turn it sideways, as I’ve done underneath, we find what we would naturally interpret as an arrow pointing in the opposite direction. It seems that our basic arrow understanding is weakened by the aerial’s appearance and overlaid by other considerations, such as a sense of how TV aerials work.

My second example is something I heard about which is far more profound and interesting, and deliciously counter-intuitive. It has to do with the stages by which a child learns language, and also with signing, as used by deaf people. Two facts are needed to explain the context: the first is that, as you may know, sign language is not a mere substitute for language, but is itself a language in every sense. This can be demonstrated in numerous ways: for example, conversing in sign has been shown to use exactly the same area of the brain as does the use of spoken language. And, unlike those rare and tragic cases where a child is not exposed to language in early life, and consequently never develops a proper linguistic capability, young children using only sign language at this age are not similarly handicapped. Generally, for most features of spoken language, equivalents can be found in signing. (To explore this further, you could try Oliver Sacks’ book Seeing Voices.) The second fact concerns conventional language development: at a certain stage, many children, hearing themselves referred to as ‘you’, come to think of ‘you’ as a name for themselves, and start to call themselves ‘you’; I remember both my children doing this.

And so here’s the payoff: in most forms of sign language, the sign for ‘you’ is simply to point at the person one is speaking to. But children who are learning signing as a first language will make exactly the same mistake as their hearing counterparts, pointing at the person they are addressing in order to refer to themselves. We could say, perhaps, that they are still learning the vocabulary of Pointish. The aerial example didn’t seem very important, as it merely involved a pointing action that we ascribe to a physical object. Of course the object itself can’t have an intention; it’s only a human interpretation we are considering, which can work either way. This sign language example is more surprising because the action of pointing – the intention – is a human one, and in thinking of it we implicitly transfer our consciousness into the mind of the pointer, and attempt to get our head around how they can make a sign whose meaning is intuitively obvious to us, but intend it in exactly the opposite sense.

What’s involved in pointing?

Tallis teases out how pointing relies on a far more sophisticated set of mental functions than it might seem to involve at first sight. As a first stab at demonstrating this, there is the fact that pointing, either the action or the understanding of it, appears to be absent in animals – Tallis devotes a chapter to this. He describes a slightly odd-feeling experience which I have also had, when throwing a stick for a dog to retrieve. The animal is often in a high state of excitement and distraction at this point, and dogs do not have very keen sight. Consequently it often fails to notice that you have actually thrown the stick, and continues to stare at you expectantly. You point vigorously with an outstretched arm: “It’s over there!” Intuitively, you feel the dog should respond to that, but of course it just continues to watch you even more intensely, and you realise that it simply has no notion of the meaning of the gesture – no notion, in fact, of ‘meaning’ at all. You may object that there is a breed of dog called a Pointer, because it does just that – points. But let’s just examine for a moment what pointing involves.

Primarily, in most cases, the key concept is attention: you may want to draw the attention of another to something. Or maybe, if you are creating a sign with an arrow, you may be indicating by proxy where others should go, on the assumption that they have a certain objective. Attention, objective: these are mental entities which we can only ascribe to others if we first have a theory of mind – that is, if we have already achieved the sophisticated ability to infer that others have minds, and a private world, like our own. Young children will normally start to point before they have very much speech (as opposed to language – understanding develops in advance of expression). It’s significant that autistic children usually don’t show any pointing behaviour at this stage. Lack of insight into the minds of others – an under-developed theory of mind – is a defining characteristic of autism.

So, returning to the example of the dog, we can take it that for an animal to show genuine pointing behaviour, it must have a developed notion of other minds – which seems unlikely. The action of the Pointer dog looks more like instinctive behaviour, evolved through the cooperation of packs and accentuated by selective breeding. There are other examples of instinctive pointing in animal species: that of bees is particularly interesting, with the worker ‘dance’ that communicates to the hive where a food source is. This, however, can be analysed down into a sequence of instinctive automatic responses which will always take the same form in the same circumstances, showing no sign of intelligent variation. Chimpanzees can be trained to point, and show some capacity for imitating humans, but there are no known examples of their use of pointing in the wild.

But there is some recent research which suggests a counter-example to Tallis’s assertion that pointing is unknown in animals. This shows elephants responding to human pointing gestures, and it seems there is a possibility that they point spontaneously with their trunks. This rather fits with other human-like behaviour that has been observed in elephants, such as apparently grieving for their dead. Grieving, it seems to me, has something in common with pointing, in that it also implies a theory of mind; the death of another individual is not just a neutral change in the shape and pattern of your world, but the loss of another mind. It’s not surprising that, in investigating ancient remains, we take signs of burial ritual to be a potent indicator of the emergence of a sophisticated civilisation of people who are able to recognise and communicate with minds other than their own – probably the emergence of language, in fact.

Pointing in philosophy

We have looked at the emergence of pointing and language in young children; and the relation between the two has an important place in the history of philosophy. There’s a simple, intuitive notion that language is taught to a child by pointing to objects and saying the word for them – so-called ostensive definition. And it can’t be denied that this has a place. I can remember both of my children taking obvious pleasure in what was, to them, a discovery – each time they pointed to something they could elicit a name for it from their parent. In a famous passage at the start of Philosophical Investigations, Wittgenstein identifies this notion – of ostensive definition as the cornerstone of language learning – in the writings of St Augustine, and takes him to task over it. Wittgenstein goes on to show, with numerous examples, how dynamic and varied an activity the use of language is, in contrast to the monolithic and static picture suggested by Augustine (and indeed by Wittgenstein himself in his earlier incarnation). We already have our own example in the curious and unique way in which the word ‘you’ and its derivatives are used, and a sense of the stages by which children develop the ability to use it correctly.

The Boyhood of Raleigh

Perhaps the second most famous pointing finger in art: Millais’ The Boyhood of Raleigh

The passage from Augustine also suggests a notion of pointing as a primitive, primary action, needing no further explanation. However, we’ve seen how it relies on a prior set of sophisticated abilities: having the notion that oneself is distinct from the world – a world that contains other minds like one’s own, whose attention may have different contents from one’s own; that it’s possible to communicate meaning by gestures to modify those contents; an idea of how these gestures can be ‘about’ objects within the world; and that there needs to be agreement on how to interpret the gestures, which aren’t always as intuitive and unambiguous as we may imagine. As Tallis rather nicely puts it, the arch of ostensive definition is constructed from these building bricks, with the pointing action as the coping stone which completes it.

The theme underlying both this and my previous post is the notion of how one thing can be ‘about’ another – the notion of intentionality. This idea is presented to us in an especially stark way when it comes to the action of pointing. In the next post I intend to approach that more general theme head-on.

Words and Music

Commuting days until retirement: 360

Words and Music

This piece was kicked off by some comments I heard from the poet Sean O’Brien on a recent radio programme. Speaking on the BBC’s Private Passions, where guests choose favourite music, and talking about Debussy, he said:

Poetry is always envious of music because that’s what poetry wants to be, whereas music has no need to be poetry. So [poets] are always following in the wake of music – but the two come quite close in sensibility here, it seems to me.

Discuss. Well, for me this immediately brought to mind a comment I once heard attributed to the composer Mendelssohn, to the effect that ‘music is not too vague for words, but too precise.’ So I went and did some Googling, and here’s the original quote:

People often complain that music is too ambiguous, that what they should think when they hear it is so unclear, whereas everyone understands words. With me, it is exactly the opposite, and not only with regard to an entire speech but also with individual words. These, too, seem to me so ambiguous, so vague, so easily misunderstood in comparison to genuine music, which fills the soul with a thousand things better than words. The thoughts which are expressed to me by music that I love are not too indefinite to be put into words, but on the contrary, too definite.

(Source: Wikiquote, where you can also see the original German.)

These two statements seem superficially similar – are they saying the same thing, from different standpoints? One difference, of course, is that O’Brien is specifically talking of poetry rather than words in general. In comparison with prose, poetry shares more of the characteristics of music: it employs rhythm, cadence, phrasing and repetition, and is frequently performed rather than read from the page; and music of course is virtually exclusively a performance art. So that’s a first thought about how poetry departs from the prosaic and finds itself approaching the territory inhabited by music.

But what is the precision which Mendelssohn insists is music’s preserve? It’s true that music is entirely bound up with the mathematics of sound frequency ratios in intervals between notes: it was this that gave Pythagoras and others in antiquity their obsession with the mystique of numbers. A musician need not be grounded deeply in mathematical theory, but he or she will always be intensely aware of the differing characters of musical intervals – as is anyone who enjoys music at all, if perhaps more indirectly.
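For readers who like the arithmetic spelled out: the intervals Pythagoras studied correspond to small whole-number frequency ratios. A short Python sketch (the ratio values are the standard just-intonation ones; the note names in the comments are my own annotations):

```python
# Classic just-intonation interval ratios, as studied by the Pythagoreans.
INTERVALS = {
    "octave": (2, 1),
    "perfect fifth": (3, 2),
    "perfect fourth": (4, 3),
    "major third": (5, 4),
}

def interval_above(base_hz: float, ratio: tuple) -> float:
    """Frequency of the note lying the given interval above a base pitch."""
    numerator, denominator = ratio
    return base_hz * numerator / denominator

a4 = 440.0  # concert A
for name, ratio in INTERVALS.items():
    print(f"{name:>14}: {interval_above(a4, ratio):.1f} Hz")
# A perfect fifth above A4 is 440 * 3/2 = 660.0 Hz (roughly the E above).
```

The simpler the ratio, the more consonant the interval tends to sound – precision, in one quite literal sense, built into the fabric of music.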

I haven’t read a lot of theorising on this topic, but it seems to me that there is a strong link here with everyday speech. In my own language, as, I should imagine, in most others, pitch and phrasing indicate the emotional register of what you are saying, and hence an important element of the meaning. One can imagine this evolving from the less articulate cries of our ancestors, with an awareness of intervals of pitch developing as a method of communication, fostering group cohesion and hence conferring Darwinian survival value. Indeed, in some languages, such as Chinese, intonation can be integral to the meaning of a word. And there’s scientific evidence of the closeness of music and language: here’s an example.

So, going back to Mendelssohn, it’s as if music has developed by abstracting certain elements from speech, leaving direct, referential meaning behind and evolving a vocabulary from pitch, timbre and the free-floating emotional states associated with them. Think for a moment of film music. It can be an interesting exercise, in the middle of a film or TV drama, to make yourself directly aware of the background music, and then imagine how your perception of what is happening would differ if it were absent. You come to realise that the music is often instructing you what to feel about a scene or a character, and it often connects with your emotions so directly that it doesn’t consciously occur to you that the feelings you experience are not your own spontaneous ones. And if you add to this the complex structures formed from key relationships and temporal development, which a professional musician would be particularly aware of, you can start to see what Mendelssohn was talking about.

The musical piece O’Brien was introducing was the ‘Dialogue between the wind and the sea’ from Debussy’s La Mer. In other words, a passage which seeks to evoke a visual and auditory scene, rather than simply exploring musical ideas and the emotions that arise directly from them. By contrast, we could imagine a description of such a scene in prose: to be effective the writer needs to choose the words whose meanings and associations come together in such a way that readers can recreate the sensory impressions, and the subjective impact of the scene, in their own minds. The music, on the other hand, can combine its direct emotional access with an auditory picture, in a highly effective way.

Rain Steam and Speed

Rain, Steam and Speed – J.M.W. Turner (National Gallery, London)

I was trying to think of an equivalent in visual art, and the painting that came to mind was this one, with its emphasis on the raw sensory feelings evoked by the scene, rather than a faithful (prosaic) portrayal. In his use of this technique, Turner was of course controversial in his time, and is now seen as a forerunner of the impressionist movement. Interestingly, what I didn’t know before Googling the painting was that he is also known to have influenced Debussy, who mentions him in his letters. Debussy was also sometimes spoken of as impressionist, but he hated this term being applied to his work, and here is a quote from one of those letters:

I am trying to do something different – an effect of reality… what the imbeciles call impressionism, a term which is as poorly used as possible, particularly by the critics, since they do not hesitate to apply it to Turner, the finest creator of mysterious effects in all the world of art.

I like to think that perhaps Debussy is also to some extent lining up with Mendelssohn here, and, besides his reference to Turner’s painting, maybe has in mind the unique form of access to our consciousness which music has, as opposed to other art forms. A portrayal in poetry would perhaps come somewhere between prose and music, given that poetry, as we’ve seen, borrows some of music’s tricks. I looked around for an example of a storm at sea in poetry: here’s a snippet from Swinburne, around the turn of the 20th century, describing a storm in the English Channel.

As a wild steed ramps in rebellion, and rears till it swerves from a backward fall,
The strong ship struggled and reared, and her deck was upright as a sheer cliff’s wall.

In two lines here we have – besides two similes with one of them extended into a metaphor – alliteration, repetition and rhyme, all couched in an irregular, bucking rhythm which suggests the movement of the ship with the sea and wind. Much in common here, then, with a musical evocation of a storm. This I take to be part of what O’Brien means by poetry ‘wanting’ to be music, and being ‘close in sensibility’ in the example he was talking about.

But I don’t see how all this implies that we should somehow demote poetry to an inferior role. Yes, it’s true that words don’t so often trigger emotions as directly, by their sound alone, as does music – except perhaps in individual cases where someone has become sensitised to a word through experience. But the Swinburne passage is an example of poetry flexing the muscles which it alone possesses, in pursuit of its goal. And even when its direct purpose is not the evocation of a specific scene, adding imagery to the auditory effects it commands can create a very compelling kind of ‘music’. A couple of instances that occur to me: first T. S. Eliot in Burnt Norton, from Four Quartets.

Garlic and sapphires in the mud
Clot the bedded axle-tree.
The trilling wire in the blood
Sings below inveterate scars
Appeasing long-forgotten wars.

I’m vaguely aware that there are all sorts of allusions in the poet’s mind which are beyond my awareness, but just at the level of the sound of the words combined with the immediate images they conjure, there is for me a magic about this which fastens itself in my mind, making it intensely enjoyable to repeat the words to myself, just as a snatch of music can stick in the consciousness. Another example, from Dylan Thomas, known for his wayward and idiosyncratic use of language. This is the second stanza from Especially When the October Wind:

Shut, too, in a tower of words, I mark
On the horizon walking like the trees
The wordy shapes of women, and the rows
Of the star-gestured children in the park.
Some let me make you of the vowelled beeches,
Some of the oaken voices, from the roots
Of many a thorny shire tell you notes,
Some let me make you of the water’s speeches.

Again, I find pure pleasure in the play of sounds and images here, before ever considering what further meaning there may be. But what I also love about the whole poem (here is the poet reading it) is the way it is self-referential: while exploiting the power of words, it explicitly links language-words to images – ‘vowelled beeches’ rhymed with ‘water’s speeches’. The poet himself is ‘shut in a tower of words’.

But, returning to the comparison with music, there’s one obvious superficial difference between it and language, namely that language is generally ‘about’ something, while music is not. But of course there are plenty of exceptions: music can be, at least in part, deliberately descriptive, as we saw with Debussy, while poetry often does away with literal meaning to transform itself into a sort of word-music, as I’ve tried to show above. And another obvious point I haven’t made is that words and music are often – perhaps more often than not – yoked together in song. The voice itself can simultaneously be a musical instrument and a purveyor of meaning. And it may be that the music is recruited to add emotional resonance to the language – think opera or musical drama – or that the words serve to give the music a further dimension and personality, as often in popular music. In the light of his statement above, it’s interesting that Mendelssohn is famous particularly for his piano pieces entitled Songs Without Words. The oxymoron is a suggestive one, and he strongly resisted an attempt by a friend to put words to them.

But the question of what music itself is ‘about’ is a perplexing, and perhaps profound one. I am intending this post to be one approach to the philosophical question of how it’s possible for one thing to be ‘about’ another: the topic known as intentionality. In an interesting paper, Music and meaning, ambiguity and evolution, Ian Cross of the Cambridge Faculty of Music explores this question (1), referring to the fact that the associations and effects of a piece of music may differ between the performer and a listener, or between different listeners. Music has, as he puts it, ‘floating intentionality’. This reminds me of a debate that has taken place about the performance of early music. For authenticity in, say, 16th-century music, some claim, it’s essential that it is played on the instruments of the time. Their opponents retort that you won’t achieve that authenticity unless you have a 16th-century audience as well.

Some might claim that most music is not ‘about’ anything but itself, or perhaps about the emotions it generates. I am not intending to come to any conclusion on that particular topic, but just to raise some questions in a fascinating area. In the next post I intend to approach this topic of intentionality from a completely different direction.


1. Cross, Ian: Music and meaning, ambiguity and evolution, in D. Miell, R. MacDonald & D. Hargreaves, Musical Communication, OUP, 2004.
You can read part of it here.

Accident of Birth

Commuting days until retirement: 390

My commuter train reading in recent weeks has been provided by Hilary Mantel’s two Man Booker Prize-winning historical novels, Wolf Hall and Bring up the Bodies. If you don’t know, they are the first two of what is promised to be a trilogy covering the life of Thomas Cromwell, who rose to be Henry VIII’s right-hand man. He’s a controversial figure in history: you may have seen Robert Bolt’s play A Man for All Seasons (or the film of it), where he is portrayed as King Henry’s evil arch-fixer, who engineers the execution of the man of the title, Sir Thomas More. He is also known to have had a big part in the downfall and death of Anne Boleyn.

The unique approach of Mantel’s account is to narrate exclusively from Cromwell’s own point of view. At the opening of the first book he is being violently assaulted by the drunken, irresponsible blacksmith father whom he subsequently escapes, seeking a fortune abroad as a very young man, and living on his very considerable wits. On his return to England, having gained wide experience and the command of several languages, he progresses quickly within the establishment, becoming a close advisor to Cardinal Wolsey, and later, of course, Henry VIII. I won’t create spoilers for the books by going into further detail – although if you are familiar with the relevant history you will already know some of those details. I’ll just mention that in Mantel’s portrayal he emerges as phenomenally quick-witted, but loyal to those he serves. She shows him as an essentially unassuming man, well aware of his own abilities, and stoical whenever he suffers reverses or tragedies. These qualities give him a resilience which aids his rise to some of the highest offices in the England of his time. In the books we are privy to his dreams, and his relationships with his family – although he might appear to some as cold-blooded, he is also a man of natural feelings and passions.

Thomas Cromwell and the Duke of Norfolk

Thomas Cromwell (left) and Thomas Howard, Duke of Norfolk – both as portrayed by Hans Holbein

But the theme that kicked off my thoughts for this post was that of Cromwell’s humble origin. It’s necessarily central to the books, given that it was rare then for someone without nobility or inherited title to achieve the rank that he did. What Mantel brings out so well is the instinctive assumption that an individual’s value is entirely dependent on his or her inheritance – unquestioned in that time, as throughout most of history until the modern era. As the blacksmith’s son from Putney, Cromwell is belittled by his enemies and teased by his friends. But at the same time we watch him, with his realistic and perceptive awareness of his own position, often running rings around various blundering earls and dukes, and even subtly manipulating the thinking of the King. My illustrations show Cromwell himself and Thomas Howard, Duke of Norfolk, a jealous opponent. By all accounts Norfolk was a rather simple, plain-speaking man, and certainly without Cromwell’s intellectual gifts. So today we would perhaps see Cromwell as better qualified for the high office that both men held. But seen through 16th century eyes, Cromwell would be the anomaly, and Norfolk, with his royal lineage, the more natural holder of a seat in the Privy Council.

Throughout history there have of course been persistent outbreaks of protest from those disempowered by accident of birth. But the fundamental issues have often been obscured by the chaos and competition for privilege which result. We can most obviously point to the 18th century, with the convulsion of the French revolution, which resulted in few immediate benefits; and the foundation of a nation – America – on the ideals of equality and freedom, followed however by its enthusiastic maintenance of slavery for some years. Perhaps it wasn’t until the 19th century, and the steady, inexorable rise of the middle class, that fundamental change began. As this was happening, Darwin came along to ram home the point that any intrinsic superiority on the basis of your inheritance was illusory. Everyone’s origins were ultimately the same; what counted was how well adapted you were to the external conditions you were born into. But was this the same for human beings as for animals? The ability to thrive in the environment in which you found yourself was certainly a measure of utilitarian, or economic value. But is this the scale on which we should value humans? It’s a question that I’ll try to show there’s still much confusion about today. Meanwhile Karl Marx was analysing human society in terms of class and mass movements, moving the emphasis away from the value of individuals – a perspective which had momentous consequences in the century to come.

But fundamental attitudes weren’t going to change quickly. In England the old class system was fairly steady on its feet until well into the 20th century. My own grandmother told me about the time that her father applied to enrol her brothers at a public school (i.e. a private school, if you’re not used to British terminology). This would have been, I estimate, between about 1905 and 1910. The headmaster of the school arrived at their house in a horse and trap to look the place over and assess their suitability. My great-grandfather had a large family, with a correspondingly large house, and all the servants one would then have had to keep the place running. He was a director of a successful wholesale grocery company – and hearing this, the headmaster politely explained that, being “in trade” he didn’t qualify as a father of sons who could be admitted. Had he been maybe a lawyer, or a clergyman, there would have been no problem.

Let’s move on fifty years or so, to the start of the TV age. It’s very instructive to watch British television programmes from this era – or indeed films and newsreels. Presenters and commentators all have cut-glass accents that today, just 60 or so years on, appear to us impossibly affected and artificial. The working class don’t get much of a look-in at all: in the large numbers of black-and-white B-movies that were turned out at this time the principal actors have the accents of the ruling class, while working class characters appear either as unprincipled gangster types, or as lovable ‘cheeky chappies’ showing proper deference to their masters.

By this time, staying with Britain, we had the 1944 Education Act, which had the laudable motive of making a suitable education available to all, regardless of birth. But how to determine what sort of education would be right for each child? We had the infamous eleven plus exam, where in a day or two of assessment the direction of your future would be set. While it looked forward to a future of greater equality of opportunity, the conception seemed simultaneously mired in the class stratification of the past, where each child had a predetermined role and status, which no one, least of all the child himself or herself, could change. Of course this was a great step up for bright working class children who might otherwise have been neglected, and instead received a fitting education at grammar schools. Thomas Cromwell, in a different age, could have been the archetypal grammar school boy.

But given the rigid stratification of the system, it’s not surprising that within 20 years left-wing administrations started to change things again. While the reforming Labour government of 1945-51 had many other things to concentrate on, the next one, achieving office in 1964, made education a priority, abolishing the 11 plus and introducing comprehensive schools. This established the framework which is only now starting to be seriously challenged by the policies of the current coalition government. Was the comprehensive project successful, and does it need challenging now? I’d argue that it does.

R A Butler

R A “Rab” Butler
(izquotes.com)

To return to basics, it seems to me that what’s at stake is, again, how you value an individual human being. In Cromwell’s time as we’ve seen, no one doubted that it was all to do with the status of your forbears. But by 1944 the ambitious middle class had long been a reality, showing that you could prove your value and rise to prosperity regardless of your origins. This was now a mass phenomenon, not confined to very unusual and lucky individuals, as it had been with Cromwell. And so education realigned itself around the new social structure. But with the education minister of the time, R.A. Butler, being a patrician (if liberal-minded) Tory, perhaps it was inevitable that something of the rigidity of the old class structure would be carried over into the new education system.

So if an exam at the age of eleven effectively determines your place in society, how are we now valuing human beings? It’s their intellectual ability, and their consequent economic value which is the determining factor. If you succeed you go to a grammar school to be primed for university, while if not, you may be given a condescending pat on the head and steered towards a less intellectually demanding trade. We would all agree that there is a more fundamental yardstick against which we measure individuals – an intrinsic, or moral value. We’d rate the honest low-achiever over the clever crook. But somehow the system, with its rigid and merciless classification, is sweeping the more important criterion aside.

Anthony Crosland

Anthony Crosland
(stpancrasstory.org)

And so the reforming zeal of the 1960s Labour government was to remove those class-defining barriers and provide the same education for all. The education minister of that time was a noted intellectual – private school and Oxford educated – Anthony Crosland. His reported remark, supposedly made to his wife, serves to demonstrate the passion of the project: “If it’s the last thing I do, I’m going to destroy every fucking grammar school in England. And Wales and Northern Ireland”. (In Northern Ireland, it should be noted, he was less successful than elsewhere). But the remark also suggests a fixity of purpose which spread to the educational establishment for many years to come. If it was illegitimate to value children unequally, then in no circumstances should this be done.

You may or may not agree with me that the justified indignation of the time was leading to a fatal confusion between the two yardsticks I distinguished – the economic one and the moral one. And so, by the lights of Labour at that time, if we are allocating different resources to children according to their aptitudes – well, we shouldn’t. All must be equal. Yes – in the moral sense. But in the economic one? Even Karl Marx made that distinction – remember his famous slogan, “From each according to his ability, to each according to his need”? All that the reformists needed to do, in my opinion, was to take the rigidity out of the system – to let anyone aspire to a new calling that he or she can achieve, at whatever age, and under whatever circumstances their need arises.

Back to personal experience. I can remember when we were looking over primary schools for our first child – this would be in the early 90s. One particular headmaster bridled when my wife asked about provision for children of different abilities. The A-word was clearly not to be used. Yet as he talked on, there were several times that he visibly recognised that he himself was about to use it, spotted the elephant trap at the last moment, and awkwardly stepped around it. This confused man was in thrall to the educational establishment’s fixed, if unconscious, assumption that differing ability equals unequal value. (We didn’t send our children to that school.)

Over the years, these attitudes have led to a frequent refusal to make any provision for higher ability pupils, with the consequence that talent which might previously have been nurtured has been ignored. If you can afford it, of course, you can buy your way out of the system and opt for a private education. Private school pupils have consistently had the lion’s share of places at the top universities, and so the architects and supporters of the state system ideology have called for the universities to be forced to admit more applicants from that system, and to restrict those from the private sector. Is this right? I’d argue that the solution to failure in the state schools is not to try and extend the same failed ideology to the universities, but to try to address what is wrong in the schools. A confusion between our economic and moral valuations of individuals threatens to lead to consequences which are damaging, it seems to me, both in an economic and a moral sense.

The plans of the present UK education minister, Michael Gove, have come in for a lot of criticism. It would be outside the scope of this piece – and indeed my competence – to go into that in detail, but it does seem to me that he is making a principled and well intentioned attempt to restore the proper distinction between those economic and moral criteria – making good use of individual ability where it can be found, without being condescending to those who are not so academic, or making the distinctions between them too rigid. And of course I haven’t addressed the issue of whether the existence of a separate private education sector is desirable – again outside the scope of this post.

Martin Luther King

Martin Luther King
(Nobel Foundation)

What, at least, all now agree on is that the original criterion of individual value we looked at – birth status – is no longer relevant. Well, almost all. Racist ideologies, of course, persist in the old attitude. A recent anniversary has reminded us of one of the defining speeches of the 20th century, that of Martin Luther King, who laid bare the failure of the USA to uphold the principles of its constitution, and famously looked forward to a time when people would be “judged not by the color of their skin but by the content of their character”. The USA, whose segregationist policies in some states he was addressing, has certainly made progress since then. But beyond the issues I have described, there are many further problems around the distinction between moral and economic values. In most societies there are those whose contribution is valued far more in the moral sense than the economic one: nurses, teachers. What, if anything, should we do about that? I don’t claim to know any easy answers.

I kicked off from the themes in Hilary Mantel’s books and embarked on a topic which I soon realised was a rather unmanageably vast one for a simple blog post. Along the way I have been deliberately contentious – please feel free to agree or disagree in the comments below. But what got me going was the way in which Mantel’s study of Cromwell takes us into the collective mind of an age when the instinctive ways of evaluating individuals were entirely different. What I don’t think anyone can reasonably disagree with is the importance of history in throwing the prejudices of our own age into a fresh and revealing perspective.

Consciousness 3 – The Adventures of a Naive Dualist

Commuting days until retirement: 408

A long gap since my last post: I can only plead lack of time and brain-space (or should I say mind-space?). Anyhow, here we go with Consciousness 3:

A high point for English Christianity in the 50s: the Queen’s coronation. I can remember watching it on a relative’s TV at the age of 5.

I think I must have been a schoolboy, perhaps just a teenager, when I was first aware that the society I had been born into supported two entirely different ways of looking at the world. Either you believed that the physical world around us, sticks, stones, fur, skin, bones – and of course brains – was all that existed; or you accepted one of the many varieties of belief which insisted that there was more to it than that. My mental world was formed within the comfortable surroundings of the good old Church of England, my mother and father being Christians by conviction and by social convention, respectively. The numinous existed in a cosy relationship with the powers-that-were, and parents confidently consigned their children’s dead pets to heaven, without there being quite such a Santa Claus feel to the assertion.

But, I discovered, it wasn’t hard to find the dissenting voices. The ‘melancholy long withdrawing roar’ of the ‘sea of faith’ which Matthew Arnold had complained about in the 19th century was still under way, if you listened out for it. Ever since Darwin, and generations of physicists from Newton onwards, the biological and physical worlds had appeared to get along fine without divine support; and even in my own limited world I was aware of plenty of instances of untimely deaths of innocent sufferers, which threw doubt on God’s reputedly infinite mercy.

John Robinson, Bishop of Woolwich (Church Times)

And then in the 1960s a brick was thrown into the calm pool of English Christianity by a certain John Robinson, the Bishop of Woolwich at the time. The brick was a book called Honest to God, which sparked a vigorous debate that is now largely forgotten. Drawing on the work of other radical theologians, and aware of the strong currents of atheism around him, Robinson argued for a new understanding of religion. He noted that our notion of God had moved on from the traditional old man in the sky to a more diffuse being who was ‘out there’, but considered that this was also unsatisfactory. Any God whom someone felt they had proved to be ‘out there’ “would merely be a further piece of existence, that might conceivably have not been there”. Rather, he says, we must approach from a different angle.

God is, by definition, ultimate reality. And one cannot argue whether ultimate reality exists.

My pencilled zig-zags in the margin of the book indicate that I felt there was something wrong with this at the time. Later, after studying some philosophy, I recognised it as a crude form of Anselm’s ontological argument for the existence of God, which is rather more elegant, but equally unsatisfactory. But, to be fair, this is perhaps missing the point a little. Robinson goes on to say that “one can only ask what ultimate reality is like – whether it… is to be described in personal or impersonal categories”. His book proceeds to develop the notion of God as in some way identical with reality, rather than as a special part of it. One might cynically characterise this as a response to atheism of the form “if you can’t beat them, join them” – hence the indignation that the book stirred in religious circles.

Teenage reality

But, leaving aside the well worn blogging topic of the existence of God, there was the teenage me, still wondering about ‘ultimate reality’, and what on earth, for want of a better expression, that might be. Maybe the ‘personal’ nature of reality which Robinson espoused was a clue. I was a person, and being a person meant having thoughts, experiences – a self, or a subjective identity. My experiences seemed to be something quite other than the objective world described by science – which, according to the ‘materialists’ of the time, was all that there was. What I was thinking of then was the topic of my previous post, Consciousness 2 – my qualia, although I didn’t know that word at the time. So yes, there were the things around us (including our own bodies and brains), our knowledge and understanding of which had been, and was, advancing at a great rate. But it seemed to me that no amount of knowledge of the mechanics of the world could ever explain these private, subjective experiences of mine (and, I assumed, of others). I was always strongly motivated to believe that there was no limit to possible knowledge – however much we knew, there would always be more to understand. Materialism, on the other hand, seemed to embody the idea of a theoretically finite limit to what could be known – a notion which gave me a sense of claustrophobia (of which more in a future post).

So I made my way about the world, thinking of my qualia as the armour to fend off the materialist assertion that physics was the whole story. I had something that was beyond their reach: I was something of a young Cartesian, before I had learned about Descartes. It was another few years before ‘consciousness’ became a legitimate topic of debate in philosophy and science. One commentator I have read dates this change to the appearance of Nagel’s paper What is it Like to be a Bat? in 1974, which I referred to in Consciousness 1. Seeing the debate emerging, I was tempted to preen myself with the horribly arrogant thought that the rest of the world had caught up with me.

The default position

Philosophers and scientists are still seeking ways of assimilating consciousness to physics: such physicalism, although coming in a variety of forms, is often spoken of as the default, orthodox position. But although my perspective has changed quite a lot over the years, my fundamental opposition to physicalism has not. I am still at heart the same naive dualist I was then. But I am not a dogmatic dualist – my instinct is to believe that some form of monism might ultimately be true, but beyond our present understanding. This consigns me to another much-derided category of philosophers – the so-called ‘mysterians’.

But I’d retaliate by pointing out that there is also a bit of a vacuum at the heart of the physicalist project. Thoughts and feelings, say its supporters, are just physical things or events, and we know what we mean by that, don’t we? But do we? We have always had the instinctive sense of what good old, solid matter is – but you don’t have to know any physics to realise there are problems with the notion. If something were truly solid it would entail that it was infinitely dense – so the notion of atomism, starting with the ancient Greeks, steadily took hold. But even then, atoms can’t be little solid balls, as they were once imagined – otherwise we are back with the same problem. In the 20th century, atomic physics confirmed this, and quantum theory came up with a whole zoo of particles whose behaviour entirely conflicted with our intuitive ideas gained from experience; and this is as you might expect, since we are dealing with phenomena which we could not, in principle, perceive as we perceive the things around us. So the question “What are these particles really like?” has no evident meaning. And, approaching the problem from another standpoint, where psychology joins hands with physics, it has become obvious that the world with which we are perceptually familiar is an elaborate fabrication constructed by our brains. To be sure, it appears to map on to the ‘real’ world in all sorts of ways, but has qualities (qualia?) which we supply ourselves.

Truth

So what true, demonstrable statements can be made about the nature of matter? We are left with the potently true findings – true in the sense of explanatory and predictive power – of quantum physics. And, when you’ve peeled away all the imaginative analogies and metaphors, these can only be expressed mathematically. At this point, rather unexpectedly, I find myself handing the debate back to our friend John Robinson. In a 1963 article in The Observer newspaper, heralding the publication of Honest to God, he wrote:

Professor Herman Bondi, commenting in the BBC television programme, “The Cosmologists” on Sir James Jeans’s assertion that “God is a great mathematician”, stated quite correctly that what he should have said is “Mathematics is God”. Reality, in other words, can finally be reduced to mathematical formulae.

In case this makes Robinson sound even more heretical than he in fact was, I should note that he goes on to say that Christianity adds to this “the deeper reliability of an utterly personal love”. But I was rather gratified to find the writer I quoted at the beginning anticipating, in this way, the concluding thoughts of my own post.

I’m not going to speculate any further into such unknown regions, or into religious belief, which isn’t my central topic. But I’d just like to finish with the hope that I have at least suggested that the ‘default position’ in current thinking about the mind is anything but natural or inevitable.

Consciousness 2 – The Colour of Nothing

Commuting days until retirement: 437

When it comes down to basics, is there just one sort of thing, or are there two sorts of thing? (We won’t worry about the possibility of even more than that.) Anyone who has done an elementary course in philosophy will know that Descartes’ investigations led him to believe that there were two sorts: mental things and physical things, and that he thus gave birth to the modern conception of dualism.

Stone lion – lifeless

As scientific knowledge has progressed over the centuries since, it has put paid to all sorts of beliefs in mystical entities which were taken to be explanations for how things are. A good example would be vitalism, the belief in a ‘principle of life’ – something that a real lion would possess and a stone lion would not. Needless to say, we now know that the real lion would have DNA, a respiratory system and so on, whose modes of operation we understand in great depth – and so the principle of life has withered away, as surplus to needs.

Descartes’ mental world, however, has been harder to kill off. There seems nothing that scientific theory can grasp which is recognisable as the ‘something it is like’ I discussed in my previous post. It’s rather like one of those last houses to go as Victorian terraces are cleared for a new development, with Descartes as the obstinate old tenant who stands on his rights and refuses to be rehoused. But the philosophical bulldozers are doing their best to help the builders of science, in making way for their objectively regular modern blocks.

Gilbert Ryle led the charge in 1949, in his book The Concept of Mind. He famously characterised dualism as the doctrine of ‘the Ghost in the Machine’: to suppose that there was some mystical entity within us corresponding to our mind was to be misled by language into making a ‘category mistake’. Ryle’s standpoint fits more or less into the area of behaviourism, also previously discussed. Then, in the 1950s, identity theory arose. The contents of your mind – colours, smells – may seem different from all that mushy stuff in your head and its workings, but in fact they are just the same thing, if perhaps seen from a different viewpoint. There’s a name, the ‘Morning Star’, for that bright star that can be seen at dawn, and another one, the ‘Evening Star’, for its equivalent at dusk; but with a little further knowledge you discover that they are one and the same.

Nowadays, while still around, the identity theory is somewhat mired in technical philosophical debate. Meanwhile brain science has made huge strides, and at the same time computing science has become mainstream. So on the one hand, it’s tempting to see the mind as the software of the brain (functionalism, very broadly), or perhaps just to attempt to show that with enough understanding of the wiring of those tightly packed nerve fibres, and whatever is chugging around them, everything can be explained. This last approach – materialism, or in its modern, science-aware form, physicalism – can take various forms, one of them being the identity theory. Or you may consider, for example, that such mental entities as beliefs, or pains, may be real enough, but are ideally explained as – or reduced to – brain/body functions. This would make you a reductionist.

But you may be more radical and simply say that these mental things don’t really exist at all: we are just kidded into thinking they do by our habitual way of talking about ourselves – folk psychology, as it’s often referred to. Then you would be an eliminativist – and it’s the eliminativists I’d like to get my philosophical knife into here. Although I don’t agree with old Descartes on that much (I’ll expand in the next post), I have a certain affinity for him, and I’m willing to join him in his threatened, tumbledown house, looking out at the bulldozers ranged across the building site of 21st century Western philosophy.

Getting rid of qualia – or not

My acer leaves

I think it would be fair to say that the arch-eliminativist is one Daniel Dennett, and it’s his treatment of qualia that I’d like to focus on. Qualia (singular quale) are those raw, subjective elements of which our sensory experience is composed (or, as Dennett would have it, we imagine it to be composed): the vivid visual experience I’m having now of the delicately coloured acer leaves outside my window; or that smell when I burn the toast. I’m thinking of Dennett’s treatment of the topic to be found in his 1988 paper Quining Qualia (QQ) and in Qualia Disqualified, Chapter 12 of his 1991 book Consciousness Explained (CE: with a great effort I refrain from commenting on the title). Now the task is to show that, when it comes to mental things, all that grey matter and its workings are all there is. But this is a problem, because when we look inside people’s skulls we don’t ever find the colour of acer leaves or the smell of burnt toast.

Dennett quotes an introductory book on brain science: ‘”Color” as such does not exist in the world: it exists only in the eye and brain of the beholder.’ But as he rightly points out, however good this book is on science, it has its philosophy very muddled. For one thing, the ‘eye and brain of the beholder’ are themselves part of the world – the world in which colour, we are told, does not exist. And eyes and brains have colours, too. But not like the acer leaves I’m looking at. There’s only one way to get to where Dennett wants to be: he has to strike out the qualia from the equation. They are really not there at all. That acer-colour quale I think I’m experiencing is non-existent. Really?

Argument 1: The beetle in the box

Maybe there is some help available to Dennett from one of the philosophical giants – Wittgenstein. Dennett calls it in, anyway, as support for the position that ‘the very idea of qualia is nonsense’ (CE, p.390). There is a famous passage in Wittgenstein’s Philosophical Investigations where he talks of our private sensations in an analogy:

Suppose everyone had a box with something in it: we call it a “beetle”. No one can look into anyone else’s box, and everyone says he knows what a beetle is only by looking at his beetle. Here it would be quite possible for everyone to have something different in his box … The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. No, one can ‘divide through’ by the thing in the box; it cancels out, whatever it is.

I don’t see how this does help Dennett. It is part of Wittgenstein’s exposition known as the private language argument. He is seeking to show that language is a necessarily public activity, and that the notion of a private language known only to its one ‘speaker’ is incoherent. I think it’s significant that the example of a sensation he uses is pain, as you’ll see if you follow the link. Elsewhere Wittgenstein considers whether someone might have a private word for one of his own sensations. But, like the pain, this is just a sensation, and there’s no publicly viewable aspect to it. But consider my acer leaves: my wife might come and join me in admiring them. We have a publicly available referent for our discussion, and if I ask her about the quality of her own sensation of the colour, she will give every appearance of knowing what I am talking about. True, I can never tell if her sensation is the same as mine, or whether it even makes sense to ask that. Nor can I tell for certain whether she really has the sensation, or is simply behaving as if she did. But I’ll leave that to Wittgenstein. His argument doesn’t seek to deny that I am acquainted with my ‘beetle’ – only that it ‘has no place in the language game’. In other words, my wife and I can discuss the acer leaves and what we think of them, but we can’t discuss the precise nature of the sensation they give me – my quale. My wife would have nothing to refer to when speaking of it. In Wittgenstein’s terms, we talk about the leaves and their colour, but our intrinsically private sensations drop out of the discussion. Does this mean the qualia don’t exist? Just a moment – I’ll have another look… no, mine do, anyway. Sorry, Dan.

Argument 2: Grown-up drinking

Bottled Qualia

Another strategy open to Dennett is to point out how our supposed qualia may seem unstable in certain ways, and subject to change. He notes how beer is an acquired taste, seeming pretty unpleasant to a child, who may well take it up with gusto later in life. Can the adult be having the same qualia as the child, if the response is so different?

This strikes a chord with me. I started to sample whisky when still a teenager because it made me feel mature and sophisticated. Never mind the fact that it was disgusting – much more important to pretend to be the sort of person I wanted to be. The odd thing is – and I have often wondered about this – that I think I can remember the moment of realisation that eventually came: “Hey – I actually like this stuff!”

So what happened? Did something about these particular qualia suddenly change, rather as if I one day licked a bar of soap and found that it tasted of strawberries? Clearly not. So maybe we could say that, although it tasted the same, it was just that I started to react to it in a different way – some neural pathway opened up in my brain that engendered a different response. There are difficulties with that idea. As Dennett puts it, in QQ:

For if it is admitted that one’s attitudes towards, or reactions to, experiences are in any way and in any degree constitutive of their experiential qualities, so that a change in reactivity amounts to or guarantees a change in the property, then those properties, those “qualitative or phenomenal features,” cease to be “intrinsic” properties, and in fact become paradigmatically extrinsic, relational properties.

He’s saying, and I agree, that we can’t mix up subjective and objective properties in this way, otherwise the subjective elements – the qualia – are dragged off their pedestal of private ineffability and rendered into ordinary, objectively viewable ones. He goes on to argue, with other examples, that the concept of qualia inevitably leads to confusions of this sort, and that we can therefore banish the confusion by banishing the qualia.

So is there another way out of the dilemma, which rescues them? As with the acer leaves, my whisky-taste qualia are incontrovertibly there. Consider another type of subjective experience – everyone probably remembers something similar. You have been working, maybe in an office, for an hour or two, and suddenly an air conditioning fan is turned off. It was a fairly innocuous noise, and although it was there you simply weren’t aware of it. But now that it’s gone, you’re aware that it’s gone. As you may know, the objective, scientific term for this is ‘habituation’; your system ceases to respond to a constant stimulus. But this time I am not going to make the mistake of mixing this objective description with the subjective one. A habituated stimulus is simply removed from consciousness – your subjective qualia do change as it fades. And something like this, I would argue, is what was happening with the whisky. To a mature palate, it has a complex flavour, or to put it another way, all sorts of different, pleasurable individual qualia which can be distinguished. These put the first, primary, sharp ‘kick’ in the flavour into a new context. But probably that kick was all that the immature version of myself was experiencing. Gradually, my qualia did change as I habituated sufficiently to that kick to allow it to recede a little and let in the other elements. There had to come some point at which I made up my mind that the stuff was worth drinking for its own sake, and not just as a means to enhance my social status.

Argument 3: Torn cardboard

Torn cardboard – matching halves

Not convinced? Let’s look at another argument. This starts with an unexpected – and ingenious – analogy: the Rosenbergs, Soviet spies in the US in the cold war era, had a system to enable their spies to verify one another’s identity: each had a fragment of cardboard packaging, originally torn halves of the same jelly package (US brand name Jell-O). So the jagged tear in each piece would perfectly and uniquely match the other. Dennett is equating our perceptual apparatus with one of the cardboard halves; and the characteristics of the world perceived with the other. The two have co-evolved. Anatomical investigation shows how birds and bees, whose nourishment depends on the recognition of flowers and berries, have colour perception, while primarily carnivorous animals – dogs and cats for example – do not. But at the same time plants have evolved flower and berry colour to enable pollination or seed dispersal by the bees or birds. The two sides evolve, matching each other perfectly, like the cardboard fragments. And of course we are omnivores, and have colour perception too. When hunting was scarce, our ability to recognise the colour of a ripe apple could have been a life-and-death matter. And so it would have been for the apple species too, as we unwittingly propagated its seeds. As he puts it:

Why is the sky blue? Because apples are red and grapes are purple, not the other way around. (CE p378)

A lovely idea, but what’s the relevance? His deeper intention with the torn cardboard analogy is to focus on the fact that, if we look at just one of the halves on its own, we are hard put to see anything but a piece of rubbish without purpose or significance – it is given validity only by its sibling. Dennett seeks to demote colour experiences, considered on their own, to a similarly nullified status. Here’s a crucial passage. ‘Otto’ is Dennett’s imaginary defender of qualia – for present purposes he’s me:

And Otto can’t say anything more about the property he calls pink than “It’s this!” (taking himself to be pointing “inside” at a private, phenomenal property of his experience). All that move accomplishes (at best) is to point to his own idiosyncratic color-discrimination state, a move that is parallel to holding up a piece of Jell-O box and saying that it detects this shape property. Otto points to his discrimination-device, perhaps, but not to any quale that is exuded by it, or worn by it, or rendered by it, when it does its work. There are no such things. (CE p383 – my italics).

I don’t think Dennett earns the right to arrive at his concluding statement. There seem to me to be two elements at work here. One is an appeal to the Wittgensteinian beetle argument we considered (‘…taking himself to be pointing “inside”…’), which I tried to show does not do Dennett’s work for him. The second appears to be simply a circular argument: if we decide to assert that Otto is referring not to any private experience but to something objective (a ‘color-discrimination state’) then we have only banished his qualia by virtue of this assertion. The fact that we can’t be aware of them for ourselves does not change this. The function of the cardboard fragment is an objective one, inseparable from its identification of its counterpart, just as colour perception as an objective function is inseparable from how it evolved. But there’s nothing about the cardboard that corresponds to subjective qualia – the analogy fails. When I think of my experience of the acer leaves I am not thinking of the ‘color-discrimination state’ of my brain – I don’t know anything about that. In fact it’s only from the science I have been taught that I know that there is any such thing. (This final notion nods to another well-known argument – this time in favour of qualia – Frank Jackson’s ‘knowledge’ argument; I’ll leave you to follow the link if you’re interested.)

But this being just a blog, and this post having already been delayed too long, I’ll content myself with having commented on just three arguments from one physicalist philosopher. And so I am still there with Descartes in his tottering house, resisting its demolition. In the next post I’ll enlarge on why I am so foolhardy and perverse.

Consciousness 1 – Zombies

Commuting days until retirement: 451

Commuting at this time of year, with the lengthening mornings and evenings, gives me a chance to lose myself in the sight of tracts of England sliding across my field of vision – I think of Philip Larkin in The Whitsun Weddings:  ‘An Odeon went past, a cooling tower, and someone running up to bowl…’  (His lines tend to jump into my mind like this). It’s tempting to enlarge a scene like this into a simile for life, like the one that Larkin’s poem leads into. Of course we are not just passive observers, but the notion of life as a film show – a series of scenes progressing past your eyes – has a certain curious attractiveness.

A rather more spectacular view than any I get on my train journey. Photo: Yamaguchi Yoshiaki, Wikimedia Commons

Now imagine that, as I sit in the train, I am not quite a human being as you think of one. Instead I’m a cleverly constructed robot who appears in every way like a human but, being a robot, has something important missing. The objects outside the train form images on some sensor in each of my pseudo-eyes, and the results may then be processed by successive layers of digital circuitry which perform ever more sophisticated interpretative functions. Perhaps these resolve the light patterns that entered my ‘eyes’ into discrete objects, and trigger motor functions which cause my head and eyes to swivel and follow them as they pass. Much, in fact, like the real me, idly watching the scenes sliding by.

Now let’s elaborate our robot to have capabilities beyond sitting on a train and following the objects outside; now it can produce all the behaviour that any human being can. This curious offspring of a thought-experiment is what philosophers refer to as a zombie – not the sort in horror films with the disintegrating face and staring eyeballs, but a creature who may be as well behaved and courteous as any decent human being. The only difference is that, despite (we presume) the brain churning away as busily as anyone else’s, there are no actual sensations in there – none of those primary, immediate experiences with a subjective quality: the fresh green of a spring day, or the inner rapture of an orgasm. So what’s different? There are a number of possibilities, but, as you will have guessed, the one I am thinking of is that inner, subjective world of experience we all have, but assume that machines do not. This is well expressed by saying that there’s something that it is like to be me, but not something that it’s like to be a machine.(1) The behaviour is there all right, but that’s all. In the phrase I rather like, the lights are on but nobody’s at home.

Many people who think about the question nowadays, especially those of a scientific bent, tend to conclude that, of course, we must ultimately be nothing but machines of one sort or another. We have discovered many – perhaps most – of the physical principles upon which our brains and bodies work, and we have traced their evolution over time from simple molecular entities. So there we are – machines. But conscious machines – machines that there is something it is like to be? It has frequently been debated whether or not such a machine with all these capabilities would ipso facto be conscious – whether it would have a mind. Or, in other words, whether we could in principle build a conscious machine. (There are some who speculate that we may already have done so.)

One philosophical response to this problem is that of behaviourism, a now justly neglected philosophical position.(2) If you are a behaviourist you believe that your mind, and your mental activity – your thoughts – are defined in terms of your behaviour. The well-known Turing Test constitutes a behaviourist criterion, since it is based on the principle that a computer system whose responses are indistinguishable from those of a human is taken for all practical purposes to have a mind. (I wrote about Turing a little while ago – but here I part company with him.) And for a behaviourist, the phrase ‘What it is like to be…’ can have no meaning, or at best a rather convoluted one based on what we say or do; but its meaning is plain and obvious to you or me. It’s difficult to resist repeating the old joke about behaviourism: two post-coital behaviourists lie in bed together, and one says ‘That was great for you – how was it for me?’ But I take the view of behaviourism that the joke implies – it’s absurd.

Behaviourists, however, can’t be put down as burglars or voyeurs: they don’t peer into the lighted windows to see what’s going on inside. It’s enough for them that the lights are on. For them the concept of a zombie is either meaningless or a logical impossibility. There is, however, another position on the nature of the mind, much more popular in contemporary thought, which has a different sort of problem with the notion of a zombie. I’m thinking of eliminative materialism.

Well, as I write this post, I feel it extending indefinitely as more ideas churn through that machine I refer to as my brain. So to avoid it becoming impossibly long, and taking another three weeks to write, I’ll stop there, and just entitle this piece Part 1. Part 2 will take up the topic of eliminative materialism.

In the meantime I’d just like to leave one thought: I started with a snatch of Philip Larkin, and I’ve always felt that poetry is in essence a celebration of conscious experience; without consciousness I don’t believe that poetry would be possible.


(1) The phrase is mainly associated with Thomas Nagel, and his influential 1974 paper What is it Like to be a Bat? But he in turn attributes it to the English philosopher Timothy Sprigge.

(2) I’m referring to the philosophical doctrine of behaviourism – distinct from, but related to the psychological one – J B Watson, B F Skinner et al.

Fate – Grim or Otherwise

Commuting days until retirement: 469

The existence of each one of us, and the crucial events of our lives, are entirely dependent upon a chain of often minor and unrecorded preceding circumstances. Yes – a rather pompous-sounding and trivial observation, but when seen from a subjective point of view it can seem to assume a more profound significance. What prompts this is the novel I’ve just finished, Kate Atkinson’s Life After Life.

It’s a theme that often surfaces in contemporary fiction: in Making it Up Penelope Lively takes the events of her own life and allows them to develop in plausible directions other than the ones they actually took; David Mitchell (the novelist, not the TV personality), in his first novel Ghostwritten, traces interlinked chains of causality around the globe in which, to give just one example, a fleeting encounter in a London street has critical consequences for the future of humanity.

Kate Atkinson has been a favourite of mine since her first novel, Behind the Scenes at the Museum. She has a Dickensian ability to create an extensive collection of characters, each of whom is entirely convincing, and whose interactions with each other may surprise you, but are never less than believable. In fact I find her characters more realistic, and less caricatured, than those of Dickens.

In Life After Life her realism becomes a little more magical. It concerns Ursula Todd, born in 1910 and witnessing the events of the 20th century in a variety of different ways (or not at all) as repeated versions of her life take different courses. I’m not giving anything much away – as both of these events come at the very beginning of the book – if I tell you that in one life she dies at birth, and in another gets to assassinate Hitler before he becomes Chancellor of Germany. Ursula’s large and believable family (Atkinson is particularly good at families) also individually suffer a variety of fates alongside Ursula’s own. The close juxtaposition of earthy reality and fanciful metaphysics comes off, for me, entirely successfully.

So what about the metaphysics of real life? Just to consider this blog, it owes its existence to a whole host of preceding factors. Among them is Sir Tim Berners-Lee, as is the precise trajectory of a shell in the Battle of the Somme in 1916, which, had it been very slightly different, would have resulted in the death of my grandfather. As it was, it landed close enough to him to give him what all WW1 soldiers hoped for – a ‘Blighty wound’, which was a passport back home.

A scene at Pozières during the Battle of the Somme

Torquay in the early 20th century

But the first my grandfather knew of it was when he came back to consciousness in a hospital in Torquay. Like most who have been through experiences like his, he never said very much about them. However some measure of what he had been through lies in the fact that he felt moved to return to Torquay for a holiday nearly every year for the rest of his life. His son, who would have been my uncle, was not so lucky, though. He lost his life in the first months of the second world war at the age of 19. I was told how likeable and outgoing he was as a character, and I’m sure he would have gone on to have a family. I sometimes spare a thought for my non-existent cousins.

Photo: Steve Cadman

Most of us are aware of certain fateful moments in our own lives – at any rate in retrospect. The one that often returns to me took place when I was on a work trip to New York. My hotel room had a view over the United Nations, giving me an almost cheesily memorable backdrop for my thoughts as I sat there. And my thoughts were about a woman of my acquaintance, and how a postcard suggesting we should go out together would be received, if I sent it to her.

The indecision finally resolved itself, and I sent the card. It gives me a curious, vertiginous feeling to think that the existence of my very real, and now adult, children hung in the balance at that moment. You may wonder what their reaction would be on reading this. It would be: “Oh God, not that story again.”

Well, my nearly-wasn’t wife and I are shortly off to Venice for a long weekend. The idea is to have a relaxing break, but having just run the gauntlet of the Ryanair online check-in process we are starting to wonder. Anyhow, I don’t expect to encounter any life-changing events there; but if I return with any memories worth mentioning they may find their way on to these pages.

Memento Mori

Commuting days until retirement: 477

After my stay in what is officially an Area of Outstanding Natural Beauty, here’s some beauty of a less conventional kind. I glimpsed it out of the train window, and it stayed in my thoughts, and maybe dreams, for a few days afterwards. Since it’s right by a station I was able to return and photograph it.

broken shed
At this point I wondered whether I should simply leave it there for you to enjoy (or not), but since writing is what this blog is really about, I’m going to go ahead and write about it.

What this perfectly exemplifies for me is all those abandoned and forgotten enclaves of wilderness that are constantly close to us, especially in an urban environment. Many of them seem to be created by the presence of railway lines, which carve out little squares and triangles of unusable, inaccessible land which grow weeds and irresistibly attract plastic bottles, tin cans and all the detritus of the surrounding activity. Activity that’s hard to escape from if you need to earn a living, serving a multiplicity of ephemeral but urgent needs. (I’m sounding like a Church of England sermon.) I wanted to say that forgotten outposts like the one pictured, by contrast, lie outside the frantic zone, and just are. This example is just by the main line where hundreds of thousands of commuters pass daily with their laptops, iPads, dry cleaned suits and power hair styles. Some of them, like me, must give it their attention as they stare out of the window.

I like the way that it immediately changes the perspective that my mind is locked into much of the time. The effect is like one of those stark portraits of an elderly person on the fringes of life, usually from a third world setting, that you often see in the work of a professional photographer. You are struck by the deep wrinkles, the inscrutable expression and the steady gaze. Here it’s the thoroughly wrecked appearance, as well as the utter unregarded dereliction, that evokes some obscure emotional response. Dirt and decay. How did it come to suffer not only broken windows and a holed roof, but also a total structural dislocation, as if picked up and thrown down by a giant hand? It seems to mock the vertical regularity of the flats visible behind it.

It has itself been a regular, designed artefact, originally formed out of the surrounding chaos only to be irresistibly drawn back into it – and I think that’s the morbid attraction of a sight like this. For the purposeful, dressed and coiffured commuters who pass by daily it’s a reminder of the disorder and death on the fringes of their assiduously chased aspirations. I’m reminded of the famously death-averse (and dead) poet Philip Larkin, and his poem titled with a jaunty irony Next, Please. He characterises our hopes and ambitions:

Watching from a bluff the tiny, clear
Sparkling armada of promises draw near.

But concludes:

Only one ship is seeking us, a black-
Sailed unfamiliar, towing at her back
A huge and birdless silence.

Too much to read into a picture of an old shed? If I have made anyone unnecessarily gloomy I apologise. Perhaps blogs should carry warnings, like films or TV programmes: This post contains thoughts that some readers may find depressing. But I like a good wallow.

Right Now

Commuting days until retirement: 491

Right now I am at home – another day off – waiting for some gravel to be delivered for our front drive. Right now, you are reading this (well I hope somebody is, or will). You can see I am having trouble with tenses here, because your ‘now’ is not my ‘now’. I know you are not reading this now, because I haven’t published it. But you know you are reading it now.

This all might seem a bit trivial and pointless, but stay with me for a bit. The notion I am circling around is the curious status of this concept of now. Let’s approach it another way: imagine yourself back at school, in a physics lesson. This may seem either an enticing or an entirely appalling prospect to you, but please indulge my little thought experiment. The teacher has chalked a diagram up on the blackboard (well, that was the cutting edge of presentation technology when I was at school). There’s the diagram up on the right. t1 and t2 obviously represent two instants of time for the ball, in its progress down the slope.

Somehow you are managing to stay awake, just. But in your semi-stupor you find yourself putting up your hand.

‘Yes?’, says the teacher irritably, wondering how there could be any serious question to be asked so far, and expecting something entirely facetious.

‘Er – which one is now?’ you ask. The teacher could perhaps consider your question carefully, for the sake of any deep conceptual problem concealed within it, but instead she wonders why she bothered to get up this morning.

Not in the curriculum

Warwick University

However there is a serious philosophical issue here – admittedly not in the physics curriculum, to be fair to the teacher. And the reason it’s not in the curriculum is that the concept of ‘now’ is alien to physics. ‘Now’ is entirely confined to our subjective perception of the world. Think of the earth in its nascent state, a ball of molten lava and all that. Does it even make sense to imagine there was a ‘now’ then? We can say that this red-hot lava whirlpool formed before that one did – but we can’t say that either of them is forming now. Well of course not, in the obvious sense – it was four and a half billion years ago. But you could say that there was a time when your first day at school was ‘now’; and you can also say that there was a time when the execution of Marie Antoinette was ‘now’ – for somebody, that is, even perhaps for the unfortunate woman herself. But as for the formation of the earth – there was no one around for whom it could be a ‘now’. (Small green men excepted.)  You’re thinking of it as a ‘now’, I expect, but that’s because in your imagined scenario you are in fact there, as some sort of implicit presence suspended in space, viewing the proceedings.

It’s odd to try and visualise an exclusively objective world – one without a point of view – “The View from Nowhere” as the philosopher Thomas Nagel has put it; it’s the title of one of his books. In such a world there is no ‘now’, and therefore no past and no future, but only a ‘before’ and ‘after’ relative to any arbitrary point in time. And I was always struck by the way that T. S. Eliot, in Burnt Norton, from his Four Quartets, associates ‘time past and time future’ with the poetic and spiritual, and ‘time before and time after’ with the prosaic and mundane.

Language

Our language – indeed most languages – is built around the ‘now’, in that tenses correspond to past and future. Without the subjective sense of a ‘now’, language would surely work in a very different way. Interestingly, there is a possibly relevant example from the Pirahã people of the Amazon, who have been studied by the controversial linguist Daniel Everett. Their relationship to the passage of time seems to be different from ours – Everett claims that they have no real sense of history or of planning for the future, and so live in a kind of perpetual present. Correspondingly, inflections in their utterances are related not to temporal comparisons, like our tenses, but to the surrounding circumstances – e.g. whether something being described is right here, or is known first-hand, or has been reported by some other person. (Everett originally went to them as a Christian missionary, but was dismayed to find that they had no interest at all in Jesus unless Everett could claim to have met him.)

So all this would seem to support a philosopher I remember reading a long time ago. I don’t remember who he was, and can no longer find the passage. But I remember the sentence “Our language has a tiresome bias in favour of time.” I think this man was from the old-school style of linguistic philosophy, which held that most philosophical problems can be resolved into confusions caused by our use of language – and so time concepts were just another example of this. But I don’t think this is at all adequate as an approach, Pirahã or no Pirahã. However my language works, I would still have a sense of the differing character of past events, which cannot be changed, and future events, which mostly cannot be known – and of course a present, a now, which is the defining division between them. I would be surprised if the experience of a Pirahã person did not include that.

Space and time

How about another attack on the problem – to make an analogy between the spatial and the temporal? The spatial equivalent of ‘now’ is ‘here’. And there doesn’t seem to be any perplexity about that. ‘Here’ is where I am, er, now. Oh dear. Maybe these aren’t so easy to separate out. Perhaps ‘here’ seems simpler because we each have our own particular ‘here’. It’s where our body is, and that’s easily seen by others. And we can change it at will. But we all share the same ‘now’, and there’s not a lot we can do to change that. There is, of course, the remote possibility of relativistic time travel. I could in some sense change my ‘now’ relative to yours – but when I come back to earth I am back in the same predicament – just one that differs slightly in degree.

But do we all share the same ‘now’? Here’s a slightly more disturbing thought. I have made out that my own sense of ‘now’ is confined to my own private experience, and doesn’t exist in the world ‘out there’. And the same is true of you, of course. I can see and hear you, and I find from your behaviour and the things you say that you are experiencing the same, contemporaneous events that I am. But it’s not your private experience, or your ‘now’, that I am seeing – only your body. And your body – including of course your brain – is very much a part of the world ‘out there’. It’s only your private experience which isn’t, and I can’t experience that, by definition. So how do I know that your ‘now’ is the same as mine? Do we each float around in our own isolated time bubbles?

I think perhaps there is a solution of some sort to this. If your ‘now’ is different from mine, it must therefore be either before it or after it. Let’s suppose it’s an hour after. Then if my ‘now’ is at 4.30, yours is now at 5.30. But of course there’s a problem with the now that I have put in bold. It doesn’t refer to actual time, but to a sort of meta-time by which we mark out time itself. And how could this make sense? It’s rather like asking “how fast does time flow?” when there is no secondary or meta-time by which we could measure the ‘speed’ of normal time.

So perhaps this last idea crumbles into nonsense. But I still believe that, in the notion of ‘now’ there is a deep problem, which is one aspect of the more general mystery of consciousness. Do you agree? Most don’t.

But right now, the gravel is here, and is spread over the drive. So at least I’ve managed to do something more practical and down-to-earth today than write this post. And that’s a little bit of my past – or what is now my past – that I can be proud of.