Linguistic Lingering, or Languor in Language

Like all other crusty sexagenarians, I get irritated by certain trends in the evolution of the language I speak. I suppose it’s natural to retain a deeply held desire to hold on to the speech cadences we suck in with our mothers’ milk. So when you’ve been around for more than half a century you inevitably encounter an increasing number of linguistic innovations which you feel reluctant to adopt. Some, of course, are refreshing and attractive; but it’s when certain phrases I’m used to are subverted or distorted by changing usage that I get that instinctive feeling of revulsion.

Here’s an example: when getting tired of something, my parents were always “bored with” it, or maybe “fed up with” it. And so, of course, was I. But nowadays it’s virtually universal to be “bored of” or “fed up of”. Of course, the prepositions we use are quite arbitrary: lately I’ve been learning some Italian, and knowing which preposition to use in each situation can be quite difficult for a beginner: ‘su’, ‘in’, ‘da’, ‘di’, ‘a’…? It just has to be learnt. And it can be quite important: ask for ‘un bicchiere di vino’ and you’ll be given a glass of wine; but if instead you go for ‘un bicchiere da vino’ what you’ll get is a wine glass.

But the conventions for our choice of prepositions are really quite arbitrary; and with the bored/fed up example there’s no danger of mistaking the meaning, so what am I making a fuss about? Well, my problem is that every time I hear the newer version I get that visceral, chalk-squeaking-on-a-blackboard spasm in the gut, and I can’t help it. And don’t get me started on some of those other verbal habits that so many of my generation love to hate: ‘phenomena’, ‘criteria’ or ‘media’ used as singular nouns; or ‘disinterested’ to mean ‘uninterested’, ‘infer’ to mean ‘imply’ or (heard absolutely everywhere now) ‘refute’ to mean ‘deny’.

Of course all those who protest about such things are at pains to rationalise their distaste, and bring to bear an argued justification. The late philosopher Anthony Flew puts it this way:

If we oafishly employ our verbal chisels as verbal screwdrivers, we thereby unfit them for the job to which they are best suited. So what do we use for a chisel when a chisel is what we need? (Anthony Flew, Thinking about Thinking).

Well, true, I suppose, as far as it goes; once all your chisels are well and truly blunted there’s no means of resharpening them, in this instance at least. But language is endlessly innovative; it will always be rummaging around in the toolbox for replacements, or perhaps fashioning entirely new instruments. I think that what is really bothering Flew is that old bred-in, instinctive revulsion that kicks in when he detects the linguistic ground on which he originally learned to walk slipping and sliding around beneath his feet. Trust me, I know – I feel it myself.

Nevertheless, our standard sources often take a more relaxed view. The Oxford Dictionary, for example, accepts the shift in meaning of ‘disinterested’ – on the basis that a dictionary is in the business of reflecting usage, rather than dictating it. But do we pedants take that lying down? Oh no. There’s always that old fallback, ‘the purity of the language’.

‘What purity?’, you might justifiably ask. English, of all languages, has the most tenuous claim to this quality. As it has memorably been put:

The problem with defending the purity of the English language is that English is about as pure as a cribhouse whore. We don’t just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and riffle through their pockets for new vocabulary. (Attributed to the Canadian writer James Davis Nicoll, from an internet forum in 1990).

So that’s one way of finding your replacement chisels – nick them out of the back of the nearest unsuspecting linguistic builder’s van.

A more recent self-appointed language policeman is Simon Heffer, whose book Strictly English: The Correct Way to Write…and Why It Matters offers a comprehensive guide to ‘correct’ English. There is, to be fair, much reasonable common-sense advice on clarity of expression between its covers, but once again, Heffer seems to share with me a distaste for deviations from what seem rather arbitrary rules – although he takes the crusade much further than I would dare; many critics have pointed out that his feet seem to be planted firmly in the early 1900s. And of course, it’s always fun to see whether these guardians of supposed purity practise what they preach. Here, unfortunately, Heffer is rather open to ridicule on a number of counts. For example, on page 49 he tells us sternly:

The phrase each other can only apply to two people or things – “John and Mary wrote to each other” is correct but “John, Mary and Jane wrote to each other” is not. “John, Mary and Jane wrote to one another” is.

If Heffer is offended when this rule is flouted he must have had a particularly strait-laced education; I must admit it’s a new one to me. But on page 189 we find Heffer saying:

…I advise my colleagues on The Daily Telegraph to bear in mind the sensitivities of the readers, because we would like them to continue to buy the newspaper and not feel alienated by its diction. So our readers communicate with each other on writing paper, not notepaper.

As one critic has pointed out, if we give Heffer the benefit of the doubt and assume he is not breaking his own rule, we can only conclude that The Daily Telegraph has only two readers. But I’m inclined to forgive him, if only because on page 119 he stipulates that ‘One is bored by or with something, never of it’, and that ‘One becomes fed up with things, not of them.’ (Hurray!)

But of course it’s true that without rules of some sort, we’d have no consistency at all and would be unable to understand each other – sorry, one another. As in so many other fields, the art is in finding creative ways to break these rules, thus opening up a whole new range of expressive possibilities. It’s rather like that graphic technique where you position a frame around an illustration, and then let the contents burst outside it – the effect always works, and injects extra vigour into the subject in a way that wouldn’t have been achievable unless the frame were there in the first place. Rules can be used to great effect when they are broken, just as the frame is given a new purpose when its more obvious function is subverted.

However, with my pedant’s hat on I’d point out that rule breaking is more often a result of what Flew in the passage above called ‘oafishness’, and not the exercise of creative flair. But either way, it happens. It seems to me inevitable that there will always be tension between natural variety of expression and the restraining force of a rule framework. I am put in mind of one of the illuminating metaphors which the philosopher Wittgenstein was apt to employ. He compares those notions we accept implicitly – ‘bedrock’ notions as we might call them – with the rock of a river-bed, which guides the river’s flow of water – our day-to-day thoughts: “But I distinguish between the movement of the waters on the river-bed and the shift of the bed itself; though there is not a sharp division of the one from the other.” (On Certainty, §97) The rules of language seem to be a specific example of Wittgenstein’s idea: the rules of our language determine how we express ourselves, but our everyday discourse, in turn, gradually wears away and remoulds the rock of the rules themselves; the river changes course.

The Heffers of this world (and, I’m afraid, the Duncommutins) will always be vainly attempting to shore up the banks and maintain that river in the same course – but they will ultimately be defeated by the torrent. The fact is that I still prefer to say “fed up with” and “bored with”. Like any self-respecting dinosaur I shall remain set in my ways, and addicted to the comforting sound of the language I am used to. Until, that is, I become extinct.

Freedom and Purpose

Commuting days until retirement: 34

Retirement, I find, involves a lot of decisions. This blog shows that the important one was taken over two years ago – but there have been doubts about that along the way. And then, as the time approaches, a whole cluster of secondary decisions looms. Do I take my pension income by this method or that method? Can I phase my retirement and continue part-time for a while? (That one was taken care of for me – the answer was no. I felt relieved; I didn’t really want to.) So I am free to make the first of these decisions, but not the second. And that brings me to what this post is about: what it means when we say we are ‘free’ to make a decision.

I’m not referring to the trivial sense, in which we are not free if some external factor constrains us, as with my part-time decision. It’s that more thorny philosophical problem I’m chasing, namely the dilemma as to whether we can take full responsibility as the originators of our actions; or whether we should assume that they are an inevitable consequence of the way things are in the world – the world of which our bodies and brains are a part.

It’s a dilemma which seems unresolved in modern Western society: our intuitive everyday assumption is that the first is true; indeed our whole system of morals – and of law and justice – is founded on it: we are individually held responsible for our actions unless constrained by external circumstances, or perhaps some mental dysfunction that we cannot help. Yet in our increasingly secular society, majority educated opinion drifts towards the materialist view – that the traditional assumption of freedom of the will is an illusion.

Any number of books have been written on how these approaches might be reconciled; I’m not going to get far in one blog post. But it does seem to me that this concept of freedom of action is far more elusive than is often accepted, and that facile approaches to it often end up by missing the point altogether. I would just like to try and give some idea of why I think that.

Early in Ian McEwan’s novel Atonement, the child writer Briony finds herself alone in a quiet house, in a reflective frame of mind:

Briony Tallis depicted on the cover of Atonement

She raised one hand and flexed its fingers and wondered, as she had sometimes before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge. She brought her forefinger closer to her face and stared at it, urging it to move. It remained still because she was pretending, she was not entirely serious, and because willing it to move, or being about to move it, was not the same as actually moving it. And when she did crook it finally, the action seemed to start in the finger itself, not in some part of her mind. When did it know to move, when did she know to move it?  There was no catching herself out. It was either-or. There was no stitching, no seam, and yet she knew that behind the smooth continuous fabric was the real self – was it her soul? – which took the decision to cease pretending, and gave the final command.

I don’t know whether, at the time of writing, McEwan knew of the famous (or infamous) experiments of Benjamin Libet, conducted some 18 years before the book was published. McEwan is a keen follower of scientific and philosophical ideas, so it’s quite likely that he did. Libet, who had been a neurological researcher since the early 1960s, designed a seminal series of experiments in the early eighties in which he examined the psychophysiological processes underlying the experience McEwan evokes.

Subjects were hooked up to detectors of brain impulses, and then asked to press a key or take some other detectable action at a moment of their own choosing, during some given period of time. They were also asked to record the instant at which they consciously made the decision to take action, by registering the position of a moving spot on an oscilloscope.

The most talked about finding of these experiments was not only that there was an identifiable electrical brain impulse associated with each decision, but that it generally occurred before the reported moment of the subject’s conscious decision. And so, on the face of it, the conclusion to be drawn is that, when we imagine ourselves to be freely taking a decision, it is really being driven by some physical process of which we are unaware; ergo free will is an illusion.

Benjamin Libet

But of course it’s not quite that simple. In the course of his experiments Libet himself found that sometimes there was an impulse looking like the initiation of an action which was not actually followed by one. It turned out that in these cases the subject had considered moving at that moment but decided against it; so it’s as if, even when there is some physical drive to action, we may still have the freedom to veto it. Compare McEwan’s Briony: ‘It remained still because she was pretending, she was not entirely serious, and because willing it to move, or being about to move it, was not the same as actually moving it.’ And this description is one that I should think we can all recognise from our own experience.

There have been other criticisms: if a subject may be deluded about when their actions are initiated, how reliable can their assessment be of exactly when they made a decision? (This from arch-physicalist Daniel Dennett). We feel we are floating helplessly in some stirred-up conceptual soup where objective and subjective measurements are difficult to disentangle from one another.

But you may be wondering all this time what these random finger crookings and key pressings have to do with my original question of whether we are free to make the important decisions which can shape our lives. Well, I hope you are, because that’s really the point of my post. There’s a big difference between these rather meaningless physical actions and the sorts of voluntary decisions that really interest us. Most, if not all, significant actions we take in our lives are chosen with a purpose. Philosophical reveries like Briony’s apart, we don’t sit around considering whether to move our finger at this moment or that moment; such minor bodily movements are normally triggered quite unconsciously, and generally in the pursuit of some higher end.

Rather, before opting for one of the paths open to us, there is some mental process of weighing up and considering what the result of each alternative might be, and which outcome we think it best to bring about. This may be an almost instantaneous judgement (which way to turn the steering wheel) or a more extended consideration of, for example, whether I should arrange my finances to my own maximum advantage, or to that of my family after my death. In either case I am constrained by a complicated network of beliefs, prejudices and instincts, some of which I am probably only slightly consciously aware of, if at all.

Teasing out the meaning of what it is for a decision to be ‘free’ in this context is evidently very difficult, and certainly not something I’m going to try and achieve here, even if I could. But what is clear is that an isolated action like crooking your finger or pressing a button at some random moment, and for no specific purpose, has very little in common with the decisions by which we order our lives. It’s extremely difficult to imagine any objective experiment which could reliably investigate the causes of those more significant choices.

David Hume

Immanuel Kant

So maybe we are driven towards the philosopher Hume’s view that ‘reason is, and ought only to be the slave of the passions’. But I find the Kantian view attractive – that we can objectively deduce a morally correct course of action from our own existence as rational, sentient beings. Perhaps our freedom somehow consists in our ability to navigate a course between these two – to recognize when our ‘passions’ are driving us in the ‘right’ direction, and when they are not. Or that when we have conflicting instincts, as we often do, there is the potential freedom to rationally adjudicate between them.

Some have attempted to carve out a space for free will in a supposedly deterministic universe by pointing out the randomness of quantum events and suchlike as the putative first causes of action. But this is an obvious fallacy. If our actions were bound by such meaningless occurrences, there is no sense in which we could be considered free at all. However this perspective does, it seems to me, throw some light on the Libet experiments. If we are asked to take random, meaning-free decisions, is it surprising that we then appear to be subjugating ourselves to whatever random, purposeless events might be taking place in our nervous system?

Ian McEwan must have had in mind the dichotomy between meaningless, consequence-free actions and significant ones, and how we can ascribe responsibility. The plot of Atonement, as its title hints, eventually hinges on the character Briony’s own sense of responsibility for those of her actions that are significant in a broader perspective. But as we are introduced to her, McEwan has her puzzling over the source of those much more limited impulses that do not spring from any sort of rationale.

Recently I wrote about Martin Gardner, a strict believer in scientific rigour but also in metaphysical truths not capable of scientific demonstration, and his approach appeals to me. Free will, he asserts, is inseparable from consciousness:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will. Persons completely paralyzed can decide what to think about or when to blink their eyes. Nor can I imagine myself having free will without being conscious. (From The Whys of a Philosophical Scrivener, Postscript)

At the beginning of his chapter on free will he refers to Wittgenstein’s doctrine that only those questions which can meaningfully be asked can have answers, and what remains cannot be spoken about, continuing:

The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

The chapter examines a wide range of views before restating Gardner’s own position.  ‘Indeed,’ he says, ‘it was with a feeling of enormous relief that I concluded, long ago, that free will is an unfathomable mystery.’

It will be with another feeling of enormous relief that I will soon have a taste of freedom of a kind I haven’t before experienced; but will I be truly free? Well, I will at least have more time to think (freely or otherwise) about it.

The Mathematician and the Surgeon

Commuting days until retirement: 108

After my last post, which, among other things, compared differing attitudes to death and its aftermath (or absence of one) on the part of Arthur Koestler and George Orwell, here’s another fruitful comparison. It seemed to arise by chance from my next two commuting books, and each of the two people I’m comparing, as before, has his own characteristic perspective on that matter. Unlike my previous pair both could loosely be called scientists, and in each case the attitude expressed has a specific and revealing relationship with the writer’s work and interests.

The Mathematician

The first writer, whose book I came across by chance, has been known chiefly for mathematical puzzles and games. Martin Gardner was born in Oklahoma USA in 1914; his father was an oil geologist, and it was a conventionally Christian household. Although not trained as a mathematician, going instead into a career as a journalist and writer, Gardner developed a fascination with mathematical problems and puzzles which informed that career – hence the justification for his half of my title.

Gardner as a young man (Wikimedia)

This interest continued to feed the constant stream of books and articles he wrote, and he was eventually asked to write the Scientific American column Mathematical Games, which ran from 1956 until the mid 1980s, and for which he became best known; his enthusiasm and sense of fun shine through the writing of these columns. At the same time he was increasingly concerned with the many types of fringe beliefs that had no scientific foundation, and was a founder member of CSICOP, the organisation dedicated to the exposing and debunking of pseudoscience. Back in February last year I mentioned one of its other well-known members, the flamboyant and self-publicising James Randi. By contrast, Gardner was mild-mannered and shy, averse from public speaking and never courting publicity. He died in 2010, leaving behind him many admirers and a two-yearly convention – the ‘Gathering for Gardner’.

Before learning more about him recently, and reading one of his books, I had known his name from the Mathematical Games column, and heard of his rigid rejection of things unscientific. I imagined some sort of skinflint atheist, probably with a hard-nosed contempt for any fanciful or imaginative leanings – however sane and unexceptionable they might be – towards what might be thought of as things of the soul.

How wrong I was. His book that I’ve recently read, The Whys of a Philosophical Scrivener, consists of a series of chapters with titles of the form ‘Why I am not a…’ and he starts by dismissing solipsism (who wouldn’t?) and various forms of relativism; it’s a little more unexpected that determinism also gets short shrift. But in fact by this stage he has already declared that

I myself am a theist (as some readers may be surprised to learn).

I was surprised, and also intrigued. Things were going in an interesting direction. But before getting to the meat of his theism he spends a good deal of time dealing with various political and economic creeds. The book was written in the mid 80s, not long before the collapse of communism, which he seems to be anticipating (Why I am not a Marxist). But equally he has little time for Reagan or Thatcher, laying bare the vacuity of their over-simplistic political nostrums (Why I am not a Smithian).

Soon after this, however, he is striding into the longer grass of religious belief: Why I am not a Polytheist; Why I am not a Pantheist – so what is he? The next chapter heading is a significant one: Why I do not Believe the Existence of God can be Demonstrated. This is the key, it seems to me, to Gardner’s attitude – one to which I find myself sympathetic. Near the beginning of the book we find:

My own view is that emotions are the only grounds for metaphysical leaps.

I was intrigued by the appearance of the emotions in this context: here is a man whose day job is bound up with his fascination for the powers of reason, but who is nevertheless acutely conscious of the limits of reason. He refers to himself as a ‘fideist’ – one who believes in a god purely on the basis of faith, rather than any form of demonstration, either empirical or through abstract logic. And if those won’t provide a basis for faith, what else is there but our feelings? This puts Gardner nicely at odds with the modish atheists of today, like Dawkins, who never tires of telling us that he too could believe if only the evidence were there.

But at the same time he is squarely in a religious tradition which holds that ultimate things are beyond the instruments of observation and logic that are so vital to the secular, scientific world of today. I can remember my own mother – unlike Gardner a conventional Christian believer – being very definite on that point. And it reminds me of some of the writings of Wittgenstein; Gardner does in fact refer to him, in the context of the free will question. I’ll let him explain:

A famous section at the close of Ludwig Wittgenstein’s Tractatus Logico-Philosophicus asserts that when an answer cannot be put into words, neither can the question; that if a question can be framed at all, it is possible to answer it; and that what we cannot speak about we should consign to silence. The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

This mirrors some of my own thoughts about that particular philosophical problem – a far more slippery one than those on either side of it often claim, in my opinion (I think that may be a topic for a future post). I can add that Gardner was also on the unfashionable side of the question which came up in my previous post – that of an afterlife; and again he holds this out as a matter of faith rather than reason. He explores the philosophy of personal identity and continuity in some detail, always concluding with the sentiment ‘I do not know. Do not ask me.’ His underlying instinct seems to be that there has to be something more than our bodily existence, given that our inner lives are so inexplicable from the objective point of view – so much more than our physical existence. ‘By faith, I hope and believe that you and I will not disappear for ever when we die.’ By contrast, Arthur Koestler, you may remember, wrote in his suicide note of ‘tentative hopes for a depersonalised afterlife’ – but, as it turned out, these hopes were based partly on the sort of parapsychological evidence which was anathema to Gardner.

And of course Gardner was acutely aware of another related mystery – that of consciousness, which he finds inseparable from the issue of free will:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will… Nor can I imagine myself having free will without being conscious.

He expresses utter dissatisfaction with the approach of arch-physicalists such as Daniel Dennett, who, as he says, ‘explains consciousness by denying that it exists’. (I attempted to puncture this particular balloon in an earlier post.)

Gardner in later life (Konrad Jacobs / Wikimedia)

Gardner places himself squarely within the ranks of the ‘mysterians’ – a deliberately derisive label applied by their opponents to those thinkers who conclude that these matters are mysteries which are probably beyond our capacity to solve. Among their ranks is Noam Chomsky: Gardner cites a 1983 interview with the grand old man of linguistics, in which he expresses his attitude to the free will problem (scroll down to see the relevant passage).

The Surgeon

And so to the surgeon of my title, and if you’ve read one of my other blog posts you will already have met him – he’s a neurosurgeon named Henry Marsh, and I wrote a post based on a review of his book Do No Harm. Well, now I’ve read the book, and found it as impressive and moving as the review suggested. Unlike many in his profession, Marsh is a deeply humble man who is disarmingly honest in his account about the emotional impact of the work he does. He is simultaneously compelled towards, and fearful of, the enormous power of the neurosurgeon both to save and to destroy. His narrative swings between tragedy and elation, by way of high farce when he describes some of the more ill-conceived management ‘initiatives’ at his hospital.

A neurosurgical operation (Mainz University Medical Centre)

The interesting point of comparison with Gardner is that Marsh – a man who daily manipulates what we might call physical mind-stuff – the brain itself – is also awed and mystified by its powers:

There are one hundred billion nerve cells in our brains. Does each one have a fragment of consciousness within it? How many nerve cells do we require to be conscious or to feel pain? Or does consciousness and thought reside in the electrochemical impulses that join these billions of cells together? Is a snail aware? Does it feel pain when you crush it underfoot? Nobody knows.

The same sense of mystery and wonder as Gardner’s; but approached from a different perspective:

Neuroscience tells us that it is highly improbable that we have souls, as everything we think and feel is no more or no less than the electrochemical chatter of our nerve cells… Many people deeply resent this view of things, which not only deprives us of life after death but also seems to downgrade thought to mere electrochemistry and reduces us to mere automata, to machines. Such people are profoundly mistaken, since what it really does is upgrade matter into something infinitely mysterious that we do not understand.

Henry Marsh

This of course is the perspective of a practical man – one who is emphatically working at the coal face of neurology, and far more familiar with the actual material of brain tissue than armchair speculators like me. While I was reading his book, although deeply impressed by this man’s humanity and integrity, what disrespectfully came to mind was a piece of irreverent humour once told to me by a director of a small company I used to work for, which was closely connected to the medical industry. It was a sort of handy cut-out-and-keep guide to the different types of medical practitioner:

Surgeons do everything and know nothing. Physicians know everything and do nothing. Psychiatrists know nothing and do nothing.  Pathologists know everything and do everything – but the patient’s dead, so it’s too late.

Grossly unfair to all of them, of course, but nonetheless funny, and perhaps containing a certain grain of truth. Marsh, belonging to the first category, perhaps embodies some of the aversion from dry theory that this caricature hints at: what matters to him ultimately, as a surgeon, is the sheer down-to-earth physicality of his work, guided by the gut instincts of his humanity. We hear from him about some members of his profession who seem aloof from the enormity of the dangers it embodies, and seem able to proceed calmly and objectively with what he sees almost as the detachment of the psychopath.

Common ground

What Marsh and Gardner seem to have in common is the instinct that dry, objective reasoning only takes you so far. Both trust the power of their own emotions, and their sense of awe. Both, I feel, are attempting to articulate the same insight, but from widely differing standpoints.

Two passages, one from each book, seem to crystallize both the similarities and differences between the respective approaches of the two men, both of whom seem to me admirably sane and perceptive, if radically divergent in many respects. First Gardner, emphasising in a Wittgensteinian way how describing how things appear to be is perhaps a more useful activity than attempting to pursue any ultimate reasons:

There is a road that joins the empirical knowledge of science with the formal knowledge of logic and mathematics. No road connects rational knowledge with the affirmations of the heart. On this point fideists are in complete agreement. It is one of the reasons why a fideist, Christian or otherwise, can admire the writings of logical empiricists more than the writings of philosophers who struggle to defend spurious metaphysical arguments.

And now Marsh – mystified, as we have seen, as to how the brain-stuff he manipulates daily can be the seat of all experience – having a go at reading a little philosophy in the spare time between sessions in the operating theatre:

As a practical brain surgeon I have always found the philosophy of the so-called ‘Mind-Brain Problem’ confusing and ultimately a waste of time. It has never seemed a problem to me, only a source of awe, amazement and profound surprise that my consciousness, my very sense of self, the self which feels as free as air, which was trying to read the book but instead was watching the clouds through the high windows, the self which is now writing these words, is in fact the electrochemical chatter of one hundred billion nerve cells. The author of the book appeared equally amazed by the ‘Mind-Brain Problem’, but as I started to read his list of theories – functionalism, epiphenomenalism, emergent materialism, dualistic interactionism or was it interactionistic dualism? – I quickly drifted off to sleep, waiting for the nurse to come and wake me, telling me it was time to return to the theatre and start operating on the old man’s brain.

I couldn’t help noticing that these two men – one unconventionally religious and the other not religious at all – seem between them to embody those twin traditional pillars of the religious life: faith and works.

A Few Pointers

Commuting days until retirement: 342

The act of creation, in the detail from the Sistine Chapel ceiling which provides Tallis’s title

After looking in the previous post at how certain human artistic activities map on to the world at large, let’s move our attention to something that seems much more primitive. Primitive, at any rate, in the sense that most small children become adept at it before they develop any articulate speech. This post is prompted by a characteristically original book by Raymond Tallis I read a few years back – Michelangelo’s Finger. Tallis shows how pointing is a quintessentially human activity, depending on a whole range of capabilities that are exclusive to humans. In the first place, it could be thought of as a language in itself – Pointish, as Tallis calls it. But aren’t pointing fingers, or arrows, obvious in their meaning, and capable of only one interpretation? I’ve thought of a couple of examples to muddy the waters a little.

Pointing – but which way?

Which way does it point?

The first is perhaps a little trivial, even silly. Look at this picture of a TV aerial. If asked where it is pointing, you would say the TV transmitter, which will be in the direction of the thin end of the aerial. But if we turn it sideways, as I’ve done underneath, we find what we would naturally interpret as an arrow pointing in the opposite direction. It seems that our basic arrow understanding is weakened by the aerial’s appearance and overlaid by other considerations, such as a sense of how TV aerials work.

My second example is something I heard about which is far more profound and interesting, and deliciously counter-intuitive. It has to do with the stages by which a child learns language, and also with signing, as used by deaf people. Two facts are needed to explain the context: first is that, as you may know, sign language is not a mere substitute for language, but is itself a language in every sense. This can be demonstrated in numerous ways: for example, conversing in sign has been shown to use exactly the same area of the brain as does the use of spoken language. And, compared with those rare and tragic cases where a child is not exposed to language in early life, and consequently never develops a proper linguistic capability, young children using only sign language at this age are not similarly handicapped. Generally, for most features of spoken language, equivalents can be found in signing. (To explore this further, you could try Oliver Sacks’ book Seeing Voices.) The second fact concerns conventional language development: at a certain stage, many children, hearing themselves referred to as ‘you’, come to think of ‘you’ as a name for themselves, and start to call themselves ‘you’; I remember both my children doing this.

And so here’s the payoff: in most forms of sign language, the sign for ‘you’ is simply to point at the person one is speaking to. But children who are learning signing as a first language will make exactly the same mistake as their hearing counterparts, pointing at the person they are addressing in order to refer to themselves. We could say, perhaps, that they are still learning the vocabulary of Pointish. The aerial example didn’t seem very important, as it merely involved a pointing action that we ascribe to a physical object. Of course the object itself can’t have an intention; it’s only a human interpretation we are considering, which can work either way. This sign language example is more surprising because the action of pointing – the intention – is a human one, and in thinking of it we implicitly transfer our consciousness into the mind of the pointer, and attempt to get our head around how they can make a sign whose meaning is intuitively obvious to us, but intend it in exactly the opposite sense.

What’s involved in pointing?

Tallis teases out how pointing relies on a far more sophisticated set of mental functions than it might seem to involve at first sight. As a first stab at demonstrating this, there is the fact that pointing, either the action or the understanding of it, appears to be absent in animals – Tallis devotes a chapter to this. He describes a slightly odd-feeling experience which I have also had, when throwing a stick for a dog to retrieve. The animal is often in a high state of excitement and distraction at this point, and dogs do not have very keen sight. Consequently it often fails to notice that you have actually thrown the stick, and continues to stare at you expectantly. You point vigorously with an outstretched arm: “It’s over there!” Intuitively, you feel the dog should respond to that, but of course it just continues to watch you even more intensely, and you realise that it simply has no notion of the meaning of the gesture – no notion, in fact, of ‘meaning’ at all. You may object that there is a breed of dog called a Pointer, because it does just that – points. But let’s just examine for a moment what pointing involves.

Primarily, in most cases, the key concept is attention: you may want to draw the attention of another to something. Or maybe, if you are creating a sign with an arrow, you may be indicating by proxy where others should go, on the assumption that they have a certain objective. Attention, objective: these are mental entities which we can only ascribe to others if we first have a theory of mind – that is, if we have already achieved the sophisticated ability to infer that others have minds, and a private world, like our own. Young children will normally start to point before they have very much speech (as opposed to language – understanding develops in advance of expression). It’s significant that autistic children usually don’t show any pointing behaviour at this stage. Lack of insight into the minds of others – an under-developed theory of mind – is a defining characteristic of autism.

So, returning to the example of the dog, we can take it that for an animal to show genuine pointing behaviour, it must have a developed notion of other minds, which seems unlikely. The action of the Pointer dog looks more like instinctive behaviour, evolved through the cooperation of packs and accentuated by selective breeding. There are other examples of instinctive pointing in animal species: that of bees is particularly interesting, with the worker ‘dance’ that communicates to the hive where a food source is. This, however, can be analysed down into a sequence of instinctive automatic responses which will always take the same form in the same circumstances, showing no sign of intelligent variation. Chimpanzees can be trained to point, and show some capacity for imitating humans, but there are no known examples of their use of pointing in the wild.

But there is some recent research which suggests a counter-example to Tallis’s assertion that pointing is unknown in animals. This shows elephants responding to human pointing gestures, and it seems there is a possibility that they point spontaneously with their trunks. This rather fits with other human-like behaviour that has been observed in elephants, such as apparently grieving for their dead. Grieving, it seems to me, has something in common with pointing, in that it also implies a theory of mind; the death of another individual is not just a neutral change in the shape and pattern of your world, but the loss of another mind. It’s not surprising that, in investigating ancient remains, we take signs of burial ritual to be a potent indicator of the emergence of a sophisticated civilisation of people who are able to recognise and communicate with minds other than their own – probably the emergence of language, in fact.

Pointing in philosophy

We have looked at the emergence of pointing and language in young children; and the relation between the two has an important place in the history of philosophy. There’s a simple, intuitive notion that language is taught to a child by pointing to objects and saying the word for them – so-called ostensive definition. And it can’t be denied that this has a place. I can remember both of my children taking obvious pleasure in what was, to them, a discovery – each time they pointed to something they could elicit a name for it from their parent. At the start of Philosophical Investigations, Wittgenstein famously identifies this notion – of ostensive definition as the cornerstone of language learning – in a passage from the writings of St Augustine, and takes him to task over it. Wittgenstein goes on to show, with numerous examples, how dynamic and varied an activity the use of language is, in contrast to the monolithic and static picture suggested by Augustine (and indeed by Wittgenstein himself in his earlier incarnation). We already have our own example in the curious and unique way in which the word ‘you’ and its derivatives are used, and a sense of the stages by which children develop the ability to use it correctly.

Perhaps the second most famous pointing finger in art: Millais’ The Boyhood of Raleigh

The passage from Augustine also suggests a notion of pointing as a primitive, primary action, needing no further explanation. However, we’ve seen how it relies on a prior set of sophisticated abilities: having the notion that oneself is distinct from the world – a world that contains other minds like one’s own, whose attention may have different contents from one’s own; that it’s possible to communicate meaning by gestures to modify those contents; an idea of how these gestures can be ‘about’ objects within the world; and that there needs to be agreement on how to interpret the gestures, which aren’t always as intuitive and unambiguous as we may imagine. As Tallis rather nicely puts it, the arch of ostensive definition is constructed from these building bricks, with the pointing action as the coping stone which completes it.

The theme underlying both this and my previous post is the notion of how one thing can be ‘about’ another – the notion of intentionality. This idea is presented to us in an especially stark way when it comes to the action of pointing. In the next post I intend to approach that more general theme head-on.