A Passage of Time

I have let rather too much time elapse since my last post. What are my excuses? Well, I mentioned in one post how helpful my commuter train journeys were in encouraging the right mood for blog-writing – and since retiring such journeys are few and far between. To get on the train every time I felt a blog-post coming on would not be a very pension-friendly approach, given current fares. My other excuse is the endless number of tasks around the house and garden that have been neglected until now. At least we are now starting to create a more attractive garden, and recently took delivery of a summerhouse: I am hoping that this could become another blog-friendly setting.

Time travel inventor

tvtropes.org

But since a lapse of time is my starting point, I could ease my way back in by thinking again about the nature of time. Four years back (Right Now, 23/3/13) I speculated on the issue of why we experience a subjective ‘now’ which doesn’t seem to have a place in objective physical science. Since then I’ve come across various ruminations on the existence or non-existence of time as a real, out-there, component of the world’s fabric. I might have more to say about this in the, er, future – but what appeals to me right now is the notion of time travel. Mainly because I would welcome a chance to deal with my guilty feelings by going back in time and plugging the long gap in this blog over the past months.

I recently heard about an actual time travel experiment, carried out by no less a person than Stephen Hawking. In 2009, he held a party for time travellers. What marked it out as such was that he sent out the invitations after the party took place. I don’t know exactly who was invited; but, needless to say, the canapés remained uneaten and the champagne undrunk. I can’t help feeling that if I’d tried this, and no one had turned up, my disappointment would rather destroy any motivation to send out the invitations afterwards. Should I have sent them anyway, finally to prove time travel impossible? I can’t help feeling that I’d want to sit on my hands, and leave the possibility open for others to explore. But the converse of this is the thought that, if the time travellers had turned up, I would be honour-bound to despatch those invites; the alternative would be some kind of universe-warping paradox. In that case I’d be tempted to try it and see what happened.

Elsewhere, in the same vein, Hawking has remarked that the impossibility of time travel into the past is demonstrated by the fact that we are not invaded by hordes of tourists from the future. But there is one rather more chilling explanation for their absence: namely that time travel is theoretically possible, but that we have no future in which to invent it. Since last year that unfortunately looks a little more likely, given the current occupant of the White House. That such a president is possible makes me wonder whether the universe is already a bit warped.

Should you wish to escape this troublesome present for a different future, whatever it might hold, it can of course be done. As Einstein famously showed in 1905, it’s just a matter of inventing for yourself a spaceship that can accelerate to nearly the speed of light and taking a round trip in it. And of course this isn’t entirely science fiction: astronauts and satellites – even airliners – regularly take trips of microseconds or so into the future; and indeed our now familiar satnav devices wouldn’t work if this effect weren’t taken into account.
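For anyone who likes to see the arithmetic, here’s a minimal sketch in Python. The speeds are illustrative choices of my own, and I’ve left out the gravitational effect, which in the real satnav case is actually the larger part of the correction.

    import math

    C = 299_792_458.0  # speed of light in m/s

    def gamma(v, c=C):
        # Special-relativistic time-dilation factor for speed v
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    # A round trip at 99% of light speed: each year of ship time
    # corresponds to roughly seven years back on Earth.
    print(f"Earth years per ship year at 0.99c: {gamma(0.99 * C):.1f}")

    # An airliner cruising at about 250 m/s for ten hours falls behind
    # the ground clocks by only a dozen or so nanoseconds - tiny, but real.
    offset = (gamma(250.0) - 1.0) * 10 * 3600
    print(f"Offset after a 10-hour flight: {offset * 1e9:.0f} nanoseconds")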

But the problem of course arises if you find the future you’ve travelled to isn’t one you like. (Trump, or some nepotic Trumpling,  as world president? Nuclear disaster? Both of these?) Whether you can get back again by travelling backward in time is not a question that’s really been settled. Indeed, it’s the ability to get at the past that raises all the paradoxes – most famously what would happen if you killed your grandparents or stopped them getting together.

Marty McFly with his teenage mother

Marty McFly with his teenage mother

This is a furrow well-ploughed in science fiction, of course. You may remember the Marty McFly character in the film Back to the Future, who embarks on a visit to the past enabled by his mad scientist friend. It’s one way of escaping from his dysfunctional, feckless parents, but having travelled back a generation in time he finds himself being romantically approached by his teenage mother. He manages eventually to redirect her towards his young father, but on returning to the present finds his parents transformed into an impossibly hip and successful couple.

Then there’s Ray Bradbury’s story A Sound of Thunder, where tourists can return to hunt dinosaurs – but only those which were established to have been about to die in any case, and any bullets must then be removed from their bodies. As a further precaution, the would-be hunters are kept away from the ground by a levitating path, to prevent any other paradox-inducing changes to the past. One bolshie traveller breaks the rules however, reaches the ground, and ends up with a crushed butterfly on the sole of his boot. On returning to the present he finds that the language is subtly different, and that the man who had been the defeated fascist candidate for president has now won the election. (So, thinking of my earlier remarks, could some prehistoric butterfly crusher, yet to embark on his journey, be responsible for the current world order?)

My favourite paradox is the one portrayed in a story called The Muse by Anthony Burgess, in which – to leave a lot out – a time travelling literary researcher manages to meet William Shakespeare and question him on his work. Shakespeare’s eye alights on the traveller’s copy of the complete works, which he peruses and makes off with, intending to mine it for ideas. This seems like the ideal solution for struggling blog-writers like me, given that, having travelled forward in time and copied what I’d written on to a flash drive, I could return to the present and paste it in here. Much easier.

But these thoughts put me in mind of a more philosophical issue with time which has always fascinated me – namely whether it’s reversible. We know how to travel forward in time; when it comes to travelling backward, however, there are various theories as to how it might be done, but no-one is very sure. Does this indicate a fundamental asymmetry in the way time works? Of course this is a question that has been examined in much greater detail in another context: the second law of thermodynamics, we are told, says it all.

Let’s just review those ideas. Think of running a film in reverse. Might it show anything that could never happen in real, forward time? Well of course if it were some sort of space film which showed planets orbiting the sun, or a satellite around the earth, then either direction is possible. But, back on earth, think of all those people you’d see walking backwards. Well, on the face of it, people can walk backwards, so what’s the problem? Well, here’s one of many that I could think of: imagine that one such person is enjoying a backward country walk on a path through the woods. As she approaches a protruding branch from a sapling beside the path, the branch suddenly whips sideways towards her as if to anticipate her passing, and then, laying itself against her body, unbends itself as she backward-walks by, and has then returned to its rest position as she recedes. Possible? Obviously not. But is it?

I’m going to argue against the idea that there is a fundamental ‘arrow of time’: despite the evident truth of the laws of thermodynamics and the irresistible tendency we observe toward increasing disorder, or entropy, there’s nothing ultimately irreversible about physical processes. I’ve deliberately chosen an example which seems to make my case harder to maintain, to see if I can explain my way out of it. You will have had the experience of walking by a branch which projects across your path, and noticing how your body bends it forwards as you pass, and seeing it spring back to its former position as you continue on. Could we envisage a sequence of events in the real world where all this happened in reverse?

Before answering that I’m going to look at a more familiar type of example. I remember being impressed many years ago by an example of the type of film I mentioned, illustrating the idea of entropy. It showed someone holding a cup of tea, and then letting go of it, with the expected results. Then the film was reversed. The mess of spilt tea and broken china on the floor drew itself together, and as the china pieces reassembled themselves into a complete cup and saucer, the tea obediently gathered itself back into the cup. As this process completed the whole assembly launched itself from the floor and back into the hands of its owner.

Obviously, that second part of the sequence would never happen in the real world. It’s an example of how, left to itself, the physical world will always progress to a state of greater disorder, or entropy. We can even express the degree of entropy mathematically, using information theory. Case closed, then – apart, perhaps, from biological evolution? And even then it can be shown that if some process – like the evolution of a more sophisticated organism – decreases entropy, it will always be balanced by a greater increase elsewhere; and so the universe’s total amount of entropy increases. The same applies to our own attempts to impose order on the world.
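As an aside, that information-theoretic idea is easy to illustrate. The little Python sketch below applies Shannon’s entropy formula to a toy description of the cup’s state; the sixteen-piece cup is purely my own stand-in, not a thermodynamic calculation.

    import math
    from collections import Counter

    def shannon_entropy(states):
        # Shannon entropy, in bits, of a list of observed states
        counts = Counter(states)
        total = sum(counts.values())
        return sum(-(n / total) * math.log2(n / total) for n in counts.values())

    # 'Ordered': every fragment of the cup is in the same, intact state.
    print(shannon_entropy(["intact"] * 16))                    # 0.0 bits

    # 'Disordered': sixteen fragments, each in its own distinct state.
    print(shannon_entropy([f"shard_{i}" for i in range(16)]))  # 4.0 bits

The more distinct, equally likely ways there are of being in a state, the bigger the number – which is all that ‘disorder’ means here.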

So how could I possibly plead for reversibility of time? Well, I tend to think that this apparent ‘arrow’ is a function of our point of view as limited creatures, and our very partial perception of the universe. I would ask you to imagine, for a moment, some far more privileged being – some sort of god, if you like – who is able to track all the universe’s individual particles and fields, and what they are up to. Should this prodigy turn her attention to our humble cup of tea, what she saw would, I think, be very different from the scene as experienced through our own senses. From her perspective, the clean lines of the china cup which we see would become less defined – lost in a turmoil of vibrating molecules, themselves constantly undergoing physical and chemical change. The distinction between the shattered cup on the floor and the unbroken one in the drinker’s hands would be less clear.

Colour blindness test

What I’m getting at is the fact that what we think of as ‘order’ in our world is an arrangement that seems significant only from one particular point of view determined by the scale and functionality of our senses: the arrangement we think of as ‘order’ floats like an unstable mirage in a sea of chaos. As a very rough analogy, think of those patterns of coloured dots used to detect colour blindness. You can see the number in the one I’ve included only if your retinal cells function in a certain way; otherwise all you’d see would be random dots.

And, in addition to all this, think of the many arrangements which (to us) might appear to have ‘order’ – all the possible drinks in the cup, all the possible cup designs – etc, etc. But compared to all the ‘disordered’ arrangements of smashed china, splattered liquid and so forth, the number of potential configurations which would appeal to us as being ‘ordered’ is truly infinitesimal. So it follows that the likelihood of moving from a state we regard as ‘disordered’ to one of ‘order’ is unimaginably slim; but not, in principle, impossible.
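To get a feel for how fast those odds collapse, here’s a toy model of my own devising – nothing like a real teacup, which involves trillions of trillions of molecules. Suppose N molecules each land at random in the left or right half of a box, and call ‘all of them on the left’ our one ‘ordered’ outcome:

    from fractions import Fraction

    def p_all_left(n_molecules):
        # Each molecule independently picks a half of the box, so any one
        # designated arrangement has probability (1/2)^n
        return Fraction(1, 2) ** n_molecules

    for n in (10, 100, 1000):
        digits = len(str(p_all_left(n).denominator)) - 1
        print(f"{n} molecules: roughly 1 chance in 10^{digits}")

With a mere thousand molecules the odds are already about one in 10^301; scale that up to a macroscopic object and the exponent becomes absurd.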

So let’s imagine that one of these one-in-a-squillion chances actually comes off. There’s the smashed cup and mess of tea on the floor. It’s embedded in a maze of vibrating molecules making up the floor, the surrounding air, and so on. And in this case it so happens that the molecular impacts between the elements of the cup and tea and their surroundings combine so as to nudge them all back into their ‘ordered’ configuration, and to boost them off the floor and back into the hands of the somewhat mystified drinker.

Yes, the energy is there to make that happen – it just has to come together in exactly the correct, fantastically unlikely way. I don’t know how to calculate the improbability of this, but I should imagine that to see it happen we would need to do continual trials for a time period which is some vast multiple of the age of the universe. (Think monkeys typing the works of Shakespeare, and then multiply by some large number.) In other words, of course, it just doesn’t happen in practice.
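I still can’t calculate the odds for the cup, but the monkey analogy can at least be put into numbers. Here’s a crude sketch, assuming a 27-key typewriter (letters plus a space, ignoring punctuation and case) and asking only for a single line of Hamlet:

    import math

    line = "like quills upon the fretful porpentine"
    keys = 27  # 26 letters plus the space bar

    p = keys ** -len(line)  # chance of one random attempt typing it correctly
    print(f"{len(line)} keystrokes: odds of about 1 in 10^{round(-math.log10(p))}")

That comes out at around one in 10^56 – for one line. The cup reassembling itself involves unimaginably more ‘keystrokes’ than the complete works.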

But, looked at another way, such unlikely things do happen. Think of when we originally dropped the cup, and ended up with some sort of mess on the floor – that is, out of the myriad of other possible messes that could have been created, had the cup been dropped at a slightly different angle, the floor had been dirtier, the weather had been different – and so on. How likely is that exact, particular configuration of mess that we ended up with? Fantastically unlikely, of course – but it happened. We’d never in practice be able to produce it a second time.

So of all these innumerable configurations of matter – whether or not they are one of the tiny minority that seem to us ‘ordered’ – one of them happens with each of the sorts of event we’ve been considering. The universe at large is indifferent to our notion of ‘order’, and at each juncture throws up some random selection of the unthinkably large number of possibilities. It’s just that these ordered states are so few in number compared to the disordered ones that they never in practice come about spontaneously, but only when we deliberately foster them into being, by doing such things as manufacturing teacups, or making tea.

Let’s return, then, to the branch that our walker brushes past on the woodland footpath, and give that a similar treatment. It’s a bit simpler, if anything: we just require the astounding coincidence that, as the backwards walker approaches the branch, the random Brownian motion of an unimaginably large number of air molecules just happen to combine to give the branch a series of rhythmic, increasing nudges. It appears to oscillate with increasing amplitude until one final nudge lays it against the walker’s body just as she passes. Not convinced? Well, this is just one of the truly countless possible histories of the movement of a vast number of air molecules – one which has a consequence we can see.
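Here’s the same trick applied to the branch – a caricature of the bombardment in Python, with my own toy numbers rather than any physical simulation. A million random kicks almost entirely cancel out; what the backwards branch needs is a long coherent run of kicks in one direction.

    import math
    import random

    random.seed(1)
    n_kicks = 1_000_000

    # Each air-molecule impact nudges the branch one unit left or right.
    net = sum(random.choice((-1, 1)) for _ in range(n_kicks))
    print(f"Net push after {n_kicks:,} random kicks: {net}")
    print("Typical size is around sqrt(n) = 1,000 - nowhere near n itself.")

    # For the branch to swing towards the walker, even a mere 1,000
    # consecutive kicks would all have to point the same way:
    print(f"Chance of that: roughly 1 in 10^{round(1000 * math.log10(2))}")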

Remember that the original Robert Brown, of Brownian motion fame, did see random movements of pollen grains in water; and since it didn’t occur to him that the water molecules were responsible for this, he thought it was a property of the pollen grains themselves. Should we happen to witness such an astronomically unlikely movement of the tree, we would suspect some mysterious bewitchment of the tree itself, rather than one specific and improbable combination of air molecule movements.

You’ll remember that I was earlier reflecting that we know how to travel forwards in time, but that backward time travel is more problematic. So doesn’t this indicate another asymmetry – more evidence of an arrow of time? Well I think the right way of thinking about this emerges when we are reminded that this very possibility of time travel was a consequence of a theory called ‘relativity’. So think relative. We know how to move forward in time relative to other objects in our vicinity. Equally, we know how they could move forward in time relative to us. Which of course means that we’d be moving backward relative to them. No asymmetry there.

Maybe the one asymmetry in time which can’t be analysed away is our own subjective experience of moving constantly from a ‘past’ into a ‘future’ – as defined by our subjective ‘now’. But, as I was pointing out four years ago, this seems to be more a property of ourselves as experiencing creatures, rather than of the objective universe ‘out there’.

I’ll leave you with one more apparent asymmetry. If processes are reversible in time, why do we only have records of the past, and not records of the future? Well, I’ve gone on long enough, so in the best tradition of lazy writers, I will leave that as an exercise for the reader.

The Mathematician and the Surgeon

Commuting days until retirement: 108

After my last post, which, among other things, compared differing attitudes to death and its aftermath (or absence of one) on the part of Arthur Koestler and George Orwell, here’s another fruitful comparison. It seemed to arise by chance from my next two commuting books, and each of the two people I’m comparing, as before, has his own characteristic perspective on that matter. Unlike my previous pair both could loosely be called scientists, and in each case the attitude expressed has a specific and revealing relationship with the writer’s work and interests.

The Mathematician

The first writer, whose book I came across by chance, has been known chiefly for mathematical puzzles and games. Martin Gardner was born in Oklahoma USA in 1914; his father was an oil geologist, and it was a conventionally Christian household. Although not trained as a mathematician, having gone instead into a career as a journalist and writer, Gardner developed a fascination with mathematical problems and puzzles which informed his career – hence the justification for his half of my title.

Martin Gardner

Gardner as a young man (Wikimedia)

This interest continued to feed the constant books and articles he wrote, and he was eventually asked to write the Scientific American column Mathematical Games, which ran from 1956 until the mid 1980s, and for which he became best known; his enthusiasm and sense of fun shine through the writing of these columns. At the same time he was increasingly concerned with the many types of fringe beliefs that had no scientific foundation, and was a founder member of CSICOP, the organisation dedicated to the exposing and debunking of pseudoscience. Back in February last year I mentioned one of its other well-known members, the flamboyant and self-publicising James Randi. By contrast, Gardner was mild-mannered and shy, averse to public speaking and never courting publicity. He died in 2010, leaving behind him many admirers and a two-yearly convention – the ‘Gathering for Gardner’.

Before learning more about him recently, and reading one of his books, I had known his name from the Mathematical Games column, and heard of his rigid rejection of things unscientific. I imagined some sort of skinflint atheist, probably with a hard-nosed contempt for any fanciful or imaginative leanings – however sane and unexceptionable they might be – towards what might be thought of as things of the soul.

How wrong I was. His book that I’ve recently read, The Whys of a Philosophical Scrivener, consists of a series of chapters with titles of the form ‘Why I am not a…’ and he starts by dismissing solipsism (who wouldn’t?) and various forms of relativism; it’s a little more unexpected that determinism also gets short shrift. But in fact by this stage he has already declared that

I myself am a theist (as some readers may be surprised to learn).

I was surprised, and also intrigued. Things were going in an interesting direction. But before getting to the meat of his theism he spends a good deal of time dealing with various political and economic creeds. The book was written in the mid 80s, not long before the collapse of communism, which he seems to be anticipating (Why I am not a Marxist). But equally he has little time for Reagan or Thatcher, laying bare the vacuity of their over-simplistic political nostrums (Why I am not a Smithian).

Soon after this, however, he is striding into the longer grass of religious belief: Why I am not a Polytheist; Why I am not a Pantheist – so what is he? The next chapter heading is a significant one: Why I do not Believe the Existence of God can be Demonstrated. This is the key, it seems to me, to Gardner’s attitude – one to which I find myself sympathetic. Near the beginning of the book we find:

My own view is that emotions are the only grounds for metaphysical leaps.

I was intrigued by the appearance of the emotions in this context: here is a man whose day job is bound up with his fascination for the powers of reason, but who is nevertheless acutely conscious of the limits of reason. He refers to himself as a ‘fideist’ – one who believes in a god purely on the basis of faith, rather than any form of demonstration, either empirical or through abstract logic. And if those won’t provide a basis for faith, what else is there but our feelings? This puts Gardner nicely at odds with the modish atheists of today, like Dawkins, who never tires of telling us that he too could believe if only the evidence were there.

But at the same time he is squarely in a religious tradition which holds that ultimate things are beyond the instruments of observation and logic that are so vital to the secular, scientific world of today. I can remember my own mother – unlike Gardner a conventional Christian believer – being very definite on that point. And it reminds me of some of the writings of Wittgenstein; Gardner does in fact refer to him, in the context of the free-will question. I’ll let him explain:

A famous section at the close of Ludwig Wittgenstein’s Tractatus Logico-Philosophicus asserts that when an answer cannot be put into words, neither can the question; that if a question can be framed at all, it is possible to answer it; and that what we cannot speak about we should consign to silence. The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

This mirrors some of my own thoughts about that particular philosophical problem – a far more slippery one than those on either side of it often claim, in my opinion (I think that may be a topic for a future post). I can add that Gardner was also on the unfashionable side of the question which came up in my previous post – that of an afterlife; and again he holds this out as a matter of faith rather than reason. He explores the philosophy of personal identity and continuity in some detail, always concluding with the sentiment ‘I do not know. Do not ask me.’ His underlying instinct seems to be that there has to be something more than our bodily existence, given that our inner lives are so inexplicable from the objective point of view – so much more than our physical existence. ‘By faith, I hope and believe that you and I will not disappear for ever when we die.’ By contrast, Arthur Koestler, you may remember, wrote in his suicide note of ‘tentative hopes for a depersonalised afterlife’ – but, as it turned out, these hopes were based partly on the sort of parapsychological evidence which was anathema to Gardner.

And of course Gardner was acutely aware of another related mystery – that of consciousness, which he finds inseparable from the issue of free will:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will… Nor can I imagine myself having free will without being conscious.

He expresses utter dissatisfaction with the approach of arch-physicalists such as Daniel Dennett, who, as he says, ‘explains consciousness by denying that it exists’. (I attempted to puncture this particular balloon in an earlier post.)

Martin Gardner

Gardner in later life (Konrad Jacobs / Wikimedia)

Gardner places himself squarely within the ranks of the ‘mysterians’ – a deliberately derisive label applied by their opponents to those thinkers who conclude that these matters are mysteries which are probably beyond our capacity to solve. Among their ranks is Noam Chomsky: Gardner cites a 1983 interview with the grand old man of linguistics, in which he expresses his attitude to the free will problem (scroll down to see the relevant passage).

The Surgeon

And so to the surgeon of my title, and if you’ve read one of my other blog posts you will already have met him – he’s a neurosurgeon named Henry Marsh, and I wrote a post based on a review of his book Do No Harm. Well, now I’ve read the book, and found it as impressive and moving as the review suggested. Unlike many in his profession, Marsh is a deeply humble man who is disarmingly honest in his account about the emotional impact of the work he does. He is simultaneously compelled towards, and fearful of, the enormous power of the neurosurgeon both to save and to destroy. His narrative swings between tragedy and elation, by way of high farce when he describes some of the more ill-conceived management ‘initiatives’ at his hospital.

A neurosurgical operation

A neurosurgical operation (Mainz University Medical Centre)

The interesting point of comparison with Gardner is that Marsh – a man who daily manipulates what we might call physical mind-stuff – the brain itself – is also awed and mystified by its powers:

There are one hundred billion nerve cells in our brains. Does each one have a fragment of consciousness within it? How many nerve cells do we require to be conscious or to feel pain? Or does consciousness and thought reside in the electrochemical impulses that join these billions of cells together? Is a snail aware? Does it feel pain when you crush it underfoot? Nobody knows.

The same sense of mystery and wonder as Gardner’s; but approached from a different perspective:

Neuroscience tells us that it is highly improbable that we have souls, as everything we think and feel is no more or no less than the electrochemical chatter of our nerve cells… Many people deeply resent this view of things, which not only deprives us of life after death but also seems to downgrade thought to mere electrochemistry and reduces us to mere automata, to machines. Such people are profoundly mistaken, since what it really does is upgrade matter into something infinitely mysterious that we do not understand.

Henry Marsh

Henry Marsh

This of course is the perspective of a practical man – one who is emphatically working at the coal face of neurology, and far more familiar with the actual material of brain tissue than armchair speculators like me. While I was reading his book, although deeply impressed by this man’s humanity and integrity, what disrespectfully came to mind was a piece of irreverent humour once told to me by a director of a small company I used to work for, one closely connected to the medical industry. It was a sort of handy cut-out-and-keep guide to the different types of medical practitioner:

Surgeons do everything and know nothing. Physicians know everything and do nothing. Psychiatrists know nothing and do nothing.  Pathologists know everything and do everything – but the patient’s dead, so it’s too late.

Grossly unfair to all of them, of course, but nonetheless funny, and perhaps containing a certain grain of truth. Marsh, belonging to the first category, perhaps embodies some of the aversion to dry theory that this caricature hints at: what matters to him ultimately, as a surgeon, is the sheer down-to-earth physicality of his work, guided by the gut instincts of his humanity. We hear from him about some members of his profession who seem aloof from the enormity of the dangers it embodies, and seem able to proceed calmly and objectively with what he sees almost as the detachment of the psychopath.

Common ground

What Marsh and Gardner seem to have in common is the instinct that dry, objective reasoning only takes you so far. Both trust the power of their own emotions, and their sense of awe. Both, I feel, are attempting to articulate the same insight, but from widely differing standpoints.

Two passages, one from each book, seem to crystallize both the similarities and differences between the respective approaches of the two men, both of whom seem to me admirably sane and perceptive, if radically divergent in many respects. First Gardner, emphasising in a Wittgensteinian way that describing how things appear to be is perhaps a more useful activity than attempting to pursue any ultimate reasons:

There is a road that joins the empirical knowledge of science with the formal knowledge of logic and mathematics. No road connects rational knowledge with the affirmations of the heart. On this point fideists are in complete agreement. It is one of the reasons why a fideist, Christian or otherwise, can admire the writings of logical empiricists more than the writings of philosophers who struggle to defend spurious metaphysical arguments.

And now Marsh – mystified, as we have seen, as to how the brain-stuff he manipulates daily can be the seat of all experience – having a go at reading a little philosophy in the spare time between sessions in the operating theatre:

As a practical brain surgeon I have always found the philosophy of the so-called ‘Mind-Brain Problem’ confusing and ultimately a waste of time. It has never seemed a problem to me, only a source of awe, amazement and profound surprise that my consciousness, my very sense of self, the self which feels as free as air, which was trying to read the book but instead was watching the clouds through the high windows, the self which is now writing these words, is in fact the electrochemical chatter of one hundred billion nerve cells. The author of the book appeared equally amazed by the ‘Mind-Brain Problem’, but as I started to read his list of theories – functionalism, epiphenomenalism, emergent materialism, dualistic interactionism or was it interactionistic dualism? – I quickly drifted off to sleep, waiting for the nurse to come and wake me, telling me it was time to return to the theatre and start operating on the old man’s brain.

I couldn’t help noticing that these two men – one unconventionally religious and the other not religious at all – seem between them to embody those twin traditional pillars of the religious life: faith and works.

iPhobia

Commuting days until retirement: 238

If you have ever spoken at any length to someone who is suffering with a diagnosed mental illness − depression, say, or obsessive compulsive disorder − you may have come to feel that what they are experiencing differs only in degree from your own mental life, rather than being something fundamentally different (assuming, of course, that you are lucky enough not to have been similarly ill yourself). It’s as if mental illness, for the most part, is not something entirely alien to the ‘normal’ life of the mind, but just a distortion of it. Rather than the presence of a new unwelcome intruder, it’s more that the familiar elements of mental functioning have lost their usual proportion to one another. If you spoke to someone who was suffering from paranoid feelings of persecution, you might just feel an echo of them in the back of your own mind: those faint impulses that are immediately squashed by the power of your ability to draw logical common-sense conclusions from what you see about you. Or perhaps you might encounter someone who compulsively and repeatedly checks that they are safe from intrusion; but we all sometimes experience that need to reassure ourselves that a door is locked, when we know perfectly well that it really is.

That uncomfortably close affinity between true mental illness and everyday neurotic tics is nowhere more obvious than with phobias. A phobia serious enough to be clinically significant can make it impossible for the sufferer to cope with everyday situations; while on the other hand nearly every family has a member (usually female, but not always) who can’t go near the bath with a spider in it, as well as a member (usually male, but not always) who nonchalantly picks the creature up and ejects it from the house. (I remember that my own parents went against these sexual stereotypes.) But the phobias I want to focus on here are those two familiar opposites − claustrophobia and agoraphobia.

We are all phobics

In some degree, virtually all of us suffer from them, and perfectly rationally so. Anyone would fear, say, being buried alive, or, at the other extreme, being launched into some limitless space without hand or foothold, or any point of reference. And between the extremes, most of us have some degree of bias one way or the other. Especially so − and this is the central point of my post − in an intellectual sense. I want to suggest that there is such a phenomenon as an intellectual phobia: let’s call it an iphobia. My meaning is not, as the Urban Dictionary would have it, an extreme hatred of Apple products, or a morbid fear of breaking your iPhone. Rather, I want to suggest that there are two species of thinkers: iagorophobes and iclaustrophobes, if you’ll allow me such ugly words.

A typical iagorophobe will in most cases cleave to scientific orthodoxy. Not for her the wide open spaces of uncontrolled, rudderless, speculative thinking. She’s reassured by a rigid theoretical framework, comforted by predictability; any unexplained phenomenon demands to be brought into the fold of existing theory, for any other way, it seems to her, lies madness. But for the iclaustrophobe, on the other hand, it’s intolerable to be caged inside that inflexible framework. Telepathy? Precognition? Significant coincidence? Of course they exist; there is ample anecdotal evidence. If scientific orthodoxy can’t embrace them, then so much the worse for it − the incompatibility merely reflects our ignorance. To this the iagorophobe would retort that we have no logical grounds whatever for such beliefs. If we have nothing but anecdotal evidence, we have no predictability; and phenomena that can’t be predicted can’t therefore be falsified, so any such beliefs fall foul of the Popperian criterion of scientific validity. But why, asks the iclaustrophobe, do we have to be constrained by some arbitrary set of rules? These things are out there − they happen. Deal with it. And so the debate goes.

Archetypal iPhobics

Widening the arena more than somewhat, perhaps the archetypal iclaustrophobe was Plato. For him, the notion that what we see was all we would ever get was anathema – and he eloquently expressed his iclaustrophobic response to it in his parable of the cave. For him true reality was immeasurably greater than the world of our everyday existence. And of course he is often contrasted with his pupil Aristotle, for whom what we can see is, in itself, an inexhaustibly fascinating guide to the nature of our world − no further reality need be posited. And Aristotle, of course, is the progenitor of the syllogism and deductive logic. In Raphael’s famous fresco The School of Athens, the relevant detail of which you see below, Plato, on the left, indicates his world of forms beyond our immediate reality by pointing heavenward, while Aristotle’s gesture emphasises the earth, and the here and now. Raphael has them exchanging disputatious glances, which for me express the hostility that exists between the opposed iphobic world-views to this day.

School of Athens

Detail from Raphael’s School of Athens in the Vatican, Rome (Wikimedia Commons)

iPhobia today

It’s not surprising that there is such hostility; I want to suggest that we are talking not of a mere intellectual disagreement, but a situation where each side insists on a reality to which the other has a strong (i)phobic reaction. Let’s look at a specific present-day example, from within the WordPress forums. There’s a blog called Why Evolution is True, which I’d recommend as a good read. It’s written by Jerry Coyne, a distinguished American professor of biology. His title is obviously aimed principally at the flourishing belief in creationism which exists in the US − Coyne has extensively criticised the so-called Intelligent Design theory. (In my view, that controversy is not a dispute between the two iphobias I have described, but between two forms of iagorophobia. The creationists, I would contend, are locked up in an intellectual ghetto of their own making, since venturing outside it would fatally threaten their grip on their frenziedly held, narrowly based faith.)

Jerry Coyne

Jerry Coyne (Zooterkin/Wikimedia Commons)

But I want to focus on another issue highlighted in the blog, which in this case is a conflict between the two phobias. A year or so ago Coyne took issue with the fact that the maverick scientist Rupert Sheldrake was given a platform to explain his ideas in the TED forum. Note Coyne’s use of the hate word ‘woo’, often used by the orthodox in science as an insulting reference to the unorthodox. They would defend it, mostly with justification, as characterising what is mystical or wildly speculative, and without evidential basis − but I’d claim there’s more to it than that: it’s also the iagorophobe’s cry of revulsion.

Rupert Sheldrake

Rupert Sheldrake (Zereshk/Wikimedia Commons)

Coyne has strongly attacked Sheldrake on more than one occasion: is there anything that can be said in Sheldrake’s defence? As a scientist he has an impeccable pedigree, having a Cambridge doctorate and fellowship in biology. It seems that he developed his unorthodox ideas early on in his career, central among which is his notion of ‘morphic resonance’, whereby animal and human behaviour, and much else besides, is influenced by previous similar behaviour. It’s an idea that I’ve always found interesting to speculate about − but it’s obviously also a red rag to the iagorophobic bull. We can also mention that he has been careful to describe how his theories can be experimentally confirmed or falsified, thus claiming scientific status for them. He also invokes his ideas to explain aspects of the formation of organisms that, to date, haven’t been explained by the action of DNA. But increasing knowledge of the significance of what was formerly thought of as ‘junk DNA’ is going a long way to filling these explanatory gaps, so Sheldrake’s position looks particularly weak here. And in his TED talks he not only defends his own ideas, but attacks many of the accepted tenets of current scientific theory.

However, I’d like to return to the debate over whether Sheldrake should be denied his TED platform. Coyne’s comments led to a reconsideration of the matter by the TED editors, who opened a public forum for discussion on the matter. The ultimate, not unreasonable, decision was that the talks were kept available, but separately from the mainstream content. Coyne said he was surprised by the level of invective arising from the discussion; but I’d say this is because we have here a direct confrontation between iclaustrophobes and iagorophobes − not merely a polite debate, but a forum where each side taunts the other with notions for which the opponents have a visceral revulsion. And it has always been so; for me the iphobia concept explains the rampant hostility which always characterises debates of this type − as if the participants are not merely facing opposed ideas, but respective visions which invoke in each a deeply rooted fear.

I should say at this point that I don’t claim any godlike objectivity in this matter; I’m happy to come out of the closet as an iclaustrophobe myself. This doesn’t mean in my case that I take on board any amount of New Age mumbo-jumbo; I try to exercise rational scepticism where it’s called for. But as an example, let’s go back to Sheldrake: he’s written a book about the observation that housebound dogs sometimes appear to show marked  excitement at the moment that their distant owner sets off to return home, although there’s no way they could have knowledge of the owner’s actions at that moment. I have no idea whether there’s anything in this − but the fact is that if it were shown to be true nothing would give me greater pleasure. I love mystery and inexplicable facts, and for me they make the world a more intriguing and stimulating place. But of course Coyne isn’t the only commentator who has dismissed the theory out of hand as intolerable woo. I don’t expect this matter to be settled in the foreseeable future, if only because it would be career suicide for any mainstream scientist to investigate it.

Science and iPhobia

Why should such a course of action be so damaging to an investigator? Let’s start by putting the argument that it’s a desirable state of affairs that such research should be eschewed by the mainstream. The success of the scientific enterprise is largely due to the rigorous methodology it has developed; progress has resulted from successive, well-founded steps of theorising and experimental testing. If scientists were to spend their time investigating every wild theory that was proposed, their efforts would become undirected and diffuse, and progress would be stalled. I can see the sense in this, and any self-respecting iagorophobe would endorse it. But against this, we can argue that progress in science often results from bold, unexpected ideas that come out of the blue (some examples in a moment). While this more restrictive outlook lends coherence to the scientific agenda, it can, just occasionally, exclude valuable insights. To explain why the restrictive approach holds sway I would look at how a person’s psychological make-up might influence their career choice. Most iagorophobes are likely to be attracted to the logical, internally consistent framework they would be working with as part of a scientific career; while those of an iclaustrophobic profile might be attracted in an artistic direction. Hence science’s inbuilt resistance to out-of-the-blue ideas.

Albert Einstein

Albert Einstein (Wikimedia Commons)

I may come from the iclaustrophobe camp, but I don’t want to claim that only people of that profile are responsible for great scientific innovations. Take Einstein: he may have had an early fantasy of riding on a light beam, but it was one which led him through rigorous mathematical steps to a vastly coherent and revolutionary conception. His essential iagorophobia is seen in his revulsion at the notion of quantum indeterminacy − his ‘God does not play dice’. Relativity, despite being wholly novel in its time, is often spoken of as a ‘classical’ theory, in the sense that it retains the mathematical precision and predictability of the Newtonian schema which preceded it.

Niels Bohr

Niels Bohr (Wikimedia Commons)

There was a long-standing debate between Einstein and Niels Bohr, the progenitor of the so-called Copenhagen interpretation of quantum theory, which held that different sub-atomic scenarios coexisted in ‘superposition’ until an observation was made and the wave function collapsed. Bohr, it seems to me, with his willingness to entertain wildly counter-intuitive ideas, was a good example of an iclaustrophobe; so it’s hardly surprising that the debate between him and Einstein was so irreconcilable − although it’s to the credit of both that their mutual respect never faltered.

Over to you

Are you an iclaustrophobe or an iagorophobe? A Plato or an Aristotle? A Sheldrake or a Coyne? A Bohr or an Einstein? Or perhaps not particularly either? I’d welcome comments from either side, or neither.

Read All About It (Part 2)

Commuting days until retirement: 285

You’ll remember, if you have paid me the compliment of reading my previous post, that we started with that crumbling copy of the works of Shakespeare, incongruously finding itself on the moon. I diverged from the debate that I had inherited from my brother and sister-in-law, to discuss what this suggested regarding ‘aboutness’, or intentionality. But now I’m going to get back to what their disagreement was. The specific question at issue was this: was the value – the intrinsic merit we ascribe to the contents of that book – going to be locked within it for all time and all places, or would its value perish with the human race, or indeed wither away as a result of its remote location? More broadly, is value of this sort – literary merit – something absolute and unchangeable, or a quality which exists only in relation to the opinion of certain people?

I went on to distinguish between ‘book’ as physical object in time and space, and ‘book’ regarded as a collection of ideas and their expression in language, and not therefore entirely rooted in any particular spatial or temporal location. It’s the latter, the abstract creation, which we ascribe value to. So immediately it looks as if the location of this particular object is neither here nor there, and the belief in absolutism gains support. If a work we admire is great regardless of where it is in time or space, then surely it is great for all times and all places?

But then, in looking at the quality of ‘aboutness’, or intentionality, we concluded that nothing possessed it except by virtue of being created by – or understood by – a conscious being such as a human. So, if it can derive intentionality only through the cognition of human beings, it looks as if the same is true for literary merit, and we seem to have landed up in a relativist position. On this view, to assert that something has a certain value is only to express an opinion, my opinion; if you like, it’s more a statement about me than about the work in question. Any idea of absolute literary merit dissolves away, to be replaced by a multitude of statements reflecting only the dispositions of individuals. And of course there may be as many opinions of a piece of work as readers or viewers – and perhaps more, given changes over time. Which isn’t to mention the creator herself or himself; anyone who has ever attempted to write anything with more pretensions than an email or a postcard will know how a writer’s opinion of their own work ricochets feverishly up and down from self-satisfaction to despair.

The dilemma: absolute or relative?

How do we reconcile these two opposed positions, each of which seems to flow from one of the conclusions in Part 1? I want to try and approach this question by way of a small example; I’m going to retrieve our Shakespeare from the moon and pick out a small passage. This is from near the start of Hamlet; it’s the ghost of Hamlet’s father speaking, starting to convey to his son a flavour of the evil that has been done:

I could a tale unfold whose lightest word
Would harrow up thy soul, freeze thy young blood,
Make thy two eyes, like stars, start from their spheres,
Thy knotted and combined locks to part
And each particular hair to stand on end,
Like quills upon the fretful porpentine.

This conveys a message very similar to something you’ll have heard quite often if you watch TV news:

This report contains scenes which some viewers may find upsetting.

So which of these two quotes has more literary value? Obviously a somewhat absurd example, since one is a piece of poetry that’s alive with fizzing imagery, and the other a plain statement with no poetic pretensions at all (although I would find it very gratifying if BBC newsreaders tried using the former). The point I want to make is that, in the first place, a passage will qualify as poetry through its use of the techniques we see here – imagery contributing to the subtle rhythm and shape of the passage, culminating in the completely unexpected and almost comical image of the porcupine.

Of course much poetry will try to use these techniques, and opinion will usually vary on how successful it is – on whether the poetry is good, bad or indifferent. And of course each opinion will depend on its owner’s prejudices and previous experiences; there’s a big helping of relativism here. But when it happens that a body of work, like the one I have taken my example from, becomes revered throughout a culture over a long period of time – well, it looks as if we have something like an absolute quality here. Particularly so, given that the plays have long been popular, even in translation, across many cultures.

Britain’s Royal Shakespeare Company has recently been introducing his work to primary school children from the age of five or so, and has found that they respond to it well, despite (or maybe because of) the complex language (a report here). I can vouch for this: one of the reasons I chose the passage I did was that I can remember quoting it to my son when he was around that age, and he loved it, being particularly taken with the ‘porpentine’.

So when something appeals to young, unprejudiced children, there’s certainly a case for claiming that it reflects the absolute truth about some set of qualities possessed by our race. You may object that I am missing the point of consigning Shakespeare to the moon – that it would be nothing more than a puzzle to some future civilisation, human-descended or otherwise, and therefore of only relative value. Well, in the last post I brought in the example of the forty thousand year old Spanish cave art, which I’ve reproduced again here.

Cave painting

A 40,000 year old cave painting in the El Castillo Cave in Puente Viesgo, Spain (www.spain.info)

In looking at this, we are in very much the same position as those future beings who are ignorant of Shakespeare. Here’s something whose meaning is opaque to us, and if we saw it transcribed on to paper we might dismiss it as the random doodlings of a child. But I argued before that there are reasons to suppose it was of immense significance to its creators. And if so, it may represent some absolute truth about them. It’s valuable to us as it was valuable to them – but admittedly in our case for rather different reasons. But there’s a link – we value it, I’d argue, because they did. The fact that we are ignorant of what it meant to them does not render it of purely relative value; it goes without saying that there are many absolute truths about the universe of which we are ignorant. And one of them is the significance of that painting for its creators.

We live in a disputatious age, and people are now much more likely to argue that any opinion, however widely held, is merely relative. (Although the view that any opinion is relative sounds suspiciously absolute). The BBC has a long-running radio programme of which most people will be aware, called Desert Island Discs. After choosing the eight records they would want to have with them on a lonely desert island, guests are invited to select a single book, “apart from Shakespeare and the Bible, which are already provided”. Given this permanent provision, many people find the programme rather quaint and out of touch with the modern age. But of course when the programme began, even more people than now would have chosen one of those items if it were not provided. They have been, if you like, the sacred texts of Western culture, our myths.

A myth, as is often pointed out, is not simply an untrue story, but expresses truth on a deeper level than its surface meaning. Many of Shakespeare’s plots are derived from traditional, myth-like stories, and I don’t need to rehearse here any of what has been said about the truth content of the Bible. It will be objected, of course, that since fewer people would now want these works for their desert island, there is a strong case for believing that the sacred, or not-so-sacred, status of the works is a purely relative matter. Yes – but only to an extent. There’s no escaping their central position in the history and origins of our culture. Thinking of that crumbling book, as it nestles in the lunar dust, it seems to me that it contains – if in a rather different way – some of the absolute truths about the universe that are also to be found in the chemical composition of the dust around it. Maybe those future discoverers will be able to decode one but not the other; but that is a fact about them, and not about the Shakespeare.

(Any comments supporting either absolutism or relativism welcome.)

A Few Pointers

Commuting days until retirement: 342

Michelangelo's finger

The act of creation, in the detail from the Sistine Chapel ceiling which provides Tallis’s title

After looking in the previous post at how certain human artistic activities map on to the world at large, let’s move our attention to something that seems much more primitive. Primitive, at any rate, in the sense that most small children become adept at it before they develop any articulate speech. This post is prompted by a characteristically original book by Raymond Tallis I read a few years back – Michelangelo’s Finger. Tallis shows how pointing is a quintessentially human activity, depending on a whole range of capabilities that are exclusive to humans. In the first place, it could be thought of as a language in itself – Pointish, as Tallis calls it. But aren’t pointing fingers, or arrows, obvious in their meaning, and capable of only one interpretation? I’ve thought of a couple of examples to muddy the waters a little.

Pointing – but which way?

TV aerial

Which way does it point?

The first is perhaps a little trivial, even silly. Look at this picture of a TV aerial. If asked where it is pointing, you would say the TV transmitter, which will be in the direction of the thin end of the aerial. But if we turn it sideways, as I’ve done underneath, we find what we would naturally interpret as an arrow pointing in the opposite direction. It seems that our basic arrow understanding is weakened by the aerial’s appearance and overlaid by other considerations, such as a sense of how TV aerials work.

My second example is something I heard about which is far more profound and interesting, and deliciously counter-intuitive. It has to do with the stages by which a child learns language, and also with signing, as used by deaf people. Two facts are needed to explain the context: first is that, as you may know, sign language is not a mere substitute for language, but is itself a language in every sense. This can be demonstrated in numerous ways: for example, conversing in sign has been shown to use exactly the same area of the brain as does the use of spoken language. And, compared with those rare and tragic cases where a child is not exposed to language in early life, and consequently never develops a proper linguistic capability, young children using only sign language at this age are not similarly handicapped. Generally, for most features of spoken language, equivalents can be found in signing. (To explore this further, you could try Oliver Sacks’ book Seeing Voices.) The second fact concerns conventional language development: at a certain stage, many children, hearing themselves referred to as ‘you’, come to think of ‘you’ as a name for themselves, and start to call themselves ‘you’; I remember both my children doing this.

And so here’s the payoff: in most forms of sign language, the word for ‘you’ is to simply point at the person one is speaking to. But children who are learning signing as a first language will make exactly the same mistake as their hearing counterparts, pointing at the person they are addressing in order to refer to themselves. We could say, perhaps, that they are still learning the vocabulary of Pointish. The aerial example didn’t seem very important, as it merely involved a pointing action that we ascribe to a physical object. Of course the object itself can’t have an intention; it’s only a human interpretation we are considering, which can work either way. This sign language example is more surprising because the action of pointing – the intention – is a human one, and in thinking of it we implicitly transfer our consciousness into the mind of the pointer, and attempt to get our head around how they can make a sign whose meaning is intuitively obvious to us, but intend it in exactly the opposite sense.

What’s involved in pointing?

Tallis teases out how pointing relies on a far more sophisticated set of mental functions than it might seem to involve at first sight. As a first stab at demonstrating this, there is the fact that pointing, either the action or the understanding of it, appears to be absent in animals – Tallis devotes a chapter to this. He describes a slightly odd-feeling experience which I have also had, when throwing a stick for a dog to retrieve. The animal is often in a high state of excitement and distraction at this point, and dogs do not have very keen sight. Consequently it often fails to notice that you have actually thrown the stick, and continues to stare at you expectantly. You point vigorously with an outstretched arm: “It’s over there!” Intuitively, you feel the dog should respond to that, but of course it just continues to watch you even more intensely, and you realise that it simply has no notion of the meaning of the gesture – no notion, in fact, of ‘meaning’ at all. You may object that there is a breed of dog called a Pointer, because it does just that – points. But let’s just examine for a moment what pointing involves.

Primarily, in most cases, the key concept is attention: you may want to draw the attention of another to something. Or maybe, if you are creating a sign with an arrow, you may be indicating by proxy where others should go, on the assumption that they have a certain objective. Attention, objective: these are mental entities which we can only ascribe to others if we first have a theory of mind – that is, if we have already achieved the sophisticated ability to infer that others have minds, and a private world, like our own. Young children will normally start to point before they have very much speech (as opposed to language – understanding develops in advance of expression). It’s significant that autistic children usually don’t show any pointing behaviour at this stage. Lack of insight into the minds of others – an under-developed theory of mind – is a defining characteristic of autism.

So, returning to the example of the dog, we can take it that for an animal to show genuine pointing behaviour, it must have a developed notion of other minds, which seems unlikely. The action of the Pointer dog looks more like instinctive behaviour, evolved through the cooperation of packs and accentuated by selective breeding. There are other examples of instinctive pointing in animal species: that of bees is particularly interesting, with the worker ‘dance’ that communicates to the hive where a food source is. This, however, can be analysed down into a sequence of instinctive automatic responses which will always take the same form in the same circumstances, showing no sign of intelligent variation. Chimpanzees can be trained to point, and show some capacity for imitating humans, but there are no known examples of their use of pointing in the wild.

But there is some recent research which suggests a counter-example to Tallis’s assertion that pointing is unknown in animals. This shows elephants responding to human pointing gestures, and it seems there is a possibility that they point spontaneously with their trunks. This rather fits with other human-like behaviour that has been observed in elephants, such as apparently grieving for their dead. Grieving, it seems to me, has something in common with pointing, in that it also implies a theory of mind; the death of another individual is not just a neutral change in the shape and pattern of your world, but the loss of another mind. It’s not surprising that, in investigating ancient remains, we take signs of burial ritual to be a potent indicator of the emergence of a sophisticated civilisation of people who are able to recognise and communicate with minds other than their own – probably the emergence of language, in fact.

Pointing in philosophy

We have looked at the emergence of pointing and language in young children; and the relation between the two has an important place in the history of philosophy. There’s a simple and intuitive notion that language is taught to a child by pointing to objects and saying the word for them – so-called ostensive definition. And it can’t be denied that this has a place. I can remember both of my children taking obvious pleasure in what was, to them, a discovery – each time they pointed to something they could elicit a name for it from their parent. In a famous passage at the start of Philosophical Investigations, Wittgenstein identifies this notion – of ostensive definition as the cornerstone of language learning – in a passage from the writings of St Augustine, and takes him to task over it. Wittgenstein goes on to show, with numerous examples, how dynamic and varied an activity the use of language is, in contrast to the monolithic and static picture suggested by Augustine (and indeed by Wittgenstein himself in his earlier incarnation). We already have our own example in the curious and unique way in which the word ‘you’ and its derivatives are used, and a sense of the stages by which children develop the ability to use it correctly.

The Boyhood of Raleigh

Perhaps the second most famous pointing finger in art: Millais’ The Boyhood of Raleigh

The passage from Augustine also suggests a notion of pointing as a primitive, primary action, needing no further explanation. However, we’ve seen how it relies on a prior set of sophisticated abilities: having the notion that oneself is distinct from the world – a world that contains other minds like one’s own, whose attention may have different contents from one’s own; that it’s possible to communicate meaning by gestures to modify those contents; an idea of how these gestures can be ‘about’ objects within the world; and that there needs to be agreement on how to interpret the gestures, which aren’t always as intuitive and unambiguous as we may imagine. As Tallis rather nicely puts it, the arch of ostensive definition is constructed from these building bricks, with the pointing action as the coping stone which completes it.

The theme underlying both this and my previous post is the notion of how one thing can be ‘about’ another – the notion of intentionality. This idea is presented to us in an especially stark way when it comes to the action of pointing. In the next post I intend to approach that more general theme head-on.

Consciousness 3 – The Adventures of a Naive Dualist

Commuting days until retirement: 408

A long gap since my last post: I can only plead lack of time and brain-space (or should I say mind-space?). Anyhow, here we go with Consciousness 3:

Coronation

A high point for English Christianity in the 50s: the Queen’s coronation. I can remember watching it on a relative’s TV at the age of 5

I think I must have been a schoolboy, perhaps just a teenager, when I was first aware that the society I had been born into supported two entirely different ways of looking at the world. Either you believed that the physical world around us, sticks, stones, fur, skin, bones – and of course brains – was all that existed; or you accepted one of the many varieties of belief which insisted that there was more to it than that. My mental world was formed within the comfortable surroundings of the good old Church of England, my mother and father being Christians by conviction and by social convention, respectively. The numinous existed in a cosy relationship with the powers-that-were, and parents confidently consigned their children’s dead pets to heaven, without there being quite such a Santa Claus feel to the assertion.

But, I discovered, it wasn’t hard to find the dissenting voices. The ‘melancholy long withdrawing roar’ of the ‘sea of faith’ which Matthew Arnold had complained about in the 19th century was still under way, if you listened out for it. Ever since Darwin, and generations of physicists from Newton onwards, the biological and physical worlds had appeared to get along fine without divine support; and even in my own limited world I was aware of plenty of instances of untimely deaths of innocent sufferers, which threw doubt on God’s reputedly infinite mercy.

John Robinson

John Robinson, Bishop of Woolwich (Church Times)

And then in the 1960s a brick was thrown into the calm pool of English Christianity by a certain John Robinson, the Bishop of Woolwich at the time. It was a book called Honest to God, which sparked a vigorous debate that is now largely forgotten. Drawing on the work of other radical theologians, and aware of the strong currents of atheism around him, Robinson argued for a new understanding of religion. He noted that our notion of God had moved on from the traditional old man in the sky to a more diffuse being who was ‘out there’, but considered that this was also unsatisfactory. Any God whom someone felt they had proved to be ‘out there’ “would merely be a further piece of existence, that might conceivably have not been there”. Rather, he says, we must approach from a different angle.

God is, by definition, ultimate reality. And one cannot argue whether ultimate reality exists.

My pencilled zig-zags in the margin of the book indicate that I felt there was something wrong with this at the time. Later, after studying some philosophy, I recognised it as a crude form of Anselm’s ontological argument for the existence of God, which is rather more elegant, but equally unsatisfactory. But, to be fair, this is perhaps missing the point a little. Robinson goes on to say that one can only ask “what ultimate reality is like – whether it… is to be described in personal or impersonal categories.” His book proceeds to develop the notion of God as in some way identical with reality, rather than as a special part of it. One might cynically characterise this as a response to atheism of the form “if you can’t beat them, join them” – hence the indignation that the book stirred in religious circles.

Teenage reality

But, leaving aside the well worn blogging topic of the existence of God, there was the teenage me, still wondering about ‘ultimate reality’, and what on earth, for want of a better expression, that might be. Maybe the ‘personal’ nature of reality which Robinson espoused was a clue. I was a person, and being a person meant having thoughts, experiences – a self, or a subjective identity. My experiences seemed to be something quite other than the objective world described by science – which, according to the ‘materialists’ of the time, was all that there was. What I was thinking of then was the topic of my previous post, Consciousness 2 – my qualia, although I didn’t know that word at the time. So yes, there were the things around us (including our own bodies and brains), our knowledge and understanding of which had been, and was, advancing at a great rate. But it seemed to me that no amount of knowledge of the mechanics of the world could ever explain these private, subjective experiences of mine (and, I assumed, of others). I was always strongly motivated to believe that there was no limit to possible knowledge – however much we knew, there would always be more to understand. Materialism, on the other hand, seemed to embody the idea of a theoretically finite limit to what could be known – a notion which gave me a sense of claustrophobia (of which more in a future post).

So I made my way about the world, thinking of my qualia as the armour to fend off the materialist assertion that physics was the whole story. I had something that was beyond their reach: I was something of a young Cartesian, before I had learned about Descartes. It was another few years before ‘consciousness’ became a legitimate topic of debate in philosophy and science. One commentator I have read dates this change to the appearance of Nagel’s paper What is it Like to be a Bat? in 1974, which I referred to in Consciousness 1. Seeing the debate emerging, I was tempted to preen myself with the horribly arrogant thought that the rest of the world had caught up with me.

The default position

Philosophers and scientists are still seeking ways of assimilating consciousness to physics: such physicalism, although coming in a variety of forms, is often spoken of as the default, orthodox position. But although my perspective has changed quite a lot over the years, my fundamental opposition to physicalism has not. I am still at heart the same naive dualist I was then. But I am not a dogmatic dualist – my instinct is to believe that some form of monism might ultimately be true, but beyond our present understanding. This consigns me to another much-derided category of philosophers – the so-called ‘mysterians’.

But I’d retaliate by pointing out that there is also a bit of a vacuum at the heart of the physicalist project. Thoughts and feelings, say its supporters, are just physical things or events, and we know what we mean by that, don’t we? But do we? We have always had the instinctive sense of what good old, solid matter is – but you don’t have to know any physics to realise there are problems with the notion. If something were truly solid it would entail that it was infinitely dense – so the notion of atomism, starting with the ancient Greeks, steadily took hold. But even then, atoms can’t be little solid balls, as they were once imagined – otherwise we are back with the same problem. In the 20th century, atomic physics confirmed this, and quantum theory came up with a whole zoo of particles whose behaviour entirely conflicted with our intuitive ideas gained from experience; and this is as you might expect, since we are dealing with phenomena which we could not, in principle, perceive as we perceive the things around us. So the question “What are these particles really like?” has no evident meaning. And, approaching the problem from another standpoint, where psychology joins hands with physics, it has become obvious that the world with which we are perceptually familiar is an elaborate fabrication constructed by our brains. To be sure, it appears to map on to the ‘real’ world in all sorts of ways, but has qualities (qualia?) which we supply ourselves.

Truth

So what true, demonstrable statements can be made about the nature of matter? We are left with the potently true findings – true in the sense of explanatory and predictive power – of quantum physics. And, when you’ve peeled away all the imaginative analogies and metaphors, these can only be expressed mathematically. At this point, rather unexpectedly, I find myself handing the debate back to our friend John Robinson. In a 1963 article in The Observer newspaper, heralding the publication of Honest to God, he wrote:

Professor Hermann Bondi, commenting in the BBC television programme, “The Cosmologists” on Sir James Jeans’s assertion that “God is a great mathematician”, stated quite correctly that what he should have said is “Mathematics is God”. Reality, in other words, can finally be reduced to mathematical formulae.

In case this makes Robinson sound even more heretical than he in fact was, I should note that he goes on to say that Christianity adds to this “the deeper reliability of an utterly personal love”. But I was rather gratified to find the concluding thoughts of this post anticipated by the writer I quoted at its beginning.

I’m not going to speculate any further into such unknown regions, or into religious belief, which isn’t my central topic. But I’d just like to finish with the hope that I have at least suggested that the ‘default position’ in current thinking about the mind is anything but natural or inevitable.

Consciousness 2 – The Colour of Nothing

Commuting days until retirement: 437

When it comes down to basics, is there just one sort of thing, or are there two sorts of thing? (We won’t worry about the possibility of even more than that.) Anyone who has done an elementary course in philosophy will know that Descartes’ investigations led him to believe that there were two sorts: mental things and physical things, and that he thus gave birth to the modern conception of dualism.

Stone lion

Lifeless

As scientific knowledge has progressed over the centuries since, it has put paid to all sorts of beliefs in mystical entities which were taken to be explanations for how things are. A good example would be vitalism, the belief in a ‘principle of life’ – something that a real lion would possess and a stone lion would not. Needless to say, we now know that the real lion would have DNA, a respiratory system and so on, of whose modes of operation we have much understanding – and so the principle of life has withered away, as surplus to needs.

Descartes’ mental world, however, has been harder to kill off. There seems to be nothing that scientific theory can grasp which is recognisable as the ‘something it is like’ that I discussed in my previous post. It’s rather like one of those last houses to go as Victorian terraces are cleared for a new development, with Descartes as the obstinate old tenant who stands on his rights and refuses to be rehoused. But the philosophical bulldozers are doing their best to help the builders of science, in making way for their objectively regular modern blocks.

Gilbert Ryle led the charge in 1949, in his book The Concept of Mind. He famously characterised dualism as the doctrine of ‘the Ghost in the Machine’: to suppose that there was some mystical entity within us corresponding to our mind was to be misled by language into making a ‘category mistake’. Ryle’s standpoint fits more or less into the area of behaviourism, also previously discussed. Then, in the 1950s, identity theory arose. The contents of your mind – colours, smells – may seem different from all that mushy stuff in your head and its workings, but in fact they are just the same thing, if perhaps seen from a different viewpoint. There’s a name, the ‘Morning Star’, for that bright star that can be seen at dawn, and another one, the ‘Evening Star’, for its equivalent at dusk; but with a little further knowledge you discover that they are one and the same.

Nowadays, while still around, the identity theory is somewhat mired in technical philosophical debate. Meanwhile brain science has made huge strides, and at the same time computing science has become mainstream. So on the one hand, it’s tempting to see the mind as the software of the brain (functionalism, very broadly), or perhaps just to attempt to show that with enough understanding of the wiring of those tightly packed nerve fibres, and whatever is chugging around them, everything can be explained. This last approach – materialism, or in its modern, science-aware form, physicalism – can take various forms, one of them being the identity theory. Or you may consider, for example, that such mental entities as beliefs, or pains, may be real enough, but are ideally explained as – or reduced to – brain/body functions. This would make you a reductionist.

But you may be more radical and simply say that these mental things don’t really exist at all: we are just kidded into thinking they do by our habitual way of talking about ourselves – folk psychology, as it’s often referred to. Then you would be an eliminativist – and it’s the eliminativists I’d like to get my philosophical knife into here. Although I don’t agree with old Descartes on that much (I’ll expand in the next post), I have a certain affinity for him, and I’m willing to join him in his threatened, tumbledown house, looking out at the bulldozers ranged across the building site of 21st century Western philosophy.

Getting rid of qualia – or not

Acer leaves

My acer leaves

I think it would be fair to say that the arch-eliminativist is one Daniel Dennett, and it’s his treatment of qualia that I’d like to focus on. Qualia (singular quale) are those raw, subjective elements of which our sensory experience is composed (or as Dennett would have it, we imagine it to be composed): the vivid visual experience I’m having now of the delicately coloured acer leaves outside my window; or that smell when I burn the toast. I’m thinking of Dennett’s treatment of the topic to be found in his 1988 paper Quining Qualia (QQ), and in Qualia Disqualified, Chapter 12 of his 1991 book Consciousness Explained (CE: with a great effort I refrain from commenting on the title). Now his task is to show that, when it comes to mental things, all that grey matter and its workings is all there is. But this is a problem, because when we look inside people’s skulls we don’t ever find the colour of acer leaves or the smell of burnt toast.

Dennett quotes an introductory book on brain science: ‘”Color” as such does not exist in the world: it exists only in the eye and brain of the beholder.’ But as he rightly points out, however good this book is on science, it has its philosophy very muddled. For one thing, the ‘eye and brain of the beholder’ are themselves part of the world – the world in which colour, we are told, does not exist. And eyes and brains have colours, too. But not like the acer leaves I’m looking at. There’s only one way to get to where Dennett wants to be: he has to strike out the qualia from the equation. They are really not there at all. That acer-colour quale I think I’m experiencing is non-existent. Really?

Argument 1: The beetle in the box

Maybe there is some help available to Dennett from one of the philosophical giants – Wittgenstein. Dennett calls it in, anyway, as support for the position that ‘the very idea of qualia is nonsense’ (CE, p.390). There is a famous passage in Wittgenstein’s Philosophical Investigations where he talks of our private sensations in an analogy:

Suppose everyone had a box with something in it: we call it a “beetle”. No one can look into anyone else’s box, and everyone says he knows what a beetle is only by looking at his beetle. Here it would be quite possible for everyone to have something different in his box … The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. No, one can ‘divide through’ by the thing in the box; it cancels out, whatever it is.

I don’t see how this does help Dennett. It is part of Wittgenstein’s exposition known as the private language argument. He is seeking to show that language is a necessarily public activity, and that the notion of a private language known only to its one ‘speaker’ is incoherent. I think it’s significant that the example of a sensation he uses is pain, as you’ll see if you follow the link. Elsewhere Wittgenstein considers whether someone might have a private word for one of his own sensations. But, like the pain, this is just a sensation, and there’s no publicly viewable aspect to it. But consider my acer leaves: my wife might come and join me in admiring them. We have a publicly available referent for our discussion, and if I ask her about the quality of her own sensation of the colour, she will give every appearance of knowing what I am talking about. True, I can never tell if her sensation is the same as mine, or whether it even makes sense to ask that. Nor can I tell for certain whether she really has the sensation, or is simply behaving as if she did. But I’ll leave that to Wittgenstein. His argument doesn’t seek to deny that I am acquainted with my ‘beetle’ – only that it ‘has no place in the language game’. In other words, my wife and I can discuss the acer leaves and what we think of them, but we can’t discuss the precise nature of the sensation they give me – my quale. My wife would have nothing to refer to when speaking of it. In Wittgenstein’s terms, we talk about the leaves and their colour, but our intrinsically private sensations drop out of the discussion. Does this mean the qualia don’t exist? Just a moment – I’ll have another look… no, mine do, anyway. Sorry, Dan.

Argument 2: Grown-up drinking

Bottled Qualia

Another strategy open to Dennett is to point out how our supposed qualia may seem unstable in certain ways, and subject to change. He notes how beer is an acquired taste, seeming pretty unpleasant to a child, who may well take it up with gusto later in life. Can the adult be having the same qualia as the child, if the response is so different?

This strikes a chord with me. I started to sample whisky when still a teenager because it made me feel mature and sophisticated. Never mind the fact that it was disgusting – much more important to pretend to be the sort of person I wanted to be. The odd thing is – and I have often wondered about this – that I think I can remember the moment of realisation that eventually came: “Hey – I actually like this stuff!”

So what happened? Did something about these particular qualia suddenly change, rather as if I one day licked a bar of soap and found that it tasted of strawberries? Clearly not. So maybe we could say that, although it tasted the same, it was just that I started to react to it in a different way – some neural pathway opened up in my brain that engendered a different response. There are difficulties with that idea. As Dennett puts it, in QQ:

For if it is admitted that one’s attitudes towards, or reactions to, experiences are in any way and in any degree constitutive of their experiential qualities, so that a change in reactivity amounts to or guarantees a change in the property, then those properties, those “qualitative or phenomenal features,” cease to be “intrinsic” properties, and in fact become paradigmatically extrinsic, relational properties.

He’s saying, and I agree, that we can’t mix up subjective and objective properties in this way, otherwise the subjective elements – the qualia – are dragged off their pedestal of private ineffability and are rendered into ordinary, objectively viewable ones. He goes on to argue, with other examples, that the concept of qualia inevitably leads to confusions of this sort, and that we can therefore banish the confusion by banishing the qualia.

So is there another way out of the dilemma, which rescues them? As with the acer leaves, my whisky-taste qualia are incontrovertibly there. Consider another type of subjective experience – everyone probably remembers something similar. You have been working, maybe in an office, for an hour or two, and suddenly an air conditioning fan is turned off. It was a fairly innocuous noise, and although it was there you simply weren’t aware of it. But now that it’s gone, you’re aware that it’s gone. As you may know, the objective, scientific term for this is ‘habituation’; your system ceases to respond to a constant stimulus. But this time I am not going to make the mistake of mixing this objective description with the subjective one. A habituated stimulus is simply removed from consciousness – your subjective qualia do change as it fades. And something like this, I would argue, is what was happening with the whisky. To a mature palate, it has a complex flavour, or to put it another way, all sorts of different, pleasurable individual qualia which can be distinguished. These put the first, primary, sharp ‘kick’ in the flavour into a new context. But probably that kick is all that the immature version of myself was experiencing. Gradually, my qualia did change as I habituated sufficiently to that kick to allow it to recede a little and allow in the other elements. There had to come some point at which I made up my mind that the stuff was worth drinking for its own sake, and not just as a means to enhance my social status.

Argument 3: Torn cardboard

Torn cardboard

Matching halves

Not convinced? Let’s look at another argument. This starts with an unexpected – and ingenious – analogy: the Rosenbergs, Soviet spies in the US in the cold war era, had a system to enable spies to verify one another’s identity: each had a fragment of cardboard packaging, originally torn halves of the same jelly package (US brand name Jell-O). So the jagged tear in each piece would perfectly and uniquely match the other. Dennett is equating our perceptual apparatus with one of the cardboard halves; and the characteristics of the world perceived with the other. The two have co-evolved. Anatomical investigation shows how birds and bees, whose nourishment depends on the recognition of flowers and berries, have colour perception, while primarily carnivorous animals – dogs and cats, for example – do not. But at the same time plants have evolved flower and berry colour to enable pollination or seed dispersal by the bees or birds. The two sides evolve, matching each other perfectly, like the cardboard fragments. And of course we are omnivores, and have colour perception too. When hunting was scarce, our ability to recognise the colour of a ripe apple could have been a life-and-death matter. And so it would have been for the apple species too, as we unwittingly propagated its seeds. As he puts it:

Why is the sky blue? Because apples are red and grapes are purple, not the other way around. (CE p378)

A lovely idea, but what’s the relevance? His deeper intention with the torn cardboard analogy is to focus on the fact that, if we look at just one of the halves on its own, we are hard put to see anything but a piece of rubbish without purpose or significance – it is given validity only by its sibling. Dennett seeks to demote colour experiences, considered on their own, to a similarly nullified status. Here’s a crucial passage. ‘Otto’ is Dennett’s imaginary defender of qualia – for present purposes he’s me:

And Otto can’t say anything more about the property he calls pink than “It’s this!” (taking himself to be pointing “inside” at a private, phenomenal property of his experience). All that move accomplishes (at best) is to point to his own idiosyncratic color-discrimination state, a move that is parallel to holding up a piece of Jell-O box and saying that it detects this shape property. Otto points to his discrimination-device, perhaps, but not to any quale that is exuded by it, or worn by it, or rendered by it, when it does its work. There are no such things. (CE p383 – my italics).

I don’t think Dennett earns the right to arrive at his concluding statement. There seem to me to be two elements at work here. One is an appeal to the Wittgensteinian beetle argument we considered (‘…taking himself to be pointing “inside”…’), which I tried to show does not do Dennett’s work for him. The second appears to be simply a circular argument: if we decide to assert that Otto is not referring to any private experience but to something objective (a ‘color-discrimination state’) then we have only banished his qualia by virtue of this assertion. The fact that we can’t be aware of them for ourselves does not change this. The function of the cardboard fragment is an objective one, inseparable from its identification of its counterpart, just as colour perception as an objective function is inseparable from how it evolved. But there’s nothing about the cardboard that corresponds to subjective qualia – the analogy fails. When I think of my experience of the acer leaves I am not thinking of the ‘color-discrimination state’ of my brain – I don’t know anything about that. In fact it’s only from the science I have been taught that I know that there is any such thing. (This final notion nods to another well-known argument – this time in favour of qualia – Frank Jackson’s ‘knowledge’ argument – I’ll leave you to follow the link if you’re interested.)

But this being just a blog, and this post having already been delayed too long, I’ll content myself with having commented on just three arguments from one physicalist philosopher. And so I am still there with Descartes in his tottering house, resisting its demolition. In the next post I’ll enlarge on why I am so foolhardy and perverse.

Consciousness 1 – Zombies

Commuting days until retirement: 451

Commuting at this time of year, with the lengthening mornings and evenings, gives me a chance to lose myself in the sight of tracts of England sliding across my field of vision – I think of Philip Larkin in The Whitsun Weddings:  ‘An Odeon went past, a cooling tower, and someone running up to bowl…’  (His lines tend to jump into my mind like this). It’s tempting to enlarge a scene like this into a simile for life, like the one that Larkin’s poem leads into. Of course we are not just passive observers, but the notion of life as a film show – a series of scenes progressing past your eyes – has a certain curious attractiveness.

A rather more spectacular view than any I get on my train journey.
Photo: Yamaguchi Yoshiaki. Wikimedia Commons

Now imagine that, as I sit in the train, I am not quite a human being as you think of one. Instead I’m a cleverly constructed robot who appears in every way like a human but, being a robot, has something important missing. The objects outside the train form images on some sensor in each of my pseudo-eyes, and the results may then be processed by successive layers of digital circuitry which perform ever more sophisticated interpretative functions. Perhaps these resolve the light patterns that entered my ‘eyes’ into discrete objects, and trigger motor functions which cause my head and eyes to swivel and follow them as they pass. Much, in fact, like the real me, idly watching the scenes sliding by.

Now let’s elaborate our robot to have capabilities beyond sitting on a train and following the objects outside; now it can produce all the behaviour that any human being can. This curious offspring of a thought-experiment is what philosophers refer to as a zombie – not the sort in horror films with the disintegrating face and staring eyeballs, but a creature who may be as well behaved and courteous as any decent human being. The only difference is that, despite (we presume) the brain churning away as busily as anyone else’s, there are no actual sensations in there – none of those primary, immediate experiences with a subjective quality: the fresh green of a spring day, or the inner rapture of an orgasm. So what’s different? There are a number of possibilities, but, as you will have guessed, the one I am thinking of is that inner, subjective world of experience we all have, but assume that machines do not. This is well expressed by saying that there’s something that it is like to be me, but not something that it’s like to be a machine.(1) The behaviour is there all right, but that’s all. In the phrase I rather like, the lights are on but nobody’s at home.

Many people who think about the question nowadays, especially those of a scientific bent, tend to conclude that, of course, we must ultimately be nothing but machines of one sort or another. We have discovered many – perhaps most – of the physical principles upon which our brains and bodies work, and we have traced their evolution over time from simple molecular entities. So there we are – machines. But conscious machines – machines that there is something it is like to be? It has frequently been debated whether or not such a machine with all these capabilities would ipso facto be conscious – whether it would have a mind. Or, in other words, whether we could in principle build a conscious machine. (There are some who speculate that we may already have done so.)

One philosophical response to this problem is that of behaviourism, a now justly neglected philosophical position.(2) If you are a behaviourist you believe that your mind, and your mental activity – your thoughts – are defined in terms of your behaviour. The well-known Turing Test constitutes a behaviourist criterion, since it is based on the principle that a computer system whose responses are indistinguishable from those of a human is taken for all practical purposes to have a mind. (I wrote about Turing a little while ago – but here I part company with him.) And for a behaviourist, the phrase ‘What it is like to be…’ can have no meaning, or at best a rather convoluted one based on what we say or do; but its meaning is plain and obvious to you or me. It’s difficult to resist repeating the old joke about behaviourism: two post-coital behaviourists lie in bed together, and one says ‘That was great for you – how was it for me?’ But I take the view of behaviourism that the joke implies – it’s absurd.

Behaviourists, however, can’t be put down as burglars or voyeurs: they don’t peer into the lighted windows to see what’s going on inside. It’s enough for them that the lights are on. For them the concept of a zombie is either meaningless or a logical impossibility.  But there is another position on the nature of the mind which is much more popular in contemporary thought, but which has a different sort of problem with the notion of a zombie. I’m thinking of eliminative materialism.

Well, as I write this post, I feel it extending indefinitely as more ideas churn through that machine I refer to as my brain. So to avoid it becoming impossibly long, and taking another three weeks to write, I’ll stop there, and just entitle this piece Part 1. Part 2 will take up the topic of eliminative materialism.

In the meantime I’d just like to leave one thought: I started with a snatch of Philip Larkin, and I’ve always felt that poetry is in essence a celebration of conscious experience; without consciousness I don’t believe that poetry would be possible.


(1) The phrase is mainly associated with Thomas Nagel, and his influential 1974 paper What is it Like to be a Bat? But he in turn attributes it to the English philosopher Timothy Sprigge.

(2) I’m referring to the philosophical doctrine of behaviourism – distinct from, but related to the psychological one – J B Watson, B F Skinner et al.

Right Now

Commuting days until retirement: 491

Right now I am at home – another day off – waiting for some gravel to be delivered for our front drive. Right now, you are reading this (well I hope somebody is, or will). You can see I am having trouble with tenses here, because your ‘now’ is not my ‘now’. I know you are not reading this now, because I haven’t published it. But you know you are reading it now.

Blackboard

This all might seem a bit trivial and pointless, but stay with me for a bit. The notion I am circling around is the curious status of this concept of now. Let’s approach it another way: imagine yourself back at school, in a physics lesson. This may seem either an enticing or an entirely appalling prospect to you, but please indulge my little thought experiment. The teacher has chalked a diagram up on the blackboard (well, that was the cutting edge of presentation technology when I was at school). There’s the diagram up on the right. t1 and t2 obviously represent two instants of time for the ball, in its progress down the slope.

Somehow you are managing to stay awake, just. But in your semi-stupor you find yourself putting up your hand.

‘Yes?’, says the teacher irritably, wondering how there could be any serious question to be asked so far, and expecting something entirely facetious.

‘Er – which one is now?’ you ask. The teacher could perhaps consider your question carefully, for the sake of any deep conceptual problem concealed within it, but instead she wonders why she bothered to get up this morning.

Not in the curriculum

Warwick University

However there is a serious philosophical issue here – admittedly not in the physics curriculum, to be fair to the teacher. And the reason it’s not in the curriculum is that the concept of ‘now’ is alien to physics. ‘Now’ is entirely confined to our subjective perception of the world. Think of the earth in its nascent state, a ball of molten lava and all that. Does it even make sense to imagine there was a ‘now’ then? We can say that this red-hot lava whirlpool formed before that one did – but we can’t say that either of them is forming now. Well of course not, in the obvious sense – it was four and a half billion years ago. But you could say that there was a time when your first day at school was ‘now’; and you can also say that there was a time when the execution of Marie Antoinette was ‘now’ – for somebody, that is, even perhaps for the unfortunate woman herself. But as for the formation of the earth – there was no one around for whom it could be a ‘now’. (Small green men excepted.)  You’re thinking of it as a ‘now’, I expect, but that’s because in your imagined scenario you are in fact there, as some sort of implicit presence suspended in space, viewing the proceedings.

It’s odd to try and visualise an exclusively objective world – one without a point of view – “The View from Nowhere” as the philosopher Thomas Nagel has put it; it’s the title of one of his books. In such a world there is no ‘now’, and therefore no past and no future, but only a ‘before’ and ‘after’ relative to any arbitrary point in time. And I was always struck by the way that T. S. Eliot, in Burnt Norton, from his Four Quartets, associates ‘time past and time future’ with the poetic and spiritual, and ‘time before and time after’ with the prosaic and mundane.

Language

Our language – like most languages – is built around the ‘now’, in that tenses correspond to past and future. Without the subjective sense of a ‘now’, language would surely work in a very different way. Interestingly, there is an example possibly relevant to this from the Pirahã people of the Amazon, who have been studied by the controversial linguist Daniel Everett. Their relationship to the passage of time seems to be different from ours – Everett claims that they have no real sense of history or of planning for the future, and so live in a kind of perpetual present. Correspondingly, inflections in their utterances are related not to temporal comparisons, like our tenses, but to the surrounding circumstances – e.g. whether something being described is right here, or is known first-hand, or has been reported by some other person. (Everett originally went to them as a Christian missionary, but was dismayed to find that they had no interest at all in Jesus unless Everett could claim to have met him.)

So all this would seem to support a philosopher I remember reading a long time ago. I don’t remember who he was, and can no longer find the passage. But I remember the sentence “Our language has a tiresome bias in favour of time.” I think this man was from the old-school style of linguistic philosophy, which held that most philosophical problems can be resolved into confusions caused by our use of language – and so time concepts were just another example of this. But I don’t think this is at all adequate as an approach, Pirahã or no Pirahã. However my language works, I would still have a sense of the differing character of past events, which cannot be changed, and future events, which mostly cannot be known – and of course a present, a now, which is the defining division between them. I would be surprised if the experience of a Pirahã person did not include that.

Space and time

How about another attack on the problem – to make an analogy between the spatial and the temporal? The spatial equivalent of ‘now’ is ‘here’. And there doesn’t seem to be any perplexity about that. ‘Here’ is where I am, er, now. Oh dear. Maybe these aren’t so easy to separate out. Perhaps ‘here’ seems simpler because we each have our own particular ‘here’. It’s where our body is, and that’s easily seen by others. And we can change it at will. But we all share the same ‘now’, and there’s not a lot we can do to change that. There is, of course, the remote possibility of relativistic time travel. I could in some sense change my ‘now’ relative to yours – but when I come back to earth I am back in the same predicament – just one that differs slightly in degree.
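To put a rough number on that ‘differs slightly in degree’: the drift between a traveller’s ‘now’ and ours follows the standard time-dilation relation of special relativity, tau = t × √(1 − v²/c²). Here is a minimal sketch of the arithmetic in Python – my own illustration, with invented example speeds, not anything drawn from a particular source:

import math

def traveller_years(earth_years, speed_as_fraction_of_c):
    """Years elapsed for a traveller moving at a constant speed (expressed as a
    fraction of the speed of light) while earth_years pass for those who stay put.
    Standard special-relativistic time dilation: tau = t * sqrt(1 - (v/c)**2)."""
    return earth_years * math.sqrt(1.0 - speed_as_fraction_of_c ** 2)

# Illustrative figures only. An airliner at roughly 250 m/s: the two 'nows'
# drift apart by only some microseconds over a whole year.
print(traveller_years(1.0, 250.0 / 299_792_458.0))

# A hypothetical ship at 99% of light speed: while one year passes on Earth,
# only about a seventh of a year passes on board.
print(traveller_years(1.0, 0.99))

The point, for what follows, is only that the traveller rejoins our shared calendar having aged less; it says nothing yet about whether our private ‘nows’ coincide.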

But do we all share the same ‘now’? Here’s a slightly more disturbing thought. I have made out that my own sense of ‘now’ is confined to my own private experience, and doesn’t exist in the world ‘out there’. And the same is true of you, of course. I can see and hear you, and I find from your behaviour and the things you say that you are experiencing the same, contemporaneous events that I am. But it’s not your private experience, or your ‘now’ that I am seeing – only your body. And your body – including of course your brain – is very much a part of the world ‘out there’. It’s only your private experience which isn’t, and I can’t experience that, by definition. So how do I know that your ‘now’ is the same as mine? Do we each float around in our own isolated time bubbles?

I think perhaps there is a solution of some sort to this. If your ‘now’ is different from mine, it must therefore be either before it or after it. Let’s suppose it’s an hour after. Then if my ‘now’ is at 4.30, yours is ‘now’ at 5.30. But of course there’s a problem with that second ‘now’. It doesn’t refer to actual time, but to a sort of meta-time by which we mark out time itself. And how could this make sense? It’s rather like asking “how fast does time flow?” when there is no secondary, or meta-time, by which we could measure the ‘speed’ of normal time.

So perhaps this last idea crumbles into nonsense. But I still believe that, in the notion of ‘now’, there is a deep problem, which is one aspect of the more general mystery of consciousness. Do you agree? Most don’t.

But right now, the gravel is here, and is spread over the drive. So at least I’ve managed to do something more practical and down-to-earth today than write this post. And that’s a little bit of my past – or what is now my past – that I can be proud of.