A Passage of Time

I have let rather too much time elapse since my last post. What are my excuses? Well, I mentioned in one post how helpful my commuter train journeys were in encouraging the right mood for blog-writing – and since retiring such journeys are few and far between. To get on the train every time I felt a blog-post coming on would not be a very pension-friendly approach, given current fares. My other excuse is the endless number of tasks around the house and garden that have been neglected until now. At least we are now starting to create a more attractive garden, and recently took delivery of a summerhouse: I am hoping that this could become another blog-friendly setting.

Time travel inventor (tvtropes.org)

But since a lapse of time is my starting point, I could ease my way back in by thinking again about the nature of time. Four years back (Right Now, 23/3/13) I speculated on the issue of why we experience a subjective ‘now’ which doesn’t seem to have a place in objective physical science. Since then I’ve come across various ruminations on the existence or non-existence of time as a real, out-there component of the world’s fabric. I might have more to say about this in the, er, future – but what appeals to me right now is the notion of time travel. Mainly because I would welcome a chance to deal with my guilty feelings by going back in time and plugging the long gap in this blog over the past months.

I recently heard about an actual time travel experiment, carried out by no less a person than Stephen Hawking. In 2009, he held a party for time travellers. What marked it out as such was that he sent out the invitations after the party took place. I don’t know exactly who was invited; but, needless to say, the canapés remained uneaten and the champagne undrunk. I can’t help feeling that if I’d tried this, and no one had turned up, my disappointment would rather destroy any motivation to send out the invitations afterwards. Should I have sent them anyway, finally to prove time travel impossible? I can’t help feeling that I’d want to sit on my hands, and leave the possibility open for others to explore. But the converse of this is the thought that, if the time travellers had turned up, I would be honour-bound to despatch those invites; the alternative would be some kind of universe-warping paradox. In that case I’d be tempted to try it and see what happened.

Elsewhere, in the same vein, Hawking has remarked that the impossibility of time travel into the past is demonstrated by the fact that we are not invaded by hordes of tourists from the future. But there is one rather more chilling explanation for their absence: namely that time travel is theoretically possible, but we have no future in which to invent it. Since last year that unfortunately looks a little more likely, given the current occupant of the White House. That such a president is possible makes me wonder whether the universe is already a bit warped.

Should you wish to escape this troublesome present and flee into a different future, whatever it might hold, it can of course be done. As Einstein famously showed in 1905, it’s just a matter of inventing for yourself a spaceship that can accelerate to nearly the speed of light and taking a round trip in it. And of course this isn’t entirely science fiction: astronauts and satellites – even airliners – regularly take trips of microseconds or so into the future; and indeed our now familiar satnav devices wouldn’t work if this effect weren’t taken into account.
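
Out of curiosity, here’s the back-of-an-envelope arithmetic in a few lines of Python – a minimal sketch, with illustrative round numbers of my own choosing (the 99.5%-of-light-speed trip, the GPS orbital speed), and ignoring the larger gravitational effect that also acts on satellite clocks:

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def lorentz_factor(v):
    """Time dilation factor: gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A round trip at 99.5% of light speed: for every year that passes
# on board, roughly ten pass back on Earth.
print(f"gamma at 0.995c: {lorentz_factor(0.995 * C):.1f}")   # ~10.0

# A GPS satellite orbits at roughly 3.9 km/s. Special relativity
# alone puts its clock a few microseconds per day behind ours --
# one of the corrections satnav systems have to make.
lag = (lorentz_factor(3_900.0) - 1.0) * 86_400 * 1e6
print(f"GPS clock lag: {lag:.1f} microseconds per day")      # ~7.3
```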

But the problem of course arises if you find the future you’ve travelled to isn’t one you like. (Trump, or some nepotic Trumpling, as world president? Nuclear disaster? Both of these?) Whether you can get back again by travelling backward in time is not a question that’s really been settled. Indeed, it’s the ability to get at the past that raises all the paradoxes – most famously what would happen if you killed your grandparents or stopped them getting together.

Marty McFly with his teenage mother

This is a furrow well-ploughed in science fiction, of course. You may remember the Marty McFly character in the film Back to the Future, who embarks on a visit to the past enabled by his mad scientist friend. It’s one way of escaping from his dysfunctional, feckless parents, but having travelled back a generation in time he finds himself being romantically approached by his teenage mother. He manages eventually to redirect her towards his young father, but on returning to the present finds his parents transformed into an impossibly hip and successful couple.

Then there’s Ray Bradbury’s story A Sound of Thunder, where tourists can travel back to hunt dinosaurs – but only those which were established to have been about to die in any case, and any bullets must then be removed from their bodies. As a further precaution, the would-be hunters are kept away from the ground by a levitating path, to prevent any other paradox-inducing changes to the past. One bolshie traveller, however, breaks the rules, reaches the ground, and ends up with a crushed butterfly on the sole of his boot. On returning to the present he finds that the language is subtly different, and that the man who had been the defeated fascist candidate for president has now won the election. (So, thinking of my earlier remarks, could some prehistoric butterfly crusher, yet to embark on his journey, be responsible for the current world order?)

My favourite paradox is the one portrayed in a story called The Muse by Anthony Burgess, in which – to leave a lot out – a time travelling literary researcher manages to meet William Shakespeare and question him on his work. Shakespeare’s eye alights on the traveller’s copy of the complete works, which he peruses and makes off with, intending to mine it for ideas. This seems like the ideal solution for struggling blog-writers like me, given that, having travelled forward in time and copied what I’d written on to a flash drive, I could return to the present and paste it in here. Much easier.

But these thoughts put me in mind of a more philosophical issue with time which has always fascinated me – namely whether it’s reversible. We know how to travel forward in time; but when it comes to travelling backward there are various theories as to how it might be done, and no-one is very sure. Does this indicate a fundamental asymmetry in the way time works? Of course this is a question that has been examined in much greater detail in another context: the second law of thermodynamics, we are told, says it all.

Let’s just review those ideas. Think of running a film in reverse. Might it show anything that could never happen in real, forward time? Well of course if it were some sort of space film which showed planets orbiting the sun, or a satellite around the earth, then either direction is possible. But, back on earth, think of all those people you’d see walking backwards. On the face of it, people can walk backwards, so what’s the problem? Well, here’s one of many that I could think of: imagine that one such person is enjoying a backward country walk on a path through the woods. As she approaches a branch protruding from a sapling beside the path, the branch suddenly whips sideways towards her as if to anticipate her passing, and then, laying itself against her body, unbends itself as she backward-walks by, and has returned to its rest position as she recedes. Possible? Obviously not. But is it?

I’m going to argue against the idea that there is a fundamental ‘arrow of time’: despite the evident truth of the laws of thermodynamics and the irresistible tendency we observe toward increasing disorder, or entropy, there’s nothing ultimately irreversible about physical processes. I’ve deliberately chosen an example which seems to make my case harder to maintain, to see if I can explain my way out of it. You will have had the experience of walking by a branch which projects across your path, noticing how your body bends it forwards as you pass, and seeing it spring back to its former position as you continue on. Could we envisage a sequence of events in the real world where all this happened in reverse?

Before answering that I’m going to look at a more familiar type of example. I remember being impressed many years ago by an example of the type of film I mentioned, illustrating the idea of entropy. It showed someone holding a cup of tea, and then letting go of it, with the expected results. Then the film was reversed. The mess of spilt tea and broken china on the floor drew itself together, and as the china pieces reassembled themselves into a complete cup and saucer, the tea obediently gathered itself back into the cup. As this process completed the whole assembly launched itself from the floor and back into the hands of its owner.

Obviously, that second part of the sequence would never happen in the real world. It’s an example of how, left to itself, the physical world will always progress to a state of greater disorder, or entropy. We can even express the degree of entropy mathematically, using information theory. Case closed, then – apart, perhaps, from biological evolution? And even then it can be shown that if some process – like the evolution of a more sophisticated organism – decreases entropy, it will always be balanced by a greater increase elsewhere; and so the universe’s total amount of entropy increases. The same applies to our own attempts to impose order on the world.
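
Since I’ve just claimed that entropy can be expressed mathematically, here’s a toy illustration in Python using Shannon’s information-theoretic measure. I should stress this is a deliberately simplified stand-in of my own, not the thermodynamic calculation for a real teacup:

```python
import math
from collections import Counter

def shannon_entropy(arrangement):
    """Shannon entropy in bits: H = -sum of p_i * log2(p_i)."""
    counts = Counter(arrangement)
    n = len(arrangement)
    return 0.0 - sum((c / n) * math.log2(c / n) for c in counts.values())

# A perfectly 'ordered' arrangement carries no surprise at all...
print(shannon_entropy("aaaaaaaaaaaaaaaa"))   # 0.0 bits

# ...while a thoroughly mixed one has the maximum entropy possible
# for a four-symbol alphabet.
print(shannon_entropy("abcdabdcbadcacbd"))   # 2.0 bits
```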

So how could I possibly plead for reversibility of time? Well, I tend to think that this apparent ‘arrow’ is a function of our point of view as limited creatures, and our very partial perception of the universe. I would ask you to imagine, for a moment, some far more privileged being – some sort of god, if you like – who is able to track all the universe’s individual particles and fields, and what they are up to. Should this prodigy turn her attention to our humble cup of tea, what she saw would, I think, be very different from the scene as experienced through our own senses. From her perspective, the clean lines of the china cup which we see would become less defined – lost in a turmoil of vibrating molecules, themselves constantly undergoing physical and chemical change. The distinction between the shattered cup on the floor and the unbroken one in the drinker’s hands would be less clear.

Colour blindness test

What I’m getting at is the fact that what we think of as ‘order’ in our world is an arrangement that seems significant only from one particular point of view determined by the scale and functionality of our senses: the arrangement we think of as ‘order’ floats like an unstable mirage in a sea of chaos. As a very rough analogy, think of those patterns of coloured dots used to detect colour blindness. You can see the number in the one I’ve included only if your retinal cells function in a certain way; otherwise all you’d see would be random dots.

And, in addition to all this, think of the many arrangements which (to us) might appear to have ‘order’ – all the possible drinks in the cup, all the possible cup designs – etc, etc. But compared to all the ‘disordered’ arrangements of smashed china, splattered liquid and so forth, the number of potential configurations which would appeal to us as being ‘ordered’ is truly infinitesimal. So it follows that the likelihood of moving from a state we regard as ‘disordered’ to one of ‘order’ is unimaginably slim; but not, in principle, impossible.
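
To put some numbers on that – entirely made-up numbers, I should stress, since nobody could count the real configurations of a teacup – here is the kind of ratio involved; the conclusion only grows more extreme as the particle counts become realistic:

```python
import math

# A toy universe: 100 particles, each free to occupy any of
# 10 distinguishable positions.
total_configurations = 10 ** 100

# Be absurdly generous and suppose a trillion of those arrangements
# would strike us as an intact cup of tea.
ordered_configurations = 10 ** 12

exponent = math.log10(total_configurations / ordered_configurations)
print(f"Chance a random configuration looks 'ordered': 1 in 10^{exponent:.0f}")
# -> 1 in 10^88: never in practice, but not zero in principle
```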

So let’s imagine that one of these one-in-a-squillion chances actually comes off. There’s the smashed cup and mess of tea on the floor. It’s embedded in a maze of vibrating molecules making up the floor, the surrounding air, and so on. And in this case it so happens that the molecular impacts between the elements of the cup and tea and their surroundings combine so as to nudge them all back into their ‘ordered’ configuration, and launch them off the floor and back into the hands of the somewhat mystified drinker.

Yes, the energy is there to make that happen – it just has to come together in exactly the correct, fantastically unlikely way. I don’t know how to calculate the improbability of this, but I should imagine that to see it happen we would need to do continual trials for a time period which is some vast multiple of the age of the universe. (Think monkeys typing the works of Shakespeare, and then multiply by some large number.) In other words, of course, it just doesn’t happen in practice.
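
Just to get a feel for the scale, here’s the monkey-and-typewriter yardstick worked through for one short phrase; the million-attempts-per-second rate is an arbitrary assumption of mine, and a reassembling teacup would be unimaginably rarer still:

```python
# Odds of randomly typing 'to be or not to be' -- 18 keystrokes
# from a 27-key alphabet (26 letters plus the space bar).
p = (1 / 27) ** 18
print(f"One success in about {1 / p:.1e} attempts")      # ~5.8e25

# At a million random attempts per second, the expected wait:
years = (1 / p) / 1_000_000 / (3600 * 24 * 365)
print(f"Expected wait: about {years:.1e} years")         # ~1.8e12
# -- over a hundred times the ~1.4e10-year age of the universe,
# for one line of Shakespeare, let alone the complete works.
```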

But, looked at another way, such unlikely things do happen. Think of when we originally dropped the cup, and ended up with some sort of mess on the floor – one out of the myriad of other possible messes that could have been created had the cup been dropped at a slightly different angle, had the floor been dirtier, had the weather been different – and so on. How likely is that exact, particular configuration of mess that we ended up with? Fantastically unlikely, of course – but it happened. We’d never in practice be able to produce it a second time.

So of all these innumerable configurations of matter – whether or not they are among the tiny minority that seem to us ‘ordered’ – one comes about with each event of the sort we’ve been considering. The universe at large is indifferent to our notion of ‘order’, and at each juncture throws up some random selection of the unthinkably large number of possibilities. It’s just that these ordered states are so few in number compared to the disordered ones that they never in practice come about spontaneously, but only when we deliberately foster them into being, by doing such things as manufacturing teacups, or making tea.

Let’s return, then, to the branch that our walker brushes past on the woodland footpath, and give that a similar treatment. It’s a bit simpler, if anything: we just require the astounding coincidence that, as the backwards walker approaches the branch, the random Brownian motion of an unimaginably large number of air molecules happens to combine to give the branch a series of rhythmic, increasing nudges. It appears to oscillate with increasing amplitude until one final nudge lays it against the walker’s body just as she passes. Not convinced? Well, this is just one of the truly countless possible histories of the movement of a vast number of air molecules – one which has a consequence we can see.

Remember that the original Robert Brown, of Brownian motion fame, did see random movements of pollen grains in water; and since it didn’t occur to him that the water molecules were responsible for this, he thought it was a property of the pollen grains. Should we happen to witness such an astronomically unlikely movement of the tree, we would suspect some mysterious bewitchment of the tree itself, rather than one specific and improbable combination of air molecule movements.

You’ll remember that I was earlier reflecting that we know how to travel forwards in time, but that backward time travel is more problematic. So doesn’t this indicate another asymmetry – more evidence of an arrow of time? Well, I think the right way of thinking about this emerges when we are reminded that this very possibility of time travel was a consequence of a theory called ‘relativity’. So think relative. We know how to move forward in time relative to other objects in our vicinity. Equally, we know how they could move forward in time relative to us. Which of course means that we’d be moving backward relative to them. No asymmetry there.

Maybe the one asymmetry in time which can’t be analysed away is our own subjective experience of moving constantly from a ‘past’ into a ‘future’ – as defined by our subjective ‘now’. But, as I was pointing out three years ago, this seems to be more a property of ourselves as experiencing creatures, rather than of the objective universe ‘out there’.

I’ll leave you with one more apparent asymmetry. If processes are reversible in time, why do we only have records of the past, and not records of the future? Well, I’ve gone on long enough, so in the best tradition of lazy writers, I will leave that as an exercise for the reader.

What a Coincidence!

My title is an expression you hear quite often, the exclamation mark denoting how surprising it seems when, for example, you walk into a shop and find yourself behind your friend in the queue (especially if you were just thinking about her), or if perhaps the person at the next desk in your office turns out to have the same birthday as you.

But by considering the laws of probability you can come to the conclusion that such things are less unlikely than they seem. Here’s a way of looking at it: suppose you use some method of generating random numbers, say between 0 and 100, and then plot them as marks on a scale. You’ll probably find blank areas in some parts of the scale, and tightly clustered clumps of marks in others. It’s sometimes naively assumed that, if the numbers are truly random, they should be evenly spread across the scale. But a simple argument shows this to be mistaken: there are in fact relatively few ways to arrange the marks evenly, but a myriad ways of distributing them irregularly. Therefore, by elementary probability, it is overwhelmingly likely that any random arrangement will be of the irregular and clumped sort.

To satisfy myself, I’ve just done this exercise – and to make it more visual I have generated the numbers as 100 pairs of coordinates, so that they are spread over a square. Already it looks gratifyingly clumpy, as probability theory predicts. So, to stretch and reapply the same idea, you could say it’s quite natural that contingent events in our lives aren’t all spaced out and disjointed from one another in a way that we might naively expect, but end up being apparently juxtaposed and connected in ways that seem surprising to us.
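
Should you want to try the exercise yourself, a few lines of Python will reproduce it; the 10 × 10 grid is just my rough-and-ready way of measuring clumpiness:

```python
import random

# 100 random points in a unit square, as in the exercise above.
points = [(random.random(), random.random()) for _ in range(100)]

# Chop the square into a 10x10 grid of 100 cells and count how many
# end up empty. A perfectly even spread would fill every cell.
occupied = {(int(x * 10), int(y * 10)) for x, y in points}
print(f"{100 - len(occupied)} of the 100 cells are empty")

# Theory agrees with the eye: each cell is empty with probability
# (99/100)**100, about 0.366 -- so around 37 empty cells, and
# correspondingly crowded clumps elsewhere, are exactly what
# randomness predicts.
```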

Isaac Asimov, the science fiction writer, put it more crisply:

People are entirely too disbelieving of coincidence. They are far too ready to dismiss it and to build arcane structures of extremely rickety substance in order to avoid it. I, on the other hand, see coincidence everywhere as an inevitable consequence of the laws of probability, according to which having no unusual coincidence is far more unusual than any coincidence could possibly be. (From The Planet that Wasn’t, originally published in The Magazine of Fantasy and Science Fiction, May 1975)

All there is to it?

So there we have the standard case for reducing what may seem like outlandish and mysterious coincidences to the mere operation of random chance. I have to admit, however, that I’m not entirely convinced by it. I have repeatedly experienced coincidences in my own life, from the trivial to the really pretty surprising – in a moment I’ll describe some of them. What I have noticed is that they often don’t have the character of being just random pairs or clusters of simple happenings, as you might expect, but seem to be linked to one another in strange and apparently meaningful ways, or to associate themselves with significant life events. Is this a mere subjective illusion, or could there be some hidden, organising principle governing happenings in our lives?

Brian Inglis, from the cover of Coincidence

I don’t have an answer to that, but I’m certainly not the first to speculate about the question. This post was prompted by a book I recently read, Coincidence by Brian Inglis*. Inglis was a distinguished and well-liked journalist in the last century, having been a formative editor of The Spectator magazine and a prolific writer of articles and books. He was also a television presenter: those of a certain age may remember a long-running historical series on ITV, All Our Yesterdays, which Inglis presented. In addition, to the distaste of some, he wrote quite widely on paranormal phenomena.

The joker

In Coincidence he draws on earlier speculators about the topic, including the Austrian zoologist Paul Kammerer, who, after being suspected of scientific fraud in his research into amphibians, committed suicide in 1926. Kammerer was an enthusiastic collector of coincidence stories, and tried to provide a theoretical underpinning for them with his idea of ‘seriality’, which had some influence on Jung’s notion of synchronicity, in which meaning is placed alongside causality in its power to determine events. Kammerer also attracted the attention of Arthur Koestler, who figures in one of my previous posts. Koestler gave an account of the fraud case which was sympathetic to Kammerer, in The Case of the Midwife Toad. Koestler was also fascinated by coincidences and wrote about them in his book The Roots of Coincidence. Inglis, in his own book, collects many accounts of surprising coincidences from ordinary lives. Many of his subjects have the feeling that there is some sort of capricious organising spirit behind these confluences of events, whom Inglis playfully personifies as ‘the joker’.

This putative joker certainly seems to have had a hand in my own life a number of times. Thinking of the subtitle of Inglis’ book (‘A Matter of Chance – or Synchronicity?’), the latter seems to be a factor with me. I have been so struck by the apparent significance of some of my own coincidences that I have recorded quite a number of them. First, here’s a simple example which shows that ‘interlinking’ tendency which occurs so often. (Names are changed in the accounts that follow.)

My own stories

From about 35 years ago: I spend an evening with my friend Suzy. We talk for a while about our mutual acquaintance Robert, whom we have both lost touch with; neither of us have seen him for a couple of years. Two days later, I park my car in a crowded North London street and Robert walks past just as I get out of the car, and I have a conversation with him. And then, I subsequently discover, the next day Suzy meets him quite by chance on a railway station platform. I don’t know whether the odds against this could be calculated, but they would be pretty huge. Each of the meetings, so soon after the conversation, would be unlikely – especially as both happened in crowded inner London. And the pair of coincidences show this strange interlinking that I mentioned. But I have more examples which are linked to one another in an even more elaborate way, as well as being attached to significant life events.

In 1982 I decided that, after nearly 14 years, it was time to leave the first company I had worked for long-term; let’s call it ‘company A’. During my time with them, a while before this, I’d shared a flat with a couple of colleagues for 5 years. At one stage we had a vacancy in the flat and advertised at work for a third tenant. A new employee of the company – we’ll call him Tony McAllister – quickly showed an interest. We felt a slight doubt about the rather pushy way he did this, pulling down our notice so that no one else would see it. But he seemed pleasant enough, and joined the flat. We should have listened to our doubts – he turned out to be definitely the most uncongenial person I have ever lived with. He consistently avoided helping with any of the housework and other tasks around the flat, and delighted in dismantling the engine of his car in the living room. There were other undesirable personal habits – I won’t trouble you with the details. Fortunately it wasn’t long before we all left the flat, for other reasons.

Back to 1982, and my search for a new job. A particularly interesting sounding opportunity came up, in a different area of work, with another large company – company B. I applied and got an interview with a man who would be my new boss if I got the job: we’ll call him Mark Cooper. He looked at my CV. “You worked at company A – did you know Tony McAllister? He’s one of my best friends.” Putting on my best glassy grin, I said that I did know him. And I did go on to get the job. Talking subsequently, we both eventually recalled that Mark had actually visited our flat once, very briefly, with Tony, and we’d met fleetingly. That would have been five years or so earlier.

About nine months into my work with company B I saw a job advertised in the paper while I was on the commuter train. I hadn’t been looking for a job, and the ad just happened to catch my eye as I turned the page. It was with a small company (company C), with requirements very relevant to what I was currently doing, and sounding really attractive – so I applied. While I was awaiting the outcome of this, I heard that my present employer, company B, was to stop investing in my current area of work, and I was moved to a different position. I didn’t like the new job at all, and so of course was pinning my hopes on the application I’d already made. However, oddly, the job I’d been given involved being relocated into a different building, and I was given an office with a window directly overlooking the building in which company C was based.

This seemed a good omen – and I was subsequently given an interview, and then a second one, with directors of company C. At the second, my interviewer, ‘Tim Newcombe’, seemed vaguely familiar, but I couldn’t place him and thought no more of it. He evidently didn’t know me. Once again, I got the job: apparently it had been a close decision between me and one other applicant, from a field of about 50. And it wasn’t long before I found out why Tim seemed familiar: he was in fact married to someone I knew well in connection with some voluntary work I was involved with. On one occasion, I eventually realised, I had visited her house with some others and had very briefly met Tim. I went on to work for company C for nearly 12 years, until it disbanded. Subsequently both Tim and I worked on our own account, and we collaborated on a number of projects.

So far, therefore, two successive jobs where, for each, I was interviewed by someone whom I eventually realised I had already met briefly, and who had a strong connection to someone I knew. (In neither case was the connection related to the area of work, so that isn’t an explanation.)

The saga continues

A year or two after leaving company B, I heard that Mark Cooper had moved to a new job in company D, and in fact visited him there once in the line of work. Meanwhile, ten years after I had started the job in company C – and while I was still doing it – my wife and I, wanting to move to a new area, found and bought a house there (where we still live now, more than 20 years later). I then found out that the previous occupants were leaving because the father of the family had a new job – with, it turned out, company D. And on asking him more about it, it transpired that he was going to work with Mark Cooper, making an extraordinarily neat loop back to the original coincidence in the chain.

I’ve often mused on this striking series of connections, and wondered if I was fated always to encounter some bizarre coincidence every time I started new employment. However, after company C, I worked freelance for some years, and then got a job in a further company (my last before retirement). This time, there was no coincidence that I was aware of. But now, just in the last few weeks, that last job has become implicated in a further unlikely connection. This time it’s my son who has been looking for work. He told me about a promising opportunity he was going to apply for. I had a look at the company website and was surprised to see among the pictures of employees a man who had worked in the same office as me for the last four years or so – from the LinkedIn website I discovered he’d moved on a month after I retired. My son was offered an initial telephone interview – which (almost inevitably) turned out to be with this same man.

In gullible mode, I wondered to myself whether this was another significant coincidence. Well, whether I’m gullible or not, my son did go on to get the job. I hadn’t worked directly with the interviewer in question, and only knew him slightly; I don’t think he was aware of my surname, so I doubt that he realised the connection. My son certainly didn’t mention it, because he didn’t want to appear to be currying favour in any dubious way. And in fact this company that my son now works in turns out to have a historical connection with my last company – which perhaps explains the presence of his interviewer in it. But neither I nor my son were aware of any of this when he first became interested in the job.

Just one more

I’m going to try your patience with just one more of my own examples, and this involves the same son, but quite a few years back – in fact when he was due to be born. At the time our daughter was 2 years old, and if I was to attend the coming birth she would need to be babysat by someone. One friend, who we’ll call Molly, said she could do this if it was at the weekend – so we had to find someone else for a weekday birth. Another friend, Angela, volunteered. My wife finally started getting labour pains, a little overdue, one Friday evening. So it looked as if the baby would arrive over the weekend and Molly was alerted. However, untypically for a second baby, this turned out to be a protracted process. By Sunday the birth started to look imminent, and Molly took charge of my daughter. But by the evening the baby still hadn’t appeared – we had gone into hospital once but were sent home again to wait. So we needed to change plans, and my daughter was taken to Angela, where she would stay overnight.

My son was finally born in the early hours of Monday morning, which was May 8th. And then the coincidence: it turned out that both Molly and Angela had birthdays on May 8th. What’s nice about this one is that it is possible to calculate the odds. There is that often-quoted statistic that if there are 23 or more people in a room there is a greater than evens chance that at least two of them will share the same birthday. 23 seems a low number – but I’ve been through the maths myself, and it is so. However in this case it’s a much simpler calculation: the odds would be 1 in 365 x 365 (ignoring leap years for simplicity), which is 133,225 to 1 against. That’s unlikely enough – but once again, I don’t feel that the calculations tell the full story. The odds I’ve worked out apply where any three people are taken at random and found all to share the same birthday. In this case we have the coincidence clustered around a significant event, the actual day of birth of one of them – and that seems to me to add an extra dimension that can’t so easily be quantified.
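
For anyone who would like to go through the maths as I did, here it is in a few lines of Python (the function is my own naming, and leap years are ignored as before):

```python
import math

def prob_shared_birthday(n):
    """Chance that at least two of n people share a birthday,
    assuming 365 equally likely days."""
    all_distinct = math.prod((365 - k) / 365 for k in range(n))
    return 1 - all_distinct

print(f"{prob_shared_birthday(23):.3f}")   # ~0.507: just better than evens

# My case is simpler: two specific people each matching one fixed
# date (May 8th) is (1/365) squared.
print(f"1 in {365 * 365}")                 # 1 in 133,225
```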

Malicious streak

Well, there you have it – random chance, or some obscure organising principle beyond our current understanding? Needless to say, that’s speculation which splits opinion along the lines I described in my post about the ‘iPhobia’ concept. As an admitted ‘iclaustrophobe’, I prefer to keep an open mind on it. But to return to Brian Inglis’s ‘joker’: Inglis notes that this imagined character seems to display a malicious streak from time to time: he quotes an example where estranged lovers are brought together by coincidence in awkward, and ultimately disastrous, circumstances. And add to that the observation of some of those looking into the coincidence phenomenon that their interest seems to attract further coincidences: when Arthur Koestler was writing about Kammerer, he described his life as suddenly beset by a “meteor shower” of coincidences – as if, he felt, Kammerer were emphasising his beliefs from beyond the grave.

With both of those points in mind, I’d like to offer one further story. It was told to me by Jane O’Grady (real name this time), and I’m grateful to her for allowing me to include it here – and also for going to some trouble to confirm the details. Jane is a writer, philosopher and teacher. One day in late 1991, she and her then husband, philosopher Ted Honderich, gave a lunch to which they invited Brian Inglis. His book on coincidences – the one I’ve just read – had been published fairly recently, and a good part of their conversation was a discussion of that topic. A little over a year later, in early 1993, Jane was teaching a philosophy A-level class. After a half-time break, one of the students failed to reappear. His continuing absence meant that Jane had to give up waiting and carry on without him. He had shown himself to be somewhat unruly, and so this behaviour seemed to her at first to be irritatingly in character.

And so when he did finally appear, with the class nearly over, Jane wondered whether to believe his proffered excuse: he said he had witnessed a man collapsing in the street and had gone to help. But it turned out to be perfectly true. Unfortunately, despite his intervention, nothing could be done and the man had died. The coincidence, as you may have guessed, lay in the identity of the dead man. He was Brian Inglis.


*Brian Inglis, Coincidence: A Matter of Chance – or Synchronicity? Hutchinson, 1990

Are We Deluded?

Commuting days until retirement: 19

My last post searched, somewhat uncertainly, for a reason to believe that we are in a meaningful sense free to make decisions – to act spontaneously in some way that is not wholly and inevitably determined by the state of the world before we act: the question of free will, in other words. In a comment, bloggingisaresponsibility referred me to the work of Sam Harris, a philosopher and neuroscientist who argues cogently for the opposite position.

Sam Harris (Wikimedia/Steve Jurvetson)

Harris points to examples of cases where someone can be mistaken about how they came to a certain decision: it’s well known that under hypnosis a subject can be told to take an action in response to a prompt, after having been woken from the hypnotic trance. ‘When I clap my hands you will open the window.’ When the subject duly carries out the command, and is asked about why she took the action, she may say that the room was feeling stuffy or some such, and give every sign of genuinely believing that this was the motive.

And I can think of some slightly unnerving examples from my own personal life where it has become clear over a period of time that all the behaviour of someone I know is aimed towards a certain outcome, while the intentions that they will own up to – quite honestly, it appears – are quite different.

So I’d accept it as undeniable that we can believe ourselves to be making a free choice, when the real forces driving our actions are unknown to us. But it’s one thing to claim that we can be mistaken about what is driving us towards this or that action, and quite another to maintain that we are systematically deluded about what it is to make choices in general. So what do I mean by choices?

I argued in the last post that genuine choices are not to be identified with the sort of random, meaningless bodily movements that a scientist might be able to study and analyse in a laboratory. When we truly exercise what we might call our will, we are typically weighing up a number of alternatives and deciding what might seem to us the ‘best’ one. Typically we may be trying to arbitrate between conflicting desires: do I stick to my diet and feel healthy, or give in and be seduced by the jumbo gourmet burger and chips? Or you can read in any newspaper about men or women who have sacrificed a lifetime of domestic happiness for the promise of the short-lived affair that satisfies their cravings. (You don’t of course read about those who made the other choice.)

I hope that gives a flavour of what it really is to exercise choice: it’s all about subjective feelings – about uncertainly picking our way through an incredibly varied mental landscape of desires, emotions, pain, pleasure, knowledge and learnt experience – and of course making conscious decisions about where to place our steps. It seems to me that the arguments of determinists such as Harris would be irrefutable if only we were insentient robots, which we are not.

How deluded are we?

But Harris has an answer to that argument. We are not just deluded about the spontaneity of our actions:

It is not that free will is simply an illusion – our experience is not merely delivering a distorted view of reality. Rather, we are mistaken about our experience. Not only are we not as free as we think we are – we do not feel as free as we think we do. Our sense of our own freedom results from our not paying close attention to what it is like to be us. The moment we pay attention, it is possible to see that free will is nowhere to be found, and our experience is perfectly compatible with this truth. Thoughts and intentions simply arise in the mind. What else could they do? The truth about us is stranger than many suppose: The illusion of free will is itself an illusion. The problem is not merely that free will makes no sense objectively (i.e., when our thoughts and actions are viewed from a third-person point of view); it makes no sense subjectively either. (From Free Will – Harris’s italics)

‘Thoughts and intentions simply arise in the mind.’ Do they? Well, we have to admit that they do, all the time. We don’t generally decide what we are going to dream about – as one example – and Harris gives many other instances of actions taken in response to thoughts that ‘just arise’. But does this cover every willed, considered decision? I don’t think it does, although Harris argues otherwise.

But the key sentence here, for me, is: ‘The illusion of free will is itself an illusion.’ – and the italics indicate that it is for Harris too. We may think we have the impression that we are exercising our wills, but we don’t. The impression is an illusion too.* Does that make sense to you? It doesn’t to me. But it’s very much in the spirit of a growing movement which espouses a particular way of dealing with our subjective nature. I think Daniel Dennett must be one of the pioneers: in a post two years ago I contested his arguments that qualia, the elements that comprise our conscious experience, do not exist.

Here’s another writer, Susan Blackmore, in a compilation from the Edge website where the contributors nominate ideas which they think should become extinct. Blackmore is a psychologist and former psychic researcher turned sceptic, and her choice for the dustbin is ‘The Neural Correlates of Consciousness’. She argues that, while much cutting edge research effort is going into the search for the biological processes that are the neural counterpart of consciousness, this is a wild goose chase – they’ll never be found. Well, so far I agree, but I suspect for very different reasons.

Consciousness is not some weird and wonderful product of some brain processes but not others. Rather, it’s an illusion constructed by a clever brain and body in a complex social world. We can speak, think, refer to ourselves as agents, and so build up the false idea of a persisting self that has consciousness and free will.

There can’t be any neural correlates of consciousness, says Blackmore, because there’s nothing for the neural processes to be correlated with. So here we have it again, this strange conclusion that flies in the face of common sense. Well of course if a philosophical or scientific idea is incompatible with common sense that doesn’t necessarily disqualify it from being worth serious consideration. But in this case I believe it goes much, much further than that.

Let’s just stop and examine what is being claimed. We believe we have a private world of subjective conscious impressions; but that belief is based on an illusion – we don’t have such a world. But an illusion is itself a subjective experience. How can the broad class of things of which illusions are one subclass be itself an illusion? The notion is simply nonsense. You could only rescue it from incoherence by saying that illusions could be described as data in a processing machine (like a brain) which embody false accounts of what they are supposed to represent.

Imagine one of those systems which reads car number plates and measures average speeds over a stretch of road. Suppose we somehow got into the works and caused the numbers to be altered before they were logged, so that no speeding tickets were issued. Could we then say that the system was suffering an illusion? It would be a very odd way of speaking – because illusions are experiences, not scrambled data. Having an illusion implies consciousness (which involves something I have written about before – intentionality). Just as Descartes famously concluded that he couldn’t doubt the existence of his doubting, we can’t be deluded about the experience of being deluded.

History repeats

The universe according to Aristotle (mysearch.org.uk)

Here’s an example of how we can have illusions about the nature of the world: it was once an unquestioned belief that our planet was stationary and the sun orbited around it. Through objective measurement and logical analysis we now know that is wrong. But people thought this because it felt like it – our beliefs start with subjective experience (which we don’t have, according to the view I’m criticising). But of course a whole established world-view was based around this illusion. We are told that when one of the proponents of the new conception – Galileo – discovered corroborating evidence through his telescope, in the form of satellites orbiting Jupiter, supporters of the status quo refused to look into the telescope. (The facts of that account may be a little different.) But it nevertheless illustrates the extremity of the measures which the believers in an established order may take in order to protect it.

So now we have a 21st century version of that phenomenon. Our objective knowledge of the brain as an electrochemical machine can’t, even in principle, explain the existence of subjective experiences. If we are not to admit that our account of the world is seriously incomplete, a quick fix is simply to deny that this messy subjectivity is anything real, and conveniently ignore whether we are making any sense in doing so.

A Princeton psychologist, Michael Graziano, who researches consciousness, was quoted in a recent issue of New Scientist magazine, referring to what philosopher David Chalmers called ‘the hard problem’ – how and why the brain should give rise to conscious awareness at all:

“There is no hard problem,” says Graziano. “There is only the question of how the brain, an information-processing device, concludes and insists it has consciousness. And that is a problem of information processing. To understand that process fully will require [scientific experiments]”**.

So this wholly incoherent notion – of conscious experience as an illusion – is taken as the premise for a scientific investigation. And look at the language: it’s not you or I who are insisting we are conscious, but ‘the brain’. In this very defensive objectivisation of the terms used lies the modern equivalent of the 17th century churchmen who supposedly turned away from the telescope. If we only take care to avoid any mention of the subjective, we can reassure ourselves that none of this inconvenient consciousness stuff really exists – only in the ravings of a heretic would such an idea be entertained. And the scientific hegemony is spared the embarrassment of a province it doesn’t look like being able to conquer.

But free will? Even if I have convinced you that our subjective nature is real, that question may still be open. But as I mentioned before, I think the determinism arguments would only have irresistible force if we were insentient creatures, and I have tried to underline the fact that we are not. Our subjective world is the most immediate and undeniable reality of our experience – indeed it is our experience. It’s there, in that world, that we seem to be free, and there that libertarians like myself believe we are free. Not surprisingly, it’s that world whose reality Harris is determined to deny. My contention is that, in doing so, he joins others in the fraternity of uncompromising physicalists and, like them, fatally undermines his own position.


*I haven’t explicitly distinguished between what I mean by illusion and delusion. Just to be clear: an illusion is experiencing something that appears other than it is. A delusion would be when we believe it to be as it appears. So while, for example, Harris would admit to experiencing what he believes to be the illusion of freewill, he would not admit to being deluded by it. But he would of course claim that I and many others are deluded.

**A stable mind is a conscious mind, in New Scientist 11 April 2015, p10. I did find an article for the New York Times by Graziano in which he addresses more directly some of the objections I have raised. But for the sake of brevity I’ll just mention that in that article I believe he simply falls into the same conceptual errors that I have already described.

Freedom and Purpose

Commuting days until retirement: 34

Retirement, I find, involves a lot of decisions. This blog shows that the important one was taken over two years ago – but there have been doubts about that along the way. And then, as the time approaches, a whole cluster of secondary decisions loom. Do I take my pension income by this method or that method? Can I phase my retirement and continue part-time for a while? (That one was taken care of for me – the answer was no. I felt relieved; I didn’t really want to.)  So I am free to make the first of these decisions, but not the second. And that brings me to what this post is about: what it means when we say we are ‘free’ to make a decision.

I’m not referring to the trivial sense, in which we are not free if some external factor constrains us, as with my part-time decision. It’s that more thorny philosophical problem I’m chasing, namely the dilemma as to whether we can take full responsibility as the originators of our actions; or whether we should assume that they are an inevitable consequence of the way things are in the world – the world of which our bodies and brains are a part.

It’s a dilemma which seems unresolved in modern Western society: our intuitive everyday assumption is that the first is true; indeed our whole system of morals – and of law and justice – is founded on it: we are individually held responsible for our actions unless constrained by external circumstances, or perhaps some mental dysfunction that we cannot help. Yet in our increasingly secular society, majority educated opinion drifts towards the materialist view – that the traditional assumption of freedom of the will is an illusion.

Any number of books have been written on how these approaches might be reconciled; I’m not going to get far in one blog post. But it does seem to me that this concept of freedom of action is far more elusive than is often accepted, and that facile approaches to it often end up by missing the point altogether. I would just like to try and give some idea of why I think that.

Early in Ian McEwan’s novel Atonement, the child writer Briony finds herself alone in a quiet house, in a reflective frame of mind:

Briony Tallis depicted on the cover of Atonement

She raised one hand and flexed its fingers and wondered, as she had sometimes before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge. She brought her forefinger closer to her face and stared at it, urging it to move. It remained still because she was pretending, she was not entirely serious, and because willing it to move, or being about to move it, was not the same as actually moving it. And when she did crook it finally, the action seemed to start in the finger itself, not in some part of her mind. When did it know to move, when did she know to move it?  There was no catching herself out. It was either-or. There was no stitching, no seam, and yet she knew that behind the smooth continuous fabric was the real self – was it her soul? – which took the decision to cease pretending, and gave the final command.

I don’t know whether, at the time of writing, McEwan knew of the famous (or infamous) experiments of Benjamin Libet, some 18 years before the book was published. McEwan is a keen follower of scientific and philosophical ideas, so it’s quite likely that he did. Libet, who had been a neurological researcher since the early 1960s, designed a seminal series of experiments in the early eighties in which he examined the psychophysiological processes underlying the experience McEwan evokes.

Subjects were hooked up to detectors of brain impulses, and then asked to press a key or take some other detectable action at a moment of their own choosing, during some given period of time. They were also asked to record the instant at which they consciously made the decision to take action, by registering the position of a moving spot on an oscilloscope.

The most talked about finding of these experiments was not only that there was an identifiable electrical brain impulse associated with each decision, but that it generally occurred before the reported moment of the subject’s conscious decision. And so, on the face of it, the conclusion to be drawn is that, when we imagine ourselves to be freely taking a decision, it is really being driven by some physical process of which we are unaware; ergo free will is an illusion.

Benjamin Libet

But of course it’s not quite that simple. In the course of his experiments Libet himself found that sometimes there was an impulse looking like the initiation of an action which was not actually followed by one. It turned out that in these cases the subject had considered moving at that moment but decided against it; so it’s as if, even when there is some physical drive to action, we may still have the freedom to veto it. Compare McEwan’s Briony: ‘It remained still because she was pretending, she was not entirely serious, and because willing it to move, or being about to move it, was not the same as actually moving it.’ And this description is one that I should think we can all recognise from our own experience.

There have been other criticisms: if a subject may be deluded about when their actions are initiated, how reliable can their assessment be of exactly when they made a decision? (This from arch-physicalist Daniel Dennett). We feel we are floating helplessly in some stirred-up conceptual soup where objective and subjective measurements are difficult to disentangle from one another.

But you may be wondering all this time what these random finger crookings and key pressings have to do with my original question of whether we are free to make the important decisions which can shape our lives. Well, I hope you are, because that’s really the point of my post. There’s a big difference between these rather meaningless physical actions and the sorts of voluntary decisions that really interest us. Most, if not all, significant actions we take in our lives are chosen with a purpose. Philosophical reveries like Briony’s apart, we don’t sit around considering whether to move our finger at this moment or that moment; such minor bodily movements are normally triggered quite unconsciously, and generally in the pursuit of some higher end.

Rather, before opting for one of the paths open to us, there is some mental process of weighing up and considering what the result of each alternative might be, and which outcome we think it best to bring about. This may be an almost instantaneous judgement (which way to turn the steering wheel) or a more extended consideration of, for example, whether I should arrange my finances to my own maximum advantage, or to that of my family after my death. In either case I am constrained by a complicated network of beliefs, prejudices and instincts, some of which I am probably only slightly consciously aware of, if at all.

Teasing out the meaning of what it is for a decision to be ‘free’ in this context is evidently very difficult, and certainly not something I’m going to try and achieve here, even if I could. But what is clear is that an isolated action like crooking your finger or pressing a button at some random moment, and for no specific purpose, has very little in common with the decisions by which we order our lives. It’s extremely difficult to imagine any objective experiment which could reliably investigate the causes of those more significant choices.

David Hume

Immanuel Kant

So maybe we are driven towards the philosopher Hume’s view that ‘reason is, and ought only to be the slave of the passions’. But I find the Kantian view attractive – that we can objectively deduce a morally correct course of action from our own existence as rational, sentient beings. Perhaps our freedom somehow consists in our ability to navigate a course between these two – to recognise when our ‘passions’ are driving us in the ‘right’ direction, and when they are not. Or that when we have conflicting instincts, as we often do, there is the potential freedom to rationally adjudicate between them.

Some have attempted to carve out a space for freewill in a supposedly deterministic universe by pointing out the randomness of quantum events and suchlike as the putative first causes of action. But this is an obvious fallacy. If our actions were bound by such meaningless occurrences, there is no sense in which we could be considered free at all. However this perspective does, it seems to me, throw some light on the Libet experiments. If we are asked to take random, meaning-free decisions, is it surprising that we then appear to be subjugating ourselves to whatever random, purposeless events might be taking place in our nervous system?

Ian McEwan must have had in mind the dichotomy between meaningless, consequence-free actions and significant ones, and how we can ascribe responsibility. The plot of Atonement, as its title hints, eventually hinges on the character Briony’s own sense of responsibility for those of her actions that are significant in a broader perspective. But as we are introduced to her, McEwan has her puzzling over the source of those much more limited impulses that do not spring from any sort of rationale.

Recently I wrote about Martin Gardner, a strict believer in scientific rigour but also in metaphysical truths not capable of scientific demonstration, and his approach appeals to me. Freewill, he asserts, is inseparable from consciousness:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will. Persons completely paralyzed can decide what to think about or when to blink their eyes. Nor can I imagine myself having free will without being conscious. (From The Whys of a Philosophical Scrivener, Postscript)

At the beginning of his chapter on free will he refers to Wittgenstein’s doctrine that only those questions which can meaningfully be asked can have answers, and what remains cannot be spoken about, continuing:

The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

The chapter examines a wide range of views before restating Gardner’s own position. ‘Indeed,’ he says, ‘it was with a feeling of enormous relief that I concluded, long ago, that free will is an unfathomable mystery.’

It will be with another feeling of enormous relief that I will soon have a taste of freedom of a kind I haven’t before experienced; but will I be truly free? Well, I will at least have more time to think (freely or otherwise) about it.

Quantum Immortality

Commuting days until retirement: 91

A theme underlying some of my recent posts has been what (if anything) happens to us when we die. I’d like to draw this together with some other thoughts from eight months ago, when I was gazing at the roof of Exeter Cathedral and musing on the possibility of multiple universes. This seemingly wild idea, beloved of fantasy and science fiction authors, is now increasingly taken seriously in the physics departments of universities as a model of reality. The idea of quantum immortality (explained below) is a link between these topics, and it was a book by the American physicist Max Tegmark, Our Mathematical Universe*, that got me thinking about it.

Max Tegmark

I won’t spend time looking at the theory of multiple universes – or the Multiverse – at any length. I did explain briefly in my earlier post how the notion originally arose from quantum physics, and if you have an appetite for more detail there’s plenty in Wikipedia. There are a number of theoretical considerations which lead to the notion of a multiple universe: Tegmark sets out four that he supports, with illustrations, in a Scientific American article. I’m just going to focus here on two of them, which as Tegmark and others have speculated, could ultimately be different ways of looking at the same one. I’ll try to explain them very briefly.

The first approach: quantum divergence

It has been known since early in the last century that, where quantum physics allows a range of possible outcomes of some subatomic event, only one of these is actually observed. Experiments (for example the double slit experiment) suggest that the outcome is undetermined until an observation is made, whereupon one of the range of possibilities becomes the actual one that we find. In the phrase which represents the traditional ‘Copenhagen interpretation’ of this puzzle, the wave function collapses. Before this ‘collapse’, all the possibilities are simultaneously real – in the jargon, they exist ‘in superposition’.

But it was Hugh Everett in 1957 who first put forward another possibility, one which looks wildly outlandish at first sight even now, and did so even more at the time: namely that the wave function never does collapse, but each possible outcome is realised in a different universe. It’s as if reality branches, and to observe a specific outcome is actually to find yourself in one of those branched universes.

The second approach: your distant twin

According to the most widely accepted theory of the creation of the universe, a phenomenon known as ‘inflation’ has the mathematical consequence that the cosmic space we now live in is infinite – it goes on for ever. And infinite space allows infinite possibilities. Statistics and probability undergo a radical transformation and start delivering certainties – a certainty, for example, that there is someone just like you, an unimaginable distance away, reading a blog written by someone just like me. And of course the someone who is reading may be getting bored with it and moving on to something else (just like you? – I hope not). But I can reassure myself that for all the doppelgangers out there who are getting bored there are just as many who are really fired up and preparing to click away at the ‘like’ button and write voluminous comments. (You see what fragile egos we bloggers have – in most universes, anyway.)
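The arithmetic behind that radical transformation is worth a moment. If any single region of space has some fixed, non-zero probability p of containing your exact double, then the chance that no double exists among N independent regions is (1 − p)^N – which shrinks towards zero as N grows, however tiny p is. Here’s a minimal Python sketch of that reasoning (the value of p is invented purely for illustration, not a figure from Tegmark):

```python
# How a fixed, minuscule probability becomes a near-certainty when the
# number of opportunities is large enough. The value of p is made up.
p = 1e-12  # hypothetical chance of an exact double in any one region

for n in (10**12, 10**13, 10**14):
    at_least_one = 1 - (1 - p) ** n   # P(at least one double in n regions)
    print(f"{n:.0e} regions: P(at least one double) = {at_least_one:.5f}")
```

In a truly infinite space the number of regions is unbounded, and the probability of a double somewhere is driven all the way to 1.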

Pulling them together

But the point is, of course, that once again we have this bewildering multiplicity of possibilities, all of which claim a reality of their own; it all sounds strangely similar to the scenario posited by the first, quantum divergence approach. This similarity has been considered by Tegmark and other physicists, and Tegmark speculates that these two could be simply the same truth about the universe, but just approached from two different angles.

That is a very difficult concept to swallow whole; but for the moment we’ll proceed on the assumption that each of the huge variety of ramified possibilities that could follow from any one given situation does really exist, somewhere. And the differences between those possible worlds can have radical consequences for our lives, and indeed for our very existence. (As a previous post – Fate, Grim or Otherwise – illustrated.) Indeed, perhaps you could end up dead in one but still living in another.

Quantum Russian roulette

So if your existence branches into one universe where you are still living, breathing and conscious, and another where you are not, where are you going to find yourself after that critical moment? Since it doesn’t make sense to suppose you could find yourself dead, we must suppose that your conscious life continues into one of the worlds where you are alive.

This notion has been developed by Tegmark into a rather scary thought experiment (another version of which was also formulated by Hans Moravec some years earlier). Suppose we set up a sort of machine gun that fires a bullet every second. Only it is modified so that, at each second, some quantum mechanism like the decay of an atom determines, with a 50/50 probability, whether the bullet is actually fired. If it is not, the gun just produces a click. Now it’s the job of the intrepid experimenter, willing to take any risk in the cause of his work, to put his head in front of the machine gun.

According to the theory we have been describing, he can only experience those universes in which he will survive. Before placing his head by the gun, he’ll be hearing:
BangClickBangBangClickClickClickBang…  …etc

But with his head in place, it’ll be:
ClickClickClickClickClickClickClickClick…   …and so on.

Suppose he keeps his head there for half a minute: the probability of all the outcomes being clicks will be 1 in 2³⁰ – over a billion to one against. But it’s that one-in-a-billion universe, with the sequence of clicks only, that he’ll find himself in. (Spare a thought for the billion-plus universes in which his colleagues are dealing with the outcome, funerals are being arranged and coroners’ courts convened.)
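To see how the survivor’s-eye view drops out of the arithmetic, here is a small Python simulation – my own sketch of the thought experiment as described, not code from Tegmark. It samples a large number of branches and counts those in which the experimenter hears nothing but clicks:

```python
import random

SECONDS = 30          # half a minute with his head in front of the gun
BRANCHES = 1_000_000  # sampled 'universes'

survivors = 0
for _ in range(BRANCHES):
    # Each second is an independent 50/50 quantum event:
    # True = click (no bullet fired), False = bang.
    if all(random.random() < 0.5 for _ in range(SECONDS)):
        survivors += 1  # a branch consisting of clicks only

print(f"surviving branches: {survivors} out of {BRANCHES:,}")
print(f"theoretical odds: 1 in {2**SECONDS:,}")  # 1 in 1,073,741,824
```

Run it and you will almost certainly see no survivors at all among a million sampled branches – that is the third-person view. The theory’s claim is that the experimenter’s own experience nevertheless threads through the one-in-a-billion branch that is all clicks.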

Real immortality

Things become more disconcerting still if we move outside the laboratory into the world at large. At the moment of any given person’s death, obviously things could have been different in such a way that they might have survived that moment. In other words, there is a world in which the person continues to live – and as we have seen, that’s the one they will experience. But if this applies to every death event, then – subjectively – we must continue to live into an indefinitely extended old age. Each of us, on this account, will find herself or himself becoming the oldest person on earth.

A natural reaction to this argument is that, intuitively, it can’t be right. What if someone finds themselves on a railway track with a train bearing down on them and no time to jump out of the way? Or, for that matter, terminally ill? And indeed Tegmark points out that, typically, death is the ultimate upshot of a series of non-fatal events (cars swerving, changes in body cells), rather than a single, once-and-for-all, dead-or-alive event. So perhaps we arrive at this unsettling conclusion only by considerably oversimplifying the real situation.

But it seems to me that what is compelling about considerations of this sort is that they do lead us to take a bracing, if slightly unnerving, walk on the unstable, crumbling cliff-edge which forms the limits of our knowledge. Which always leads me to the suspicion, as it did for JBS Haldane, that the world is ‘not only queerer than we suppose, but queerer than we can suppose’. And that’s a suitable thought on which to end this blogging year.


*Tegmark, Max, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Allen Lane/Penguin, 2014

The Mathematician and the Surgeon

Commuting days until retirement: 108

After my last post, which, among other things, compared differing attitudes to death and its aftermath (or absence of one) on the part of Arthur Koestler and George Orwell, here’s another fruitful comparison. It seemed to arise by chance from my next two commuting books, and each of the two people I’m comparing, as before, has his own characteristic perspective on that matter. Unlike my previous pair both could loosely be called scientists, and in each case the attitude expressed has a specific and revealing relationship with the writer’s work and interests.

The Mathematician

The first writer, whose book I came across by chance, has been known chiefly for mathematical puzzles and games. Martin Gardner was born in Oklahoma USA in 1914; his father was an oil geologist, and it was a conventionally Christian household. Although not trained as a mathematician, and going into a career as a journalist and writer, Gardner developed a fascination with mathematical problems and puzzles which informed his career – hence the justification for his half of my title.

Martin Gardner

Gardner as a young man (Wikimedia)

This interest continued to feed the constant books and articles he wrote, and he was eventually asked to write the Scientific American column Mathematical Games, which ran from 1956 until the mid 1980s, and for which he became best known; his enthusiasm and sense of fun shines through the writing of these columns. At the same time he was increasingly concerned with the many types of fringe beliefs that had no scientific foundation, and was a founder member of CSICOP, the organisation dedicated to the exposing and debunking of pseudoscience. Back in February last year I mentioned one of its other well-known members, the flamboyant and self-publicising James Randi. By contrast, Gardner was mild-mannered and shy, averse to public speaking and never courting publicity. He died in 2010, leaving behind him many admirers and a two-yearly convention – the ‘Gathering for Gardner‘.

Before learning more about him recently, and reading one of his books, I had known his name only from the Mathematical Games column, and heard of his rigid rejection of things unscientific. I imagined some sort of flinty atheist, probably with a hard-nosed contempt for any fanciful or imaginative leanings – however sane and unexceptionable they might be – towards what might be thought of as things of the soul.

How wrong I was. His book that I’ve recently read, The Whys of a Philosophical Scrivener, consists of a series of chapters with titles of the form ‘Why I am not a…’ and he starts by dismissing solipsism (who wouldn’t?) and various forms of relativism; it’s a little more unexpected that determinism also gets short shrift. But in fact by this stage he has already declared that

I myself am a theist (as some readers may be surprised to learn).

I was surprised, and also intrigued. Things were going in an interesting direction. But before getting to the meat of his theism he spends a good deal of time dealing with various political and economic creeds. The book was written in the mid 80s, not long before the collapse of communism, which he seems to be anticipating (Why I am not a Marxist). But equally he has little time for Reagan or Thatcher, laying bare the vacuity of their over-simplistic political nostrums (Why I am not a Smithian).

Soon after this, however, he is striding into the longer grass of religious belief: Why I am not a Polytheist; Why I am not a Pantheist – so what is he? The next chapter heading is a significant one: Why I do not Believe the Existence of God can be Demonstrated. This is the key, it seems to me, to Gardner’s attitude – one to which I find myself sympathetic. Near the beginning of the book we find:

My own view is that emotions are the only grounds for metaphysical leaps.

I was intrigued by the appearance of the emotions in this context: here is a man whose day job is bound up with his fascination for the powers of reason, but who is nevertheless acutely conscious of the limits of reason. He refers to himself as a ‘fideist’ – one who believes in a god purely on the basis of faith, rather than any form of demonstration, either empirical or through abstract logic. And if those won’t provide a basis for faith, what else is there but our feelings? This puts Gardner nicely at odds with the modish atheists of today, like Dawkins, who never tires of telling us that he too could believe if only the evidence were there.

But at the same time he is squarely in a religious tradition which holds that ultimate things are beyond the instruments of observation and logic that are so vital to the secular, scientific world of today. I can remember my own mother – unlike Gardner a conventional Christian believer – being very definite on that point. And it reminds me of some of the writings of Wittgenstein; Gardner does in fact refer to him, in the context of the freewill question. I’ll let him explain:

A famous section at the close of Ludwig Wittgenstein’s Tractatus Logico-Philosophicus asserts that when an answer cannot be put into words, neither can the question; that if a question can be framed at all, it is possible to answer it; and that what we cannot speak about we should consign to silence. The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

This mirrors some of my own thoughts about that particular philosophical problem – a far more slippery one than those on either side of it often claim, in my opinion (I think that may be a topic for a future post). I can add that Gardner was also on the unfashionable side of the question which came up in my previous post – that of an afterlife; and again he holds this out as a matter of faith rather than reason. He explores the philosophy of personal identity and continuity in some detail, always concluding with the sentiment ‘I do not know. Do not ask me.’ His underlying instinct seems to be that there has to be something more than our bodily existence, given that our inner lives are so inexplicable from the objective point of view. ‘By faith, I hope and believe that you and I will not disappear for ever when we die.’ By contrast, Arthur Koestler, you may remember, wrote in his suicide note of ‘timid hopes for a de-personalised after-life’ – but, as it turned out, these hopes were based partly on the sort of parapsychological evidence which was anathema to Gardner.

And of course Gardner was acutely aware of another related mystery – that of consciousness, which he finds inseparable from the issue of free will:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will… Nor can I imagine myself having free will without being conscious.

He expresses utter dissatisfaction with the approach of arch-physicalists such as Daniel Dennett, who, as he says, ‘explains consciousness by denying that it exists’. (I attempted to puncture this particular balloon in an earlier post.)

Martin Gardner

Gardner in later life (Konrad Jacobs / Wikimedia)

Gardner places himself squarely within the ranks of the ‘mysterians’ – a deliberately derisive label applied by their opponents to those thinkers who conclude that these matters are mysteries which are probably beyond our capacity to solve. Among their ranks is Noam Chomsky: Gardner cites a 1983 interview with the grand old man of linguistics, in which he expresses his attitude to the free will problem (scroll down to see the relevant passage).

The Surgeon

And so to the surgeon of my title, and if you’ve read one of my other blog posts you will already have met him – he’s a neurosurgeon named Henry Marsh, and I wrote a post based on a review of his book Do No Harm. Well, now I’ve read the book, and found it as impressive and moving as the review suggested. Unlike many in his profession, Marsh is a deeply humble man who is disarmingly honest about the emotional impact of the work he does. He is simultaneously compelled towards, and fearful of, the enormous power of the neurosurgeon both to save and to destroy. His narrative swings between tragedy and elation, by way of high farce when he describes some of the more ill-conceived management ‘initiatives’ at his hospital.

A neurosurgical operation

A neurosurgical operation (Mainz University Medical Centre)

The interesting point of comparison with Gardner is that Marsh – a man who daily manipulates what we might call physical mind-stuff – the brain itself – is also awed and mystified by its powers:

There are one hundred billion nerve cells in our brains. Does each one have a fragment of consciousness within it? How many nerve cells do we require to be conscious or to feel pain? Or does consciousness and thought reside in the electrochemical impulses that join these billions of cells together? Is a snail aware? Does it feel pain when you crush it underfoot? Nobody knows.

The same sense of mystery and wonder as Gardner’s; but approached from a different perspective:

Neuroscience tells us that it is highly improbable that we have souls, as everything we think and feel is no more or no less than the electrochemical chatter of our nerve cells… Many people deeply resent this view of things, which not only deprives us of life after death but also seems to downgrade thought to mere electrochemistry and reduces us to mere automata, to machines. Such people are profoundly mistaken, since what it really does is upgrade matter into something infinitely mysterious that we do not understand.

Henry Marsh

This of course is the perspective of a practical man – one who is emphatically working at the coal face of neurology, and far more familiar with the actual material of brain tissue than armchair speculators like me. While I was reading his book, although deeply impressed by this man’s humanity and integrity, what disrespectfully came to mind was a piece of irreverent humour once told to me by a director of a small company I used to work for, which was closely connected to the medical industry. It was a sort of handy cut-out-and-keep guide to the different types of medical practitioner:

Surgeons do everything and know nothing. Physicians know everything and do nothing. Psychiatrists know nothing and do nothing. Pathologists know everything and do everything – but the patient’s dead, so it’s too late.

Grossly unfair to all of them, of course, but nonetheless funny, and perhaps containing a certain grain of truth. Marsh, belonging to the first category, perhaps embodies some of the aversion to dry theory that this caricature hints at: what matters to him ultimately, as a surgeon, is the sheer down-to-earth physicality of his work, guided by the gut instincts of his humanity. We hear from him about some members of his profession who seem aloof from the enormity of the dangers it embodies, and seem able to proceed calmly and objectively with what he sees almost as the detachment of the psychopath.

Common ground

What Marsh and Gardner seem to have in common is the instinct that dry, objective reasoning only takes you so far. Both trust the power of their own emotions, and their sense of awe. Both, I feel, are attempting to articulate the same insight, but from widely differing standpoints.

Two passages, one from each book, seem to crystallize both the similarities and differences between the respective approaches of the two men, both of whom seem to me admirably sane and perceptive, if radically divergent in many respects. First Gardner, emphasising in a Wittgensteinian way that describing how things appear to be is perhaps a more useful activity than attempting to pursue ultimate reasons:

There is a road that joins the empirical knowledge of science with the formal knowledge of logic and mathematics. No road connects rational knowledge with the affirmations of the heart. On this point fideists are in complete agreement. It is one of the reasons why a fideist, Christian or otherwise, can admire the writings of logical empiricists more than the writings of philosophers who struggle to defend spurious metaphysical arguments.

And now Marsh – mystified, as we have seen, as to how the brain-stuff he manipulates daily can be the seat of all experience – having a go at reading a little philosophy in the spare time between sessions in the operating theatre:

As a practical brain surgeon I have always found the philosophy of the so-called ‘Mind-Brain Problem’ confusing and ultimately a waste of time. It has never seemed a problem to me, only a source of awe, amazement and profound surprise that my consciousness, my very sense of self, the self which feels as free as air, which was trying to read the book but instead was watching the clouds through the high windows, the self which is now writing these words, is in fact the electrochemical chatter of one hundred billion nerve cells. The author of the book appeared equally amazed by the ‘Mind-Brain Problem’, but as I started to read his list of theories – functionalism, epiphenomenalism, emergent materialism, dualistic interactionism or was it interactionistic dualism? – I quickly drifted off to sleep, waiting for the nurse to come and wake me, telling me it was time to return to the theatre and start operating on the old man’s brain.

I couldn’t help noticing that these two men – one unconventionally religious and the other not religious at all – seem between them to embody those twin traditional pillars of the religious life: faith and works.

On Being Set Free

Commuting days until retirement: 133

The underlying theme of this blog is retirement, and it will be fairly obvious to most of my readers by now – perhaps indeed to all three of you – that I’m looking forward to it. It draws closer; I can almost hear the ‘Happy retirement’ wishes from colleagues – some expressed perhaps through ever-so-slightly gritted teeth as they look forward to many more years in harness, while I am put out to graze. But of course there’s another side to that: they will also be keeping silent about the thought that being put out to graze also carries with it the not too distant prospect of the knacker’s yard – something they rarely think about in relation to themselves.

Because in fact the people I work with are generally a lot younger than I am – in a few cases younger than my children. No one in my part of the business has ever actually retired, as opposed to leaving for another job. My feeling is that to stand up and announce that I am going to retire will be to introduce something alien and faintly distasteful into the prevailing culture, like telling everyone about your arthritis at a 21st birthday party.

The revolving telescope

For most of my colleagues, retirement, like death, is something that happens to other people. In my experience, it’s around the mid to late 20s that such matters first impinge on the consciousness – indistinct and out of focus at first, something on the edge of the visual field. It’s no coincidence, I think, that it’s around that same time that one’s perspective on life reverses, and the general sense that you’d like to be older and more in command of things starts to give way to an awareness of vanishing youth. The natural desire for what is out of reach reorientates its outlook, swinging through 180 degrees like a telescope on a revolving stand.

But I find that, having reached the sort of age I am now, it doesn’t do to turn your back on what approaches. It’s now sufficiently close that it is the principal factor defining the shape of the space you have available in which to organise your life, and you do much better not to pretend it isn’t there, but to be realistically aware. We have all known those who nevertheless keep their backs resolutely turned, and they often cut somewhat pathetic figures: a particular example I remember was a man (who would almost certainly be dead by now) who didn’t seem to accept his failing prowess at tennis as an inevitable corollary of age, but rather saw it as a series of inexplicable failures for which he blamed himself. And there are all those celebrities you see with skin stretched ever tighter over their facial bones as they bring in the friendly figure of the plastic surgeon to obscure the view of where they are headed.

Perhaps Ray Kurzweil, who featured in my previous post, is another example, with his 250 supplement tablets each day and his faith in the abilities of technology to provide him with some sort of synthetic afterlife. Given that he has achieved a generous measure of success in his natural life, he perhaps has less need than most of us to seek a further one; but maybe it works the other way, and a well-upholstered ego is more likely to feel a continued existence as its right.

Enjoying the view

Old and Happy

Happiness is not the preserve of the young (Wikimedia Commons)

But the fact is that for most of us the impending curtailment of our time on earth brings a surprising sense of freedom. With nothing left to strive for – no anxiety about whether this or that ambition will be realised – some sort of summit is achieved. The effort is over, and we can relax and enjoy the view. More than one survey has found that people in their seventies are nowadays collectively happier than any other age group: here are reports of three separate studies between 2011 and 2014, in Psychology Today, The Connexion, and the Daily Mail. Those adverts for pension providers and so on, showing apparently radiant wrinkly couples feeding the ducks with their grandchildren, aren’t quite as wide of the mark as you might think.

Speaking for myself, I’ve never been excessively troubled by feelings of ambition, and have probably enjoyed a relatively stress-free, if perhaps less prosperous, life as a result. And the prospect of an existence where I am no longer even expected to show such aspirations is part of the attraction of retirement. But of course there remain those for whom the fact of extinction gives rise to wholly negative feelings, but who are at the same time brave enough to face it fair and square, without any psychological or cosmetic props. A prime example in recent literature is Philip Larkin, who seems to make frequent appearances in this blog. While famously afraid of death, he wrote luminously about it. Here, in his poem The Old Fools, he evokes images of the extreme old age which he never, in fact, reached himself:

Philip Larkin

Philip Larkin (Fay Godwin)

Perhaps being old is having lighted rooms
Inside your head, and people in them, acting.
People you know, yet can’t quite name; each looms
Like a deep loss restored, from known doors turning,
Setting down a lamp, smiling from a stair, extracting
A known book from the shelves; or sometimes only
The rooms themselves, chairs and a fire burning,
The blown bush at the window, or the sun’s
Faint friendliness on the wall some lonely
Rain-ceased midsummer evening.

Dream and reality seem to fuse at this ultimate extremity of conscious experience as Larkin portrays it; and it’s the snuffing out of consciousness that a certain instinct in us finds difficult to take – indeed, to believe in. Larkin, by nature a pessimist, certainly believed in it, and dreaded it. But cultural traditions of many kinds have not accepted extinction as inevitable: we are not obliviously functioning machines but the subjects of experiences like the ones Larkin writes about. As such, it has been held, we have immortal souls which transcend the gross physical world – so why should we not survive death? (Indeed, according to some creeds, why should we not have existed before birth?)

Timid hopes

Well, whatever immortal souls might be, I find it difficult to make out a case for individual survival, and this is perhaps the majority view in the secular culture I inhabit. It seems pretty clear to me that my own distinguishing characteristics are indissolubly linked to my physical body: damage to the brain, we know, can change the personality, and perhaps rob us of our memories and past experience, which most quintessentially define us as individuals. But even though our consciousness can be temporarily wiped out by sleep or anaesthetics, we have no notion whatever of how to account for it in physical terms – and so there remains, for me anyway, the faint suggestion that some aspect of our experience could be independent of our bodily existence.

You may or may not accept both of these beliefs – the temporality of the individual and the transcendence of consciousness. But if you do, then the possibility seems to arise of some kind of disembodied, collective sentience, beyond our normal existence. And this train of thought always reminds me of the writer Arthur Koestler, who died by suicide in 1983 at the age of 77. An outspoken advocate of voluntary euthanasia, he’d been suffering in later life from Parkinson’s disease, and had then contracted a progressive, incurable form of leukaemia. His suicide note (which turned out to have been written several months before his death) included the following passage:

I wish my friends to know that I am leaving their company in a peaceful frame of mind, with some timid hopes for a de-personalised after-life beyond due confines of space, time and matter and beyond the limits of our comprehension. This ‘oceanic feeling’ has often sustained me at difficult moments, and does so now, while I am writing this.

Death sentence

In fact Koestler had, since he was quite young, been more closely acquainted with death than most of us. Born in Hungary, he twice visited Spain during its civil war in the 1930s, in the course of his earlier career as a journalist and political writer. He made his first visit as an undercover investigator of the Fascist movement, being himself at that time an enthusiastic supporter of communism. A little later he returned to report from the Republican side, but was in Malaga when it was captured by Fascist troops. By now Franco had come to know of his anti-fascist writing, and he was imprisoned in Seville under sentence of death.

Koestler portrayed on the cover of the book

In his account of this experience, Dialogue with Death, he describes how prisoners would try to block their ears to avoid the nightly sound of a telephone call to the prison, when a list of prisoners’ names would be dictated and the men later led out and shot. His book is illuminating on the psychology of these conditions, and the violent emotional ups and downs he experienced:

One of my magic remedies was a certain quotation from a certain work of Thomas Mann’s; its efficacy never failed. Sometimes, during an attack of fear, I repeated the same verse thirty or forty times, for almost an hour, until a mild state of trance came on and the attack passed. I knew it was the method of the prayer-mill, of the African tom-tom, of the age-old magic of sounds. Yet in spite of my knowing it, it worked…
I had found out that the human spirit is able to call upon certain aids of which, in normal circumstances, it has no knowledge, and the existence of which it only discovers in itself in abnormal circumstances. They act, according to the particular case, either as merciful narcotics or ecstatic stimulants. The technique which I developed under the pressure of the death-sentence consisted in the skilful exploitation of these aids. I knew, by the way, that at the decisive moment when I should have to face the wall, these mental devices would act automatically, without any conscious effort on my part. Thus I had actually no fear of the moment of execution; I only feared the fear which would precede that moment.

That there are emotional ‘ups’ at all seems surprising, but later he expands on one of them:

Often when I wake at night I am homesick for my cell in the death-house in Seville and, strangely enough, I feel that I have never been so free as I was then. This is a very strange feeling indeed. We lived an unusual life on that patio; the constant nearness of death weighed down and at the same time lightened our existence. Most of us were not afraid of death, only of the act of dying; and there were times when we overcame even this fear. At such moments we were free – men without shadows, dismissed from the ranks of the mortal; it was the most complete experience of freedom that can be granted a man.

Perhaps, in a diluted, much less intense form, the happiness of the over 70s revealed by the surveys I mentioned has something in common with this.

Koestler was possibly the only writer of the front rank ever to be held under sentence of death, and the experience informed his novel Darkness at Noon. It is the second in a trilogy of politically themed novels, and its protagonist, Rubashov, has been imprisoned by the authorities of an unnamed totalitarian state which appears to be a very thinly disguised portrayal of Stalinist Russia. Rubashov has been one of the first generation of revolutionaries in a movement which has hardened into an authoritarian despotism, and its leader, referred to only as ‘Number One’, is apparently eliminating rivals. Worn down by the interrogation conducted by a younger, hard-line apparatchik, Rubashov comes to accept that he has somehow criminally acted against ‘the revolution’, and eventually goes meekly to his execution.

Shades of Orwell

By the time of writing the novel, Koestler, like so many intellectuals of that era, had made the journey from an initial enthusiasm for Soviet communism to disillusion with, and opposition to, it. And reading Darkness at Noon, I was of course constantly reminded of Orwell’s Nineteen Eighty-Four, and the capitulation of Winston Smith as he comes to love Big Brother. Darkness at Noon predates 1984 by nine years, and nowadays has been somewhat eclipsed by Orwell’s much better known novel. The two authors had met briefly during the Spanish civil war, where Orwell was actively involved in fighting against fascism, and met again and discussed politics around the end of the war. It seems clear that Orwell, having written his own satire on the Russian revolution in Animal Farm, eventually wrote 1984 under the conscious influence of Koestler’s novel. But they are of course very different characters: you get the feeling that to Orwell, with his both-feet-on-the-ground Englishness, Koestler might have seemed a rather flighty and exotic creature.

Orwell (aka Eric Blair) from the photo on his press pass (NUJ/Wikimedia Commons)

In fact, during the period between the publications of Darkness at Noon and 1984, Orwell wrote an essay on Arthur Koestler – probably while he was still at work on Animal Farm. His view of Koestler’s output is mixed: on one hand he admires Koestler as a prime example of the continental writers on politics whose views have been forged by hard experience in this era of political oppression – as opposed to English commentators who merely strike attitudes towards the turmoil in Europe and the East, while viewing it from a relatively safe distance. Darkness at Noon he regards as a ‘masterpiece’ – its common ground with 1984 is not, it seems, a coincidence. (Orwell’s review of Darkness at Noon in the New Statesman is also available.)

On the other hand he finds much of Koestler’s work unsatisfactory, a mere vehicle for his aspirations towards a better society. Orwell quotes Koestler’s description of himself as a ‘short-term pessimist’, but also detects a utopian undercurrent which he feels is unrealistic. His own views are expressed as something more like long-term pessimism, doubting whether man can ever replace the chaos of the mid-twentieth century with a society that is both stable and benign:

Nothing is in sight except a welter of lies, hatred, cruelty and ignorance, and beyond our present troubles loom vaster ones which are only now entering into the European consciousness. It is quite possible that man’s major problems will NEVER be solved. But it is also unthinkable! Who is there who dares to look at the world of today and say to himself, “It will always be like this: even in a million years it cannot get appreciably better?” So you get the quasi-mystical belief that for the present there is no remedy, all political action is useless, but that somewhere in space and time human life will cease to be the miserable brutish thing it now is. The only easy way out is that of the religious believer, who regards this life merely as a preparation for the next. But few thinking people now believe in life after death, and the number of those who do is probably diminishing.

In death as in life

Orwell’s remarks neatly return me to the topic I have diverged from. If we compare the deaths of the two men, they seem to align with their differing attitudes in life. Both died in the grip of a disease – Orwell succumbing to tuberculosis after his final, gloomy novel was completed, and Koestler escaping his leukaemia by suicide but still expressing ‘timid hopes’.

After the war Koestler had adopted England as his country and henceforth wrote only in English – most of his previous work had been in German. In being allowed a longer life than Orwell in which to pursue his writing, he had moved on from politics to write widely in philosophy and the history of ideas, although never really becoming a member of the intellectual establishment. These are areas which you feel would always have been outside the range of the more down-to-earth Orwell, who was strongly moral, but severely practical. Orwell goes on to say, in the essay I quoted: ‘The real problem is how to restore the religious attitude while accepting death as final.’ This so much reflects his attitudes – he habitually enjoyed attending Anglican church services, but without being a believer. He continues, epigrammatically:

Men can only be happy when they do not assume that the object of life is happiness. It is most unlikely, however, that Koestler would accept this. There is a well-marked hedonistic strain in his writings, and his failure to find a political position after breaking with Stalinism is a result of this.

Again, we strongly feel the tension between their respective characters: Orwell, with his English caution, and Koestler with his continental adventurism. In fact, Koestler had a reputation as something of an egotist and aggressive womaniser. Even his suicide reflected this: it was a double suicide with his third wife, who was over 20 years younger than he was and in good health. Her accompanying note explained that she couldn’t continue her life without him. Friends confirmed that she had entirely subjected her life to his: but to what extent this was a case of bullying, as some claimed, will never be known.

Of course there was much common ground between the two men: both were always on the political left, and both, as you might expect, were firmly opposed to capital punishment: anyone who needs convincing should read Orwell’s autobiographical essay A Hanging. And Koestler wrote a more prosaic piece – a considered refutation of the arguments for judicial killing – in his book Reflections on Hanging; it was written in the 1950s, when, on Koestler’s own account, some dozen hangings were occurring in Britain each year.

But while Orwell faced his death stoically, Koestler continued his dalliance with the notion of some form of hereafter; you feel that, as with Kurzweil, a well-developed ego did not easily accept the thought of extinction. In writing this post, I discovered that he had been one of a number of intellectual luminaries who contributed to a collection of essays under the title Life after Death, published in the 1970s. Keen to find a more detailed statement of his views, I actually found his piece rather disappointing. First I’ll sketch in a bit of background to clarify where I think he is coming from.

Back in Victorian times there was much interest in evidence of ‘survival’ – seances and table-rapping sessions were popular, and fraudulent mediums were prospering. Reasons for this are not hard to find: traditional religion, while strong, faced challenges. Steam-powered technology was burgeoning, the world increasingly seemed to be a wholly mechanical affair, and Darwinism had arrived to encourage the trend towards materialism. In 1882 the Society for Psychical Research was formed, becoming a focus both for those who were anxious to subvert the materialist world view, and those who wanted to investigate the phenomena objectively and seek intellectual clarity.

But it wasn’t long before the revolution in physics, with relativity and quantum theory, exploded the mechanical certainties of the Victorians. At the same time millions suffered premature deaths in two world wars, giving ample motivation to believe that those lost somehow still existed and could maybe even be contacted.

Arthur Koestler

Koestler in later life (Eric Koch/Wikimedia Commons)

This seems to be the background against which Koestler’s ideas about the possibility of an afterlife had developed. He leans a lot on the philosophical writings of the quantum physicist Erwin Schrodinger, and seeks to base a duality of mind and matter on the wave/particle duality of quantum theory. There’s a lot of talk about psi fields and suchlike – the sort of terminology which was already sounding dated at the time he was writing. The essay seemed to me rather backward-looking, sitting more comfortably with the inchoate fringe beliefs of the mid 20th century than the confident secularism of Western Europe today.

A rebel to the end

I think Koestler was well aware of the way things were going, but with characteristic truculence reacted against them. He wrote a good deal on topics that clash with mainstream science, such as the significance of coincidence, and in his will left a legacy to establish a chair of parapsychology, which was duly set up at Edinburgh University, and still exists.

This was clearly a deliberate attempt to cock a snook at the establishment, and while he was not an attractive character in many ways, I do find this defiant stance makes me warm to him a little. While I am sure I would have found Orwell more decent and congenial to know personally, Koestler is the more intellectually exciting of the two. I think Orwell might have found Koestler’s notion of the sense of freedom when facing death difficult to understand – but maybe this might have changed had he survived into his seventies. And in a general sense I share Koestler’s instinct that in human consciousness there is far more to understand than we have so far been able to, as it were, get our minds around.

Retirement, for me, will certainly bring freedom – not only freedom from the strained atmosphere of worldly ambition and corporate business-speak (itself an Orwellian development) but more of my own time to reflect further on the matters I’ve spoken of here.

iPhobia

Commuting days until retirement: 238

If you have ever spoken at any length to someone who is suffering from a diagnosed mental illness − depression, say, or obsessive compulsive disorder − you may have come to feel that what they are experiencing differs only in degree from your own mental life, rather than being something fundamentally different (assuming, of course, that you are lucky enough not to have been similarly ill yourself). It’s as if mental illness, for the most part, is not something entirely alien to the ‘normal’ life of the mind, but just a distortion of it. Rather than the presence of a new unwelcome intruder, it’s more that the familiar elements of mental functioning have lost their usual proportion to one another. If you spoke to someone who was suffering from paranoid feelings of persecution, you might just feel an echo of them in the back of your own mind: those faint impulses that are immediately squashed by the power of your ability to draw logical common-sense conclusions from what you see about you. Or perhaps you might encounter someone who compulsively and repeatedly checks that they are safe from intrusion; but we all sometimes experience that need to reassure ourselves that a door is locked, when we know perfectly well that it really is.

That uncomfortably close affinity between true mental illness and everyday neurotic tics is nowhere more obvious than with phobias. A phobia serious enough to be clinically significant can make it impossible for the sufferer to cope with everyday situations; while on the other hand nearly every family has a member (usually female, but not always) who can’t go near the bath with a spider in it, as well as a member (usually male, but not always) who nonchalantly picks the creature up and ejects it from the house. (I remember that my own parents went against these sexual stereotypes.) But the phobias I want to focus on here are those two familiar opposites − claustrophobia and agoraphobia.

We are all phobics

In some degree, virtually all of us suffer from them, and perfectly rationally so. Anyone would fear, say, being buried alive, or, at the other extreme, being launched into some limitless space without hand or foothold, or any point of reference. And between the extremes, most of us have some degree of bias one way or the other. Especially so − and this is the central point of my post − in an intellectual sense. I want to suggest that there is such a phenomenon as an intellectual phobia: let’s call it an iphobia. My meaning is not, as the Urban Dictionary would have it, an extreme hatred of Apple products, or a morbid fear of breaking your iPhone. Rather, I want to suggest that there are two species of thinkers: iagorophobes and iclaustrophobes, if you’ll allow me such ugly words.

A typical iagorophobe will in most cases cleave to scientific orthodoxy. Not for her the wide open spaces of uncontrolled, rudderless, speculative thinking. She’s reassured by a rigid theoretical framework, comforted by predictability; any unexplained phenomenon demands to be brought into the fold of existing theory, for any other way, it seems to her, lies madness. But for the iclaustrophobe, on the other hand, it’s intolerable to be caged inside that inflexible framework. Telepathy? Precognition? Significant coincidence? Of course they exist; there is ample anecdotal evidence. If scientific orthodoxy can’t embrace them, then so much the worse for it − the incompatibility merely reflects our ignorance. To this the iagorophobe would retort that we have no logical grounds whatever for such beliefs. If we have nothing but anecdotal evidence, we have no predictability; and phenomena that can’t be predicted can’t therefore be falsified, so any such beliefs fall foul of the Popperian criterion of scientific validity. But why, asks the iclaustrophobe, do we have to be constrained by some arbitrary set of rules? These things are out there − they happen. Deal with it. And so the debate goes.

Archetypal iPhobics

Widening the arena more than somewhat, perhaps the archetypal iclaustrophobe was Plato. For him, the notion that what we see was all we would ever get was anathema – and he eloquently expressed his iclaustrophobic response to it in his parable of the cave. For him true reality was immeasurably greater than the world of our everyday existence. And of course he is often contrasted with his pupil Aristotle, for whom what we can see is, in itself, an inexhaustibly fascinating guide to the nature of our world − no further reality need be posited. And Aristotle, of course, is the progenitor of the syllogism and deductive logic. In Raphael’s famous fresco The School of Athens, the relevant detail of which you see below, Plato, on the left, indicates his world of forms beyond our immediate reality by pointing heavenward, while Aristotle’s gesture emphasises the earth, and the here and now. Raphael has them exchanging disputatious glances, which for me express the hostility that exists between the opposed iphobic world-views to this day.

School of Athens

Detail from Raphael’s School of Athens in the Vatican, Rome (Wikimedia Commons)

iPhobia today

It’s not surprising that there is such hostility; I want to suggest that we are talking not of a mere intellectual disagreement, but a situation where each side insists on a reality to which the other has a strong (i)phobic reaction. Let’s look at a specific present-day example, from within the WordPress forums. There’s a blog called Why Evolution is True, which I’d recommend as a good read. It’s written by Jerry Coyne, a distinguished American professor of biology. His title is obviously aimed principally at the flourishing belief in creationism which exists in the US − Coyne has extensively criticised the so-called Intelligent Design theory. (In in my view, that controversy is not a dispute between the two iphobias I have described, but between two forms of iagoraphobia. The creationists, I would contend, are locked up in an intellectual ghetto of their own making, since venturing outside it would fatally threaten their grip on their frenziedly held, narrowly based faith.)

Jerry Coyne

Jerry Coyne (Zooterkin/Wikimedia Commons)

But I want to focus on another issue highlighted in the blog, which in this case is a conflict between the two phobias. A year or so ago Coyne took issue with the fact that the maverick scientist Rupert Sheldrake was given a platform to explain his ideas in the TED forum. Note Coyne’s use of the hate word ‘woo’, often used by the orthodox in science as an insulting reference to the unorthodox. They would defend it, mostly with justification, as characterising what is mystical or wildly speculative, and without evidential basis − but I’d claim there’s more to it than that: it’s also the iagorophobe’s cry of revulsion.

Rupert Sheldrake

Rupert Sheldrake (Zereshk/Wikimedia Commons)

Coyne has strongly attacked Sheldrake on more than one occasion: is there anything that can be said in Sheldrake’s defence? As a scientist he has an impeccable pedigree, having a Cambridge doctorate and fellowship in biology. It seems that he developed his unorthodox ideas early on in his career, central among which is his notion of ‘morphic resonance’, whereby animal and human behaviour, and much else besides, is influenced by previous similar behaviour. It’s an idea that I’ve always found interesting to speculate about − but it’s obviously also a red rag to the iagorophobic bull. We can also mention that he has been careful to describe how his theories can be experimentally confirmed or falsified, thus claiming scientific status for them. He also invokes his ideas to explain aspects of the formation of organisms that, to date, haven’t been explained by the action of DNA. But increasing knowledge of the significance of what was formerly thought of as ‘junk DNA’ is going a long way towards filling these explanatory gaps, so Sheldrake’s position looks particularly weak here. And in his TED talks he not only defends his own ideas, but attacks many of the accepted tenets of current scientific theory.

However, I’d like to return to the debate over whether Sheldrake should be denied his TED platform. Coyne’s comments led to a reconsideration of the matter by the TED editors, who opened a public forum for discussion on the matter. The ultimate, not unreasonable, decision was that the talks were kept available, but separately from the mainstream content. Coyne said he was surprised by the level of invective arising from the discussion; but I’d say this is because we have here a direct confrontation between iclaustrophobes and iagorophobes − not merely a polite debate, but a forum where each side taunts the other with notions for which the opponents have a visceral revulsion. And it has always been so; for me the iphobia concept explains the rampant hostility which always characterises debates of this type − as if the participants are not merely facing opposed ideas, but respective visions which invoke in each a deeply rooted fear.

I should say at this point that I don’t claim any godlike objectivity in this matter; I’m happy to come out of the closet as an iclaustrophobe myself. This doesn’t mean in my case that I take on board any amount of New Age mumbo-jumbo; I try to exercise rational scepticism where it’s called for. But as an example, let’s go back to Sheldrake: he’s written a book about the observation that housebound dogs sometimes appear to show marked excitement at the moment their distant owner sets off to return home, although there’s no way they could have knowledge of the owner’s actions at that moment. I have no idea whether there’s anything in this − but the fact is that if it were shown to be true nothing would give me greater pleasure. I love mystery and inexplicable facts, and for me they make the world a more intriguing and stimulating place. But of course Coyne isn’t the only commentator who has dismissed the theory out of hand as intolerable woo. I don’t expect this matter to be settled in the foreseeable future, if only because it would be career suicide for any mainstream scientist to investigate it.

Science and iPhobia

Why should such a course of action be so damaging to an investigator? Let’s start by putting the argument that it’s a desirable state of affairs that such research should be eschewed by the mainstream. The success of the scientific enterprise is largely due to the rigorous methodology it has developed; progress has resulted from successive, well-founded steps of theorising and experimental testing. If scientists were to spend their time investigating every wild theory that was proposed, their efforts would become undirected and diffuse, and progress would be stalled. I can see the sense in this, and any self-respecting iagorophobe would endorse it. But against this, we can argue that progress in science often results from bold, unexpected ideas that come out of the blue (some examples in a moment). While this more restrictive outlook lends coherence to the scientific agenda, it can, just occasionally, exclude valuable insights. To explain why the restrictive approach holds sway I would look at how a person’s psychological make-up might influence their career choice. Most iagorophobes are likely to be attracted to the logical, internally consistent framework they would be working with as part of a scientific career; while those of an iclaustrophobic profile might be attracted in an artistic direction. Hence science’s inbuilt resistance to out-of-the-blue ideas.

Albert Einstein

Albert Einstein (Wikimedia Commons)

I may come from the iclaustrophobe camp, but I don’t want to claim that only people of that profile are responsible for great scientific innovations. Take Einstein, who may have had an early fantasy of riding on a light beam, but it was one which led him through rigorous mathematical steps to a vastly coherent and revolutionary conception. His essential iagorophobia is seen in his revulsion from the notion of quantum indeterminacy − his ‘God does not play dice’. Relativity, despite being wholly novel in its time, is often spoken of as a ‘classical’ theory, in the sense that it retains the mathematical precision and predictability of the Newtonian schema which preceded it.

Niels Bohr

Niels Bohr (Wikimedia Commons)

There was a long-standing debate between Einstein and Niels Bohr, the progenitor of the so-called Copenhagen interpretation of quantum theory, which held that different sub-atomic scenarios coexisted in ‘superposition’ until an observation was made and the wave function collapsed. Bohr, it seems to me, with his willingness to entertain wildly counter-intuitive ideas, was a good example of an iclaustrophobe; so it’s hardly surprising that the debate between him and Einstein was so irreconcilable − although it’s to the credit of both that their mutual respect never faltered.

Over to you

Are you an iclaustrophobe or an iagorophobe? A Plato or an Aristotle? A Sheldrake or a Coyne? A Bohr or an Einstein? Or perhaps not particularly either? I’d welcome comments from either side, or neither.

The Vault of Heaven

Commuting days until retirement: 250

Exeter Cathedral roof

The roof of Exeter Cathedral (Wanner-Laufer, Wikimedia Commons)

Thoughts are sometimes generated out of random conjunctions in time between otherwise unrelated events. Last week we were on holiday in Dorset, and depressing weather for the first couple of days drove us into the nearest city – Exeter, where we visited the cathedral. I had never seen it before and was more struck than I had expected to be. Stone and wood carvings created over the past 600 years decorate thrones, choir stalls and tombs, the latter bearing epitaphs ranging in tone from the stern to the whimsical. All this lies beneath the marvellous fifteenth-century vaulted roof – the most extensive known of the period, I learnt. Looking at this, and the cathedral’s astronomical clock dating from the same century, I imagined myself seeing them as a contemporary member of the congregation would have, and tried to share the medieval conception of the universe above that roof, reflected in the dial of the clock.

Astronomical Clock

The Astronomical Clock at Exeter Cathedral (Wikimedia Commons)

The other source of these thoughts was the book I happened to have finished that day: Max Tegmark’s Our Mathematical Universe*. He’s an MIT physics professor who puts forward the view (previously also hinted at in this blog) that reality is at bottom simply a mathematical object. He admits that it’s a minority view, scoffed at by many of his colleagues – but I have long felt a strong affinity for the idea. I have reservations about some aspects of the Tegmark view of reality, but not about one of its central planks – the belief that we live in one universe among a host of others. Probably to most people the thought is just a piece of science fiction fantasy – and it has certainly been exploited for all it’s worth by fiction authors in recent years. But in fact it is steadily gaining traction among professional scientists and philosophers as a true description of the universe – or rather multiverse, as it’s usually called in this context.

Nowadays there is a whole raft of differing notions of a multiverse, each deriving from separate theoretical considerations. Tegmark combines four different ones in the synthesis he presents in the book. But I think I am right in saying that the first time such an idea appeared in anything like a mainstream scientific context was in the PhD thesis of a 1950s student at Princeton in the USA – Hugh Everett.

The thesis appeared in 1957; its purpose was to present an alternative treatment of the quantum phenomenon known as the collapse of the wave function. A combination of theoretical and experimental results had come together to suggest that subatomic particles (or waves – the duality was a central idea here) existed as a cloud of possibilities until interacted with, or observed. The position of an electron, for example, could be defined with a mathematical function – the wave function of Schrödinger – which assigned only a probability to each putative location. If, however, we were to put this to the test – to measure its location in practice – we would have to do so by means of some interaction, and the answer that came back would be one specific position among the cloud of possibilities. But by carrying out such procedures repeatedly, it was shown that the probability of any specific result was given by the wave function. The approach to these results which became most widely accepted was the so-called ‘Copenhagen interpretation’ of Bohr and others, which held that all the possible locations co-existed in ‘superposition’ until the measurement was made and the wave function ‘collapsed’. Hence some of the more famous statements about the quantum world: Einstein’s dissatisfaction with the idea that ‘God plays dice’; and Schrödinger’s well-known thought experiment, aimed at testing the Copenhagen interpretation to destruction – the cat which is presumed to be simultaneously dead and alive until its containing box is opened and the result determined.
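
To put that in symbols – this is just the standard textbook statement of the Born rule, my gloss rather than anything peculiar to Everett – if an electron has wave function $\psi(x)$, the probability that a position measurement finds it in the small interval between $x$ and $x + dx$ is

$$P(x)\,dx = |\psi(x)|^2\,dx$$

and on the Copenhagen view the act of measurement then ‘collapses’ $\psi$ into a new function sharply peaked around whichever position was actually observed.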

Everett proposed that there was no such thing as the collapse of the wave function. Rather, each of the possible outcomes was represented in one real universe; it was as if the universe ‘branched’ into a number of equally real versions, and you, the observer, found yourself in just one of them. Of course, it followed that many copies of you each found themselves in slightly different circumstances, unlike the unfortunate cat which presumably only experienced those universes in which it lived. Needless to say, although Everett’s ideas were encouraged at the time by a handful of colleagues (Bryce DeWitt, John Wheeler) they were regarded for many years as a scientific curiosity and not taken further. Everett himself moved away from theoretical physics, and involved himself in practical technology, later developing an enjoyment of programming. He smoked and drank heavily and became obese, dying at the age of 51. Tegmark implies that this was at least partly a result of his neglect by the theoretical physics community – but there’s also evidence that his choices of career path and lifestyle derived from his natural inclinations.
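
For the mathematically inclined, Everett’s ‘relative state’ idea can be sketched in a line – again, a standard schematic rendering rather than a quotation from the thesis. A measurement merely entangles the observer with the system, and nothing ever collapses:

$$\Big(\sum_i c_i\,|x_i\rangle\Big) \otimes |\text{observer ready}\rangle \;\longrightarrow\; \sum_i c_i\,|x_i\rangle \otimes |\text{observer sees } x_i\rangle$$

Each term in the final superposition is one ‘branch’: a universe in which a definite outcome $x_i$ has been observed, with the squared weights $|c_i|^2$ matching the probabilities given by the wave function.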

During the last two decades of the 20th century, however, the multiverse idea began to be taken more seriously, and acquired some enthusiastic proponents, such as the British theorist David Deutsch and indeed Tegmark himself. In his book, Tegmark cites a couple of straw polls he took among theoretical physicists attending talks he gave, in 1997 and again in 2010. In the first case, out of 48 responses, 13 endorse the Copenhagen interpretation and 8 the multiverse idea (the remainder are mostly undecided, with a few endorsing alternative approaches). In 2010 there are 35 respondents, of whom none at all go for Copenhagen, and 16 for the multiverse (the undecideds remain about the same, falling from 18 to 16). This seems to show a decisive rise in support for multiple universes; although I do wonder whether it also reflects which physicists were prepared to attend Tegmark’s talks, his views having become better known by 2010. It so happens that the drop in respondent numbers – 13 – is the same as the vanished support for the Copenhagen interpretation.
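
Laying Tegmark’s numbers out side by side (the ‘other’ column is simply the arithmetic remainder, covering his ‘few endorsing alternative approaches’):

Year   Respondents   Copenhagen   Multiverse   Undecided   Other
1997       48            13            8           18        9
2010       35             0           16           16        3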

Nevertheless, it’s fair to say that the notion of a multiple universe as a reality has now entered the mainstream of theoretical science in a way that it had not done half a century ago. There’s an argument, I thought as I looked at that cathedral roof, that cosmology has been transformed even more radically in my lifetime than it had been in the preceding 500 years. The skill of the medieval stonemasons as they constructed the multiple rib vaults, and the wonder of the medieval congregation as they marvelled at the completed roof, were consciously directed to the higher vault of heaven that overarched the world of their time. Today those repeated radiating patterns might be seen as a metaphor for the multiple worlds that we are, perhaps, beginning dimly to discern.


*Tegmark, Max, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Allen Lane/Penguin, 2014.

Read All About It (Part 2)

Commuting days until retirement: 285

You’ll remember, if you have paid me the compliment of reading my previous post, that we started with that crumbling copy of the works of Shakespeare, incongruously finding itself on the moon. I diverged from the debate that I had inherited from my brother and sister-in-law to discuss what this suggested regarding ‘aboutness’, or intentionality. But now I’m going to get back to their disagreement itself. The specific question at issue was this: was the value – the intrinsic merit we ascribe to the contents of that book – going to be locked within it for all time and all places, or would its value perish with the human race, or indeed wither away as a result of its remote location? More broadly, is value of this sort – literary merit – something absolute and unchangeable, or a quality which exists only in relation to the opinions of certain people?

I went on to distinguish between ‘book’ as physical object in time and space, and ‘book’ regarded as a collection of ideas and their expression in language, and not therefore entirely rooted in any particular spatial or temporal location. It’s the latter, the abstract creation, which we ascribe value to. So immediately it looks as if the location of this particular object is neither here nor there, and the belief in absolutism gains support. If a work we admire is great regardless of where it is in time or space, then surely it is great for all times and all places?

But then, in looking at the quality of ‘aboutness’, or intentionality, we concluded that nothing possessed it except by virtue of being created by – or understood by – a conscious being such as a human. So, if a work can derive intentionality only through the cognition of human beings, it looks as if the same is true for literary merit, and we seem to have landed in a relativist position. On this view, to assert that something has a certain value is only to express an opinion, my opinion; if you like, it’s more a statement about me than about the work in question. Any idea of absolute literary merit dissolves away, to be replaced by a multitude of statements reflecting only the dispositions of individuals. And of course there may be as many opinions of a piece of work as readers or viewers – and perhaps more, given changes over time. Which isn’t to mention the creator herself or himself; anyone who has ever attempted to write anything with more pretensions than an email or a postcard will know how a writer’s opinion of their own work ricochets feverishly up and down between self-satisfaction and despair.

The dilemma: absolute or relative?

How do we reconcile these two opposed positions, each of which seems to flow from one of the conclusions in Part 1? I want to try and approach this question by way of a small example; I’m going to retrieve our Shakespeare from the moon and pick out a small passage. This is from near the start of Hamlet; it’s the ghost of Hamlet’s father speaking, starting to convey to his son a flavour of the evil that has been done:

I could a tale unfold whose lightest word
Would harrow up thy soul, freeze thy young blood,
Make thy two eyes, like stars, start from their spheres,
Thy knotted and combined locks to part
And each particular hair to stand on end,
Like quills upon the fretful porpentine.

This conveys a message very similar to something you’ll have heard quite often if you watch TV news:

This report contains scenes which some viewers may find upsetting.

So which of these two quotes has more literary value? Obviously a somewhat absurd example, since one is a piece of poetry that’s alive with fizzing imagery, and the other a plain statement with no poetic pretensions at all (although I would find it very gratifying if BBC newsreaders tried using the former). The point I want to make is that, in the first place, a passage will qualify as poetry through its use of the techniques we see here – imagery contributing to the subtle rhythm and shape of the passage, culminating in the completely unexpected and almost comical image of the porcupine.

Of course much poetry will try to use these techniques, and opinion will usually vary on how successful it is – on whether the poetry is good, bad or indifferent. And of course each opinion will depend on its owner’s prejudices and previous experiences; there’s a big helping of relativism here. But when it happens that a body of work, like the one I have taken my example from, becomes revered throughout a culture over a long period of time – well, it looks as if we have something like an absolute quality here. Particularly so, given that the plays have long been popular, even in translation, across many cultures.

Britain’s Royal Shakespeare Company has recently been introducing his work to primary school children from the age of five or so, and has found that they respond to it well, despite (or maybe because of) the complex language (a report here). I can vouch for this: one of the reasons I chose the passage I did was that I can remember quoting it to my son when he was around that age, and he loved it, being particularly taken with the ‘porpentine’.

So when something appeals to young, unprejudiced children, there’s certainly a case for claiming that it reflects something absolute about a set of qualities possessed by our race. You may object that I am missing the point of consigning Shakespeare to the moon – that it would be nothing more than a puzzle to some future civilisation, human-descended or otherwise, and therefore of only relative value. Well, in the last post I brought in the example of the forty thousand year old Spanish cave art, which I’ve reproduced again here.

A 40,000 year old cave painting in the El Castillo Cave in Puente Viesgo, Spain (www.spain.info)

In looking at this, we are in very much the same position as those future beings who are ignorant of Shakespeare. Here’s something whose meaning is opaque to us, and if we saw it transcribed on to paper we might dismiss it as the random doodlings of a child. But I argued before that there are reasons to suppose it was of immense significance to its creators. And if so, it may represent some absolute truth about them. It’s valuable to us as it was valuable to them – although admittedly in our case for rather different reasons. But there’s a link – we value it, I’d argue, because they did. The fact that we are ignorant of what it meant to them does not render it of purely relative value; it goes without saying that there are many absolute truths about the universe of which we are ignorant. And one of them is the significance of that painting for its creators.

We live in a disputatious age, and people are now much more likely to argue that any opinion, however widely held, is merely relative. (Although the view that any opinion is relative sounds suspiciously absolute.) The BBC has a long-running radio programme of which most people will be aware, called Desert Island Discs. After choosing the eight records they would want to have with them on a lonely desert island, guests are invited to select a single book, “apart from Shakespeare and the Bible, which are already provided”. Given this permanent provision, many people find the programme rather quaint and out of touch with the modern age. But of course when the programme began, even more people than now would have chosen one of those items if it were not provided. They have been, if you like, the sacred texts of Western culture, our myths.

A myth, as is often pointed out, is not simply an untrue story, but expresses truth on a deeper level than its surface meaning. Many of Shakespeare’s plots are derived from traditional, myth-like stories, and I don’t need to rehearse here any of what has been said about the truth content of the Bible. It will be objected, of course, that since fewer people would now want these works for their desert island, there is a strong case for believing that the sacred, or not-so-sacred, status of the works is a purely relative matter. Yes – but only to an extent. There’s no escaping their central position in the history and origins of our culture. Thinking of that crumbling book, as it nestles in the lunar dust, it seems to me that the truths it contains possess – if in a rather different way – something of the absolute character of the truths that are also to be found in the chemical composition of the dust around it. Maybe those future discoverers will be able to decode one but not the other; but that is a fact about them, and not about the Shakespeare.

(Any comments supporting either absolutism or relativism welcome.)