What a Coincidence!

My title is an expression you hear quite often, the exclamation mark denoting how surprising it seems when, for example, you walk into a shop and find yourself behind your friend in the queue (especially if you were just thinking about her), or if perhaps the person at the next desk in your office turns out to have the same birthday as you.

But by considering the laws of probability you can come to the conclusion that such things are less unlikely than they seem. Here’s a way of looking at it: suppose you use some method of generating random numbers, say between 0 and 100, and then plot them as marks on a scale. You’ll probably find blank areas in some parts of the scale, and tightly clustered clumps of marks in others. It’s sometimes naively assumed that, if the numbers are truly random, they should be evenly spread across the scale. But a simple argument shows this to be mistaken: there are in fact relatively few ways to arrange the marks evenly, but a myriad ways of distributing them irregularly. Therefore, by elementary probability, it is overwhelmingly likely that any random arrangement will be of the irregular and clumped sort.

To satisfy myself, I’ve just done this exercise – and to make it more visual I have generated the numbers as 100 coordinate pairs, so that they are spread over a square. Already it looks gratifyingly clumpy, as probability theory predicts. So, to stretch and reapply the same idea, you could say it’s quite natural that contingent events in our lives aren’t all spaced out and disjointed from one another in the way we might naively expect, but end up being apparently juxtaposed and connected in ways that seem surprising to us.
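If you’d like to try the exercise yourself, here’s a minimal sketch of it in Python – the choice of language, and of the standard random module as the generator, is mine, just one way among many of producing the points:

```python
import random

# A rough reconstruction of the exercise described above: scatter 100
# random points over a unit square, then show how unevenly they fall by
# measuring each point's distance to its nearest neighbour.
points = [(random.random(), random.random()) for _ in range(100)]

def nearest_neighbour_distance(p, all_points):
    """Distance from point p to the closest of the other points."""
    return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
               for q in all_points if q is not p)

distances = sorted(nearest_neighbour_distance(p, points) for p in points)
print(f"closest pair of points: {distances[0]:.4f} apart")
print(f"most isolated point:    {distances[-1]:.4f} from its neighbour")
print("(an evenly spaced 10 x 10 grid would put every point ~0.1 from its neighbour)")
```

On a typical run the closest pair of points lie far nearer together than the even spacing naive intuition expects – the clumps, in numerical form.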

Isaac Asimov, the science fiction writer, put it more crisply:

People are entirely too disbelieving of coincidence. They are far too ready to dismiss it and to build arcane structures of extremely rickety substance in order to avoid it. I, on the other hand, see coincidence everywhere as an inevitable consequence of the laws of probability, according to which having no unusual coincidence is far more unusual than any coincidence could possibly be. (From The Planet that Wasn’t, originally published in The Magazine of Fantasy and Science Fiction, May 1975)

All there is to it?

So there we have the standard case for reducing what may seem like outlandish and mysterious coincidences to the mere operation of random chance. I have to admit, however, that I’m not entirely convinced by it. I have repeatedly experienced coincidences in my own life, from the trivial to the really pretty surprising – in a moment I’ll describe some of them. What I have noticed is that they often don’t have the character of being just random pairs or clusters of simple happenings, as you might expect, but seem to be linked to one another in strange and apparently meaningful ways, or to associate themselves with significant life events. Is this a mere subjective illusion, or could there be some hidden, organising principle governing happenings in our lives?

Brian Inglis, from the cover of Coincidence

I don’t have an answer to that, but I’m certainly not the first to speculate about the question. This post was prompted by a book I recently read, Coincidence by Brian Inglis*. Inglis was a distinguished and well-liked journalist in the last century, having been a formative editor of The Spectator magazine and a prolific writer of articles and books. He was also a television presenter: those of a certain age may remember a long-running historical series on ITV, All Our Yesterdays, which Inglis presented. In addition, to the distaste of some, he wrote quite widely on paranormal phenomena.

The joker

In Coincidence he draws on earlier speculators about the topic, including the Austrian zoologist Paul Kammerer, who, after being suspected of scientific fraud in his research into amphibians, committed suicide in 1926. Kammerer was an enthusiastic collector of coincidence stories, and tried to provide a theoretical underpinning for them with his idea of ‘seriality’, which had some influence on Jung’s notion of synchronicity, in which meaning is placed alongside causality in its power to determine events. Kammerer also attracted the attention of Arthur Koestler, who figures in one of my previous posts. Koestler gave an account of the fraud case which was sympathetic to Kammerer, in The Case of the Midwife Toad. Koestler was also fascinated by coincidences and wrote about them in his book The Roots of Coincidence. Inglis, in his own book, recounts many surprising coincidences from ordinary lives. Many of his subjects have the feeling that there is some sort of capricious organising spirit behind these confluences of events, whom Inglis playfully personifies as ‘the joker’.

This putative joker certainly seems to have had a hand in my own life a number of times. Thinking of the subtitle of Inglis’s book (‘A Matter of Chance – or Synchronicity?’), I’d say the latter seems to be a factor with me. I have been so struck by the apparent significance of some of my own coincidences that I have recorded quite a number of them. First, here’s a simple example which shows that ‘interlinking’ tendency which occurs so often. (Names are changed in the accounts that follow.)

My own stories

From about 35 years ago: I spend an evening with my friend Suzy. We talk for a while about our mutual acquaintance Robert, whom we have both lost touch with; neither of us has seen him for a couple of years. Two days later, I park my car in a crowded North London street and Robert walks past just as I get out of the car, and I have a conversation with him. And then, I subsequently discover, the next day Suzy meets him quite by chance on a railway station platform. I don’t know whether the odds against this could be calculated, but they would be pretty huge. Each of the meetings, coming so soon after the conversation, would be unlikely in itself, especially as both took place in crowded inner London. And the pair of coincidences shows this strange interlinking that I mentioned. But I have more examples which are linked to one another in an even more elaborate way, as well as being attached to significant life events.

In 1982 I decided that, after nearly 14 years, it was time to leave the first company I had worked for long-term; let’s call it ‘company A’. During my time with them, a while before this, I’d shared a flat with a couple of colleagues for 5 years. At one stage we had a vacancy in the flat and advertised at work for a third tenant. A new employee of the company – we’ll call him Tony McAllister – quickly showed an interest. We felt a slight doubt about the rather pushy way he did this, pulling down our notice so that no one else would see it. But he seemed pleasant enough, and joined the flat. We should have listened to our doubts – he turned out to be definitely the most uncongenial person I have ever lived with. He consistently avoided helping with any of the housework and other tasks around the flat, and delighted in dismantling the engine of his car in the living room. There were other undesirable personal habits – I won’t trouble you with the details. Fortunately it wasn’t long before we all left the flat, for other reasons.

Back to 1982, and my search for a new job. A particularly interesting-sounding opportunity came up, in a different area of work, with another large company – company B. I applied and got an interview with a man who would be my new boss if I got the job: we’ll call him Mark Cooper. He looked at my CV. “You worked at company A – did you know Tony McAllister? He’s one of my best friends.” Putting on my best glassy grin, I said that I did know him. And I did go on to get the job. Talking subsequently, we both eventually recalled that Mark had actually visited our flat once, very briefly, with Tony, and we’d met fleetingly. That would have been five years or so earlier.

About nine months into my work with company B I saw a job advertised in the paper while I was on the commuter train. I hadn’t been looking for a job, and the ad just happened to catch my eye as I turned the page. It was with a small company (company C), with requirements very relevant to what I was currently doing, and sounding really attractive – so I applied. While I was awaiting the outcome of this, I heard that my present employer, company B, was to stop investing in my current area of work, and I was moved to a different position. I didn’t like the new job at all, and so of course was pinning my hopes on the application I’d already made. However, oddly, the new job involved relocating to a different building, where I was given an office with a window directly overlooking the building in which company C was based.

This seemed a good omen – and I was subsequently given an interview, and then a second one, with directors of company C. At the second, my interviewer, ‘Tim Newcombe’, seemed vaguely familiar, but I couldn’t place him and thought no more of it. He evidently didn’t know me. Once again, I got the job: apparently it had been a close decision between me and one other applicant, from a field of about 50. And it wasn’t long before I found out why Tim seemed familiar: he was in fact married to someone I knew well in connection with some voluntary work I was involved with. On one occasion, I eventually realised, I had visited her house with some others and had very briefly met Tim. I went on to work for company C for nearly 12 years, until it disbanded. Subsequent to this both Tim and I worked on our own account, and we collaborated on a number of projects.

So far, therefore, two successive jobs where, for each, I was interviewed by someone whom I eventually realised I had already met briefly, and who had a strong connection to someone I knew. (In neither case was the connection related to the area of work, so that isn’t an explanation.)

The saga continues

A year or two after leaving company B, I heard that Mark Cooper had moved to a new job in company D, and in fact visited him there once in the line of work. Meanwhile, ten years after I had started the job in company C – and while I was still doing it – my wife and I, wanting to move to a new area, found and bought a house there (where we still live now, more than 20 years later). I then found out that the previous occupants were leaving because the father of the family had a new job – with, it turned out, company D. And on asking him more about it, it transpired that he was going to work with Mark Cooper, making an extraordinarily neat loop back to the original coincidence in the chain.

I’ve often mused on this striking series of connections, and wondered if I was fated always to encounter some bizarre coincidence every time I started new employment. However, after company C, I worked freelance for some years, and then got a job in a further company (my last before retirement). This time, there was no coincidence that I was aware of. But now, just in the last few weeks, that last job has become implicated in a further unlikely connection. This time it’s my son who has been looking for work. He told me about a promising opportunity he was going to apply for. I had a look at the company website and was surprised to see among the pictures of employees a man who had worked in the same office as me for the last four years or so – from the LinkedIn website I discovered he’d moved on a month after I retired. My son was offered an initial telephone interview – which (almost inevitably) turned out to be with this same man.

In gullible mode, I wondered to myself whether this was another significant coincidence. Well, whether I’m gullible or not, my son did go on to get the job. I hadn’t worked directly with the interviewer in question, and only knew him slightly; I don’t think he was aware of my surname, so I doubt that he realised the connection. My son certainly didn’t mention it, because he didn’t want to appear to be currying favour in any dubious way. And in fact this company that my son now works in turns out to have a historical connection with my last company – which perhaps explains the presence of his interviewer in it. But neither I nor my son was aware of any of this when he first became interested in the job.

Just one more

I’m going to try your patience with just one more of my own examples, and this involves the same son, but quite a few years back – in fact when he was due to be born. At the time our daughter was 2 years old, and if I was to attend the coming birth she would need to be babysat by someone. One friend, who we’ll call Molly, said she could do this if it was at the weekend – so we had to find someone else for a weekday birth. Another friend, Angela, volunteered. My wife finally started getting labour pains, a little overdue, one Friday evening. So it looked as if the baby would arrive over the weekend and Molly was alerted. However, untypically for a second baby, this turned out to be a protracted process. By Sunday the birth started to look imminent, and Molly took charge of my daughter. But by the evening the baby still hadn’t appeared – we had gone into hospital once but were sent home again to wait. So we needed to change plans, and my daughter was taken to Angela, where she would stay overnight.

My son was finally born in the early hours of Monday morning, which was May 8th. And then the coincidence: it turned out that both Molly and Angela had birthdays on May 8th. What’s nice about this one is that it is possible to calculate the odds. There is that often-quoted statistic that if there are 23 or more people in a room there is a greater-than-evens chance that at least two of them will share the same birthday. 23 seems a low number – but I’ve been through the maths myself, and it is so. However in this case, it’s a much simpler calculation: the odds would be 1 in 365 × 365 (ignoring leap years for simplicity), which is 133,225 to 1 against. That’s unlikely enough – but once again I don’t feel that the calculations tell the full story. The odds I’ve worked out apply where any three people are taken at random and found all to share the same birthday. In this case we have the coincidence clustered around a significant event, the actual day of birth of one of them – and that seems to me to add an extra dimension that can’t so easily be quantified.
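For anyone who wants to check the maths, here’s a quick sketch of both calculations in Python (purely illustrative – the 365-day year is the same simplification used above):

```python
from math import prod

# Checking the two calculations above, ignoring leap years as in the text.

# 1. The classic result: the chance that at least two of n people in a
#    room share a birthday exceeds 1/2 once n reaches 23.
def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday."""
    return 1 - prod((365 - k) / 365 for k in range(n))

print(f"23 people: P = {p_shared_birthday(23):.4f}")  # ~0.5073 - just over evens

# 2. The simpler case in the story: the chance that two particular people
#    (Molly and Angela) both have their birthday on one specific day.
print(f"both on one fixed day: 1 in {365 * 365:,}")   # 1 in 133,225
```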

Malicious streak

Well, there you have it – random chance, or some obscure organising principle beyond our current understanding? Needless to say, that’s speculation which splits opinion along the lines I described in my post about the ‘iPhobia’ concept. As an admitted ‘iclaustrophobe’, I prefer to keep an open mind on it. But to return to Brian Inglis’s ‘joker’: Inglis notes that this imagined character seems to display a malicious streak from time to time; he quotes an example where estranged lovers are brought together by coincidence in awkward, and ultimately disastrous, circumstances. And add to that the observation of some of those looking into the coincidence phenomenon that their interest seems to attract further coincidences: when Arthur Koestler was writing about Kammerer he described his life as suddenly beset by a “meteor shower” of coincidences, as if, he felt, Kammerer were emphasising his beliefs from beyond the grave.

With both of those points in mind, I’d like to offer one further story. It was told to me by Jane O’Grady (real name this time), and I’m grateful to her for allowing me to include it here – and also for going to some trouble to confirm the details. Jane is a writer, philosopher and teacher. One day in late 1991, she and her then husband, philosopher Ted Honderich, gave a lunch to which they invited Brian Inglis. His book on coincidences – the one I’ve just read – had been published fairly recently, and a good part of their conversation was a discussion of that topic. A little over a year later, in early 1993, Jane was teaching a philosophy A-level class. After a half-time break, one of the students failed to reappear. His continuing absence meant that Jane had to give up waiting and carry on without him. He had shown himself to be somewhat unruly, and so this behaviour seemed to her at first to be irritatingly in character.

And so when he did finally appear, with the class nearly over, Jane wondered whether to believe his proffered excuse: he said he had witnessed a man collapsing in the street and had gone to help. But it turned out to be perfectly true. Unfortunately, despite his intervention, nothing could be done and the man had died. The coincidence, as you may have guessed, lay in the identity of the dead man. He was Brian Inglis.


*Brian Inglis, Coincidence: A Matter of Chance – or Synchronicity? Hutchinson, 1990

Are We Deluded?

Commuting days until retirement: 19

My last post searched, somewhat uncertainly, for a reason to believe that we are in a meaningful sense free to make decisions – to act spontaneously in some way that is not wholly and inevitably determined by the state of the world before we act: the question of free will, in other words. In a comment, bloggingisaresponsibility referred me to the work of Sam Harris, a philosopher and neuroscientist who argues cogently for the opposite position.

Sam Harris (Wikimedia/Steve Jurvetson)

Harris points to examples of cases where someone can be mistaken about how they came to a certain decision: it’s well known that under hypnosis a subject can be told to take an action in response to a prompt, after having been woken from the hypnotic trance. ‘When I clap my hands you will open the window.’ When the subject duly carries out the command, and is asked about why she took the action, she may say that the room was feeling stuffy or some such, and give every sign of genuinely believing that this was the motive.

And I can think of some slightly unnerving examples from my own personal life where it has become clear over a period of time that all the behaviour of someone I know is aimed towards a certain outcome, while the intentions that they will own up to – quite honestly, it appears – are quite different.

So I’d accept it as undeniable that we can believe ourselves to be making a free choice, when the real forces driving our actions are unknown to us. But it’s one thing to claim that we can be mistaken about what is driving us towards this or that action, and quite another to maintain that we are systematically deluded about what it is to make choices in general. So what do I mean by choices?

I argued in the last post that genuine choices are not to be identified with the sort of random, meaningless bodily movements that a scientist might be able to study and analyse in a laboratory. When we truly exercise what we might call our will, we are typically weighing up a number of alternatives and deciding what might seem to us the ‘best’ one. Typically we may be trying to arbitrate between conflicting desires: do I stick to my diet and feel healthy, or give in and be seduced by the jumbo gourmet burger and chips?  Or you can read in any newspaper about men or women who have sacrificed a lifetime of domestic happiness for the promise of the short-lived affair that satisfies their cravings. (You don’t of course read about those who made the other choice.)

I hope that gives a flavour of what it really is to exercise choice: it’s all about subjective feelings – about uncertainly picking our way through an incredibly varied mental landscape of desires, emotions, pain, pleasure, knowledge and learnt experience – and of course making conscious decisions about where to place our steps. It seems to me that the arguments of determinists such as Harris would be irrefutable if only we were insentient robots, which we are not.

How deluded are we?

But Harris has an answer to that argument. We are not just deluded about the spontaneity of our actions:

It is not that free will is simply an illusion – our experience is not merely delivering a distorted view of reality. Rather, we are mistaken about our experience. Not only are we not as free as we think we are – we do not feel as free as we think we do. Our sense of our own freedom results from our not paying close attention to what it is like to be us. The moment we pay attention, it is possible to see that free will is nowhere to be found, and our experience is perfectly compatible with this truth. Thoughts and intentions simply arise in the mind. What else could they do? The truth about us is stranger than many suppose: The illusion of free will is itself an illusion. The problem is not merely that free will makes no sense objectively (i.e., when our thoughts and actions are viewed from a third-person point of view); it makes no sense subjectively either. (From Free Will – Harris’s italics)

‘Thoughts and intentions simply arise in the mind.’ Do they? Well, we have to admit that they do, all the time. We don’t generally decide what we are going to dream about – as one example – and Harris gives many other instances of actions taken in response to thoughts that ‘just arise’. But does this cover every willed, considered decision? I don’t think it does, although Harris argues otherwise.

But the key sentence here, for me, is: ‘The illusion of free will is itself an illusion.’ – and the italics indicate that it is for Harris too. We may think we have the impression that we are exercising our wills, but we don’t. The impression is an illusion too.* Does that make sense to you? It doesn’t to me. But it’s very much in the spirit of a growing movement which espouses a particular way of dealing with our subjective nature. I think Daniel Dennett must be one of the pioneers: in a post two years ago I contested his arguments that qualia, the elements that comprise our conscious experience, do not exist.

Here’s another writer, Susan Blackmore, in a compilation from the Edge website where the contributors nominate ideas which they think should become extinct. Blackmore is a psychologist and former psychic researcher turned sceptic, and her choice for the dustbin is ‘The Neural Correlates of Consciousness’. She argues that, while much cutting edge research effort is going into the search for the biological processes that are the neural counterpart of consciousness, this is a wild goose chase – they’ll never be found. Well, so far I agree, but I suspect for very different reasons.

Consciousness is not some weird and wonderful product of some brain processes but not others. Rather, it’s an illusion constructed by a clever brain and body in a complex social world. We can speak, think, refer to ourselves as agents, and so build up the false idea of a persisting self that has consciousness and free will.

There can’t be any neural correlates of consciousness, says Blackmore, because there’s nothing for the neural processes to be correlated with. So here we have it again, this strange conclusion that flies against common sense. Well of course if a philosophical or scientific idea is incompatible with common sense that doesn’t necessarily disqualify it from being worth serious consideration. But in this case I believe it goes much, much further than that.

Let’s just stop and examine what is being claimed. We believe we have a private world of subjective conscious impressions; but that belief is based on an illusion – we don’t have such a world. But an illusion is itself a subjective experience. How can the broad class of things of which illusions are one subclass be itself an illusion? The notion is simply nonsense. You could only rescue it from incoherence by saying that illusions could be described as data in a processing machine (like a brain) which embody false accounts of what they are supposed to represent.

Imagine one of those systems which reads car number plates and measures average speeds over a stretch of road. Suppose we somehow got into the works and caused the numbers to be altered before they were logged, so that no speeding tickets were issued. Could we then say that the system was suffering an illusion? It would be a very odd way of speaking – because illusions are experiences, not scrambled data. Having an illusion implies consciousness (which involves something I have written about before – intentionality).  Just as Descartes famously concluded that he couldn’t doubt the existence of his doubting, we can’t be deluded about the experience of being deluded.

History repeats

The universe according to Aristotle (mysearch.org.uk)

Here’s an example of how we can have illusions about the nature of the world: it was once an unquestioned belief that our planet was stationary and the sun orbited around it. Through objective measurement and logical analysis we now know that is wrong. But people thought this because it felt like it – our beliefs start with subjective experience (which we don’t have, according to the view I’m criticising). But of course a whole established world-view was based around this illusion. We are told that when one of the proponents of the new conception – Galileo – discovered corroborating evidence through his telescope, in the form of satellites orbiting Jupiter, supporters of the status quo refused to look into the telescope. (The facts of the story may in reality be a little different.) But it nevertheless illustrates the extremity of the measures which the believers in an established order may take in order to protect it.

So now we have a 21st century version of that phenomenon. Our objective knowledge of the brain as an electrochemical machine can’t, even in principle, explain the existence of subjective experiences. If we are not to admit that our account of the world is seriously incomplete, a quick fix is simply to deny that this messy subjectivity is anything real, and conveniently ignore whether we are making any sense in doing so.

A Princeton psychologist, Michael Graziano, who researches into consciousness, was quoted in a recent issue of New Scientist magazine, referring to what philosopher David Chalmers called ‘the hard problem’ – how and why the brain should give rise to conscious awareness at all:

“There is no hard problem,” says Graziano. “There is only the question of how the brain, an information-processing device, concludes and insists it has consciousness. And that is a problem of information processing. To understand that process fully will require [scientific experiments]”**.

So this wholly incoherent notion – of conscious experience as an illusion – is taken as the premise for a scientific investigation. And look at the language: it’s not you or I who are insisting we are conscious, but ‘the brain’. In this very defensive objectivisation of the terms used lies the modern equivalent of the 17th century churchmen who supposedly turned away from the telescope. If we only take care to avoid any mention of the subjective, we can reassure ourselves that none of this inconvenient consciousness stuff really exists – only in the ravings of a heretic would such an idea be entertained. And the scientific hegemony is spared the embarrassment of a province it doesn’t look like being able to conquer.

But free will? Even if I have convinced you that our subjective nature is real, that question may still be open. But as I mentioned before, I think the determinism arguments would only have irresistible force if we were insentient creatures, and I have tried to underline the fact that we are not. Our subjective world is the most immediate and undeniable reality of our experience – indeed it is our experience. It’s there, in that world, that we seem to be free, and in which libertarians like myself believe we are free. Not surprisingly, it’s that world whose reality Harris is determined to deny. My contention is that, in doing so, he joins others in the fraternity of uncompromising physicalists and, like them, fatally undermines his own position.


*I haven’t explicitly distinguished between what I mean by illusion and delusion. Just to be clear: an illusion is experiencing something that appears other than it is. A delusion would be when we believe it to be as it appears. So while, for example, Harris would admit to experiencing what he believes to be the illusion of freewill, he would not admit to being deluded by it. But he would of course claim that I and many others are deluded.

**A stable mind is a conscious mind, in New Scientist 11 April 2015, p10. I did find an article for the New York Times by Graziano in which he addresses more directly some of the objections I have raised. But for the sake of brevity I’ll just mention that in that article I believe he simply falls into the same conceptual errors that I have already described.

Freedom and Purpose

Commuting days until retirement: 34

Retirement, I find, involves a lot of decisions. This blog shows that the important one was taken over two years ago – but there have been doubts about that along the way. And then, as the time approaches, a whole cluster of secondary decisions loom. Do I take my pension income by this method or that method? Can I phase my retirement and continue part-time for a while? (That one was taken care of for me – the answer was no. I felt relieved; I didn’t really want to.)  So I am free to make the first of these decisions, but not the second. And that brings me to what this post is about: what it means when we say we are ‘free’ to make a decision.

I’m not referring to the trivial sense, in which we are not free if some external factor constrains us, as with my part-time decision. It’s that more thorny philosophical problem I’m chasing, namely the dilemma as to whether we can take full responsibility as the originators of our actions; or whether we should assume that they are an inevitable consequence of the way things are in the world – the world of which our bodies and brains are a part.

It’s a dilemma which seems unresolved in modern Western society: our intuitive everyday assumption is that the first is true; indeed our whole system of morals – and of law and justice – is founded on it: we are individually held responsible for our actions unless constrained by external circumstances, or perhaps some mental dysfunction that we cannot help. Yet in our increasingly secular society, majority educated opinion drifts towards the materialist view – that the traditional assumption of freedom of the will is an illusion.

Any number of books have been written on how these approaches might be reconciled; I’m not going to get far in one blog post. But it does seem to me that this concept of freedom of action is far more elusive than is often accepted, and that facile approaches to it often end up by missing the point altogether. I would just like to try and give some idea of why I think that.

Early in Ian McEwan’s novel Atonement, the child writer Briony finds herself alone in a quiet house, in a reflective frame of mind:

Briony Tallis depicted on the cover of Atonement

She raised one hand and flexed its fingers and wondered, as she had sometimes before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge. She brought her forefinger closer to her face and stared at it, urging it to move. It remained still because she was pretending, she was not entirely serious, and because willing it to move, or being about to move it, was not the same as actually moving it. And when she did crook it finally, the action seemed to start in the finger itself, not in some part of her mind. When did it know to move, when did she know to move it?  There was no catching herself out. It was either-or. There was no stitching, no seam, and yet she knew that behind the smooth continuous fabric was the real self – was it her soul? – which took the decision to cease pretending, and gave the final command.

I don’t know whether, at time of writing, McEwan knew of the famous (or infamous) experiments of Benjamin Libet, some 18 years before the book was published. McEwan is a keen follower of scientific and philosophical ideas, so it’s quite likely that he did. Libet, who had been a neurological researcher since the early 1960s, designed a seminal series of experiments in the early eighties in which he examined the psychophysiological processes underlying the experience McEwan evokes.

Subjects were hooked up to detectors of brain impulses, and then asked to press a key or take some other detectable action at a moment of their own choosing, during some given period of time. They were also asked to record the instant at which they consciously made the decision to take action, by registering the position of a moving spot on an oscilloscope.

The most talked about finding of these experiments was not only that there was an identifiable electrical brain impulse associated with each decision, but that it generally occurred before the reported moment of the subject’s conscious decision. And so, on the face of it, the conclusion to be drawn is that, when we imagine ourselves to be freely taking a decision, it is really being driven by some physical process of which we are unaware; ergo free will is an illusion.

Benjamin Libet

But of course it’s not quite that simple. In the course of his experiments Libet himself found that sometimes there was an impulse looking like the initiation of an action which was not actually followed by one. It turned out that in these cases the subject had considered moving at that moment but decided against it; so it’s as if, even when there is some physical drive to action, we may still have the freedom to veto it. Compare McEwan’s Briony: ‘It remained still because she was pretending, she was not entirely serious, and because willing it to move, or being about to move it, was not the same as actually moving it.’ And this description is one that I should think we can all recognise from our own experience.

There have been other criticisms: if a subject may be deluded about when their actions are initiated, how reliable can their assessment be of exactly when they made a decision? (This from arch-physicalist Daniel Dennett). We feel we are floating helplessly in some stirred-up conceptual soup where objective and subjective measurements are difficult to disentangle from one another.

But you may be wondering all this time what these random finger crookings and key pressings have to do with my original question of whether we are free to make the important decisions which can shape our lives. Well, I hope you are, because that’s really the point of my post. There’s a big difference between these rather meaningless physical actions and the sorts of voluntary decisions that really interest us. Most, if not all, significant actions we take in our lives are chosen with a purpose. Philosophical reveries like Briony’s apart, we don’t sit around considering whether to move our finger at this moment or that moment; such minor bodily movements are normally triggered quite unconsciously, and generally in the pursuit of some higher end.

Rather, before opting for one of the paths open to us, there is some mental process of weighing up and considering what the result of each alternative might be, and which outcome we think it best to bring about. This may be an almost instantaneous judgement (which way to turn the steering wheel) or a more extended consideration of, for example, whether I should arrange my finances to my own maximum advantage, or to that of my family after my death. In either case I am constrained by a complicated network of beliefs, prejudices and instincts, some of which I am probably only slightly consciously aware of, if at all.

Teasing out the meaning of what it is for a decision to be ‘free’ in this context is evidently very difficult, and certainly not something I’m going to try and achieve here, even if I could. But what is clear is that an isolated action like crooking your finger or pressing a button at some random moment, and for no specific purpose, has very little in common with the decisions by which we order our lives. It’s extremely difficult to imagine any objective experiment which could reliably investigate the causes of those more significant choices.

David Hume

Immanuel Kant

So maybe we are driven towards the philosopher Hume’s view that ‘reason is, and ought only to be the slave of the passions’. But I find the Kantian view attractive – that we can objectively deduce a morally correct course of action from our own existence as rational, sentient beings. Perhaps our freedom somehow consists in our ability to navigate a course between these two – to recognize when our ‘passions’ are driving us in the ‘right’ direction, and when they are not. Or that when we have conflicting instincts, as we often do, there is the potential freedom to rationally adjudicate between them.

Some have attempted to carve out a space for freewill in a supposedly deterministic universe by pointing out the randomness of quantum events and suchlike as the putative first causes of action. But this is an obvious fallacy. If our actions were bound by such meaningless occurrences, there would be no sense in which we could be considered free at all. However this perspective does, it seems to me, throw some light on the Libet experiments. If we are asked to take random, meaning-free decisions, is it surprising that we then appear to be subjugating ourselves to whatever random, purposeless events might be taking place in our nervous system?

Ian McEwan must have had in mind the dichotomy between meaningless, consequence-free actions and significant ones, and how we can ascribe responsibility. The plot of Atonement, as its title hints, eventually hinges on the character Briony’s own sense of responsibility for those of her actions that are significant in a broader perspective. But as we are introduced to her, McEwan has her puzzling over the source of those much more limited impulses that do not spring from any sort of rationale.

Recently I wrote about Martin Gardner, a strict believer in scientific rigour but also in metaphysical truths not capable of scientific demonstration, and his approach appeals to me. Freewill, he asserts, is inseparable from consciousness:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will. Persons completely paralyzed can decide what to think about or when to blink their eyes. Nor can I imagine myself having free will without being conscious. (From The Whys of a Philosophical Scrivener, Postscript)

At the beginning of his chapter on free will he refers to Wittgenstein’s doctrine that only those questions which can meaningfully be asked can have answers, and what remains cannot be spoken about, continuing:

The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

The chapter examines a wide range of views before restating Gardner’s own position.  ‘Indeed,’ he says, ‘it was with a feeling of enormous relief that I concluded, long ago, that free will is an unfathomable mystery.’

It will be with another feeling of enormous relief that I will soon have a taste of freedom of a kind I haven’t before experienced; but will I be truly free? Well, I will at least have more time to think (freely or otherwise) about it.

Quantum Immortality

Commuting days until retirement: 91

A theme underlying some of my recent posts has been what (if anything) happens to us when we die. I’d like to draw this together with some other thoughts from eight months ago, when I was gazing at the roof of Exeter Cathedral and musing on the possibility of multiple universes. This seemingly wild idea, beloved of fantasy and science fiction authors, is now increasingly accepted in the physics departments of universities as a serious model of reality. The idea of quantum immortality (explained below) is a link between these topics, and it was a book by the American physicist Max Tegmark, Our Mathematical Universe*, that got me thinking about it.

Max Tegmark

I won’t spend time looking at the theory of multiple universes – or the Multiverse – at any length. I did explain briefly in my earlier post how the notion originally arose from quantum physics, and if you have an appetite for more detail there’s plenty in Wikipedia. There are a number of theoretical considerations which lead to the notion of a multiverse: Tegmark sets out four that he supports, with illustrations, in a Scientific American article. I’m just going to focus here on two of them, which, as Tegmark and others have speculated, could ultimately be different ways of looking at the same one. I’ll try to explain them very briefly.

The first approach: quantum divergence

It has been known since early in the last century that, where quantum physics allows a range of possible outcomes of some subatomic event, only one of these is actually observed. Experiments (for example the double slit experiment) suggest that the outcome is undetermined until an observation is made, whereupon one of the range of possibilities becomes the actual one that we find. In the phrase which represents the traditional ‘Copenhagen interpretation’ of this puzzle, the wave function collapses. Before this ‘collapse’, all the possibilities are simultaneously real – in the jargon, they exist ‘in superposition’.

But it was Hugh Everett in 1957 who first put forward another possibility which at first sight looks wildly outlandish now, and did so even more at the time: namely that the wave function never does collapse, but each possible outcome is realised in a different universe. It’s as if reality branches, and to observe a specific outcome is actually to find yourself in one of those branched universes.

The second approach: your distant twin

According to the most widely accepted theory of the creation of the universe, a phenomenon known as ‘inflation’ has the mathematical consequence that the cosmic space we now live in is infinite – it goes on for ever. And infinite space allows infinite possibilities. Statistics and probability undergo a radical transformation and start delivering certainties – a certainty, for example, that there is someone just like you, an unimaginable distance away, reading a blog written by someone just like me. And of course the someone who is reading may be getting bored with it and moving on to something else (just like you? – I hope not). But I can reassure myself that for all the doppelgangers out there who are getting bored there are just as many who are really fired up and preparing to click away at the ‘like’ button and write voluminous comments. (You see what fragile egos we bloggers have – in most universes, anyway.)

Pulling them together

But the point is, of course, that once again we have this bewildering multiplicity of possibilities, all of which claim a reality of their own; it all sounds strangely similar to the scenario posited by the first, quantum divergence approach. This similarity has been considered by Tegmark and other physicists, and Tegmark speculates that these two could be simply the same truth about the universe, but just approached from two different angles.

That is a very difficult concept to swallow whole; but for the moment we’ll proceed on the assumption that each of the huge variety of ramified possibilities that could follow from any one given situation does really exist, somewhere. And the differences between those possible worlds can have radical consequences for our lives, and indeed for our very existence. (As a previous post – Fate, Grim or Otherwise – illustrated.) Indeed, perhaps you could end up dead in one but still living in another.

Quantum Russian roulette

So if your existence branches into one universe where you are still living, breathing and conscious, and another where you are not, where are you going to find yourself after that critical moment? Since it doesn’t make sense to suppose you could find yourself dead, we must suppose that your conscious life continues into one of the worlds where you are alive.

This notion has been developed by Tegmark into a rather scary thought experiment (another version of which was also formulated by Hans Moravec some years earlier). Suppose we set up a sort of machine gun that fires a bullet every second. Only it is modified so that, at each second, some quantum mechanism like the decay of an atom determines, with a 50/50 probability, whether the bullet is actually fired. If it is not, the gun just produces a click. Now it’s the job of the intrepid experimenter, willing to take any risk in the cause of his work, to put his head in front of the machine gun.

According to the theory we have been describing, he can only experience those universes in which he will survive. Before placing his head by the gun, he’ll be hearing:
BangClickBangBangClickClickClickBang…  …etc

But with his head in place, it’ll be:
ClickClickClickClickClickClickClickClick…   …and so on.

Suppose he keeps his head there for half a minute: the probability of all the actions being clicks will be 1 in 2³⁰, or over a billion to one against. But it’s that one-in-a-billion universe, with the sequence of clicks only, that he’ll find himself in. (Spare a thought for the billion-plus universes in which his colleagues are dealing with the outcome, funerals are being arranged and coroners’ courts convened.)
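Out of curiosity, the outside view is easy to simulate. Here’s a small sketch in Python (the 50/50 quantum trigger and the 30-second run are as described above; the trial count is just an arbitrary choice of mine):

```python
import random

# The outside view of the thought experiment: each second the gun fires
# with probability 1/2 or merely clicks; a run lasts 30 seconds. We count
# how many of many simulated runs consist of clicks alone.
SECONDS = 30
TRIALS = 1_000_000  # an arbitrary number of repetitions

all_click_runs = sum(
    all(random.random() < 0.5 for _ in range(SECONDS))  # True if no bang
    for _ in range(TRIALS)
)

print(f"theoretical odds of 30 straight clicks: 1 in {2 ** SECONDS:,}")
print(f"all-click runs observed in {TRIALS:,} trials: {all_click_runs}")
```

In a million simulated runs you will almost certainly see no all-click sequence at all – which is exactly the point: the surviving experimenter’s experience is confined to that vanishingly rare branch.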

Real immortality

Things become more disconcerting still if we move outside the laboratory into the world at large. At the moment of any given person’s death, obviously things could have been different in such a way that they might have survived that moment. In other words, there is a world in which the person continues to live – and as we have seen, that’s the one they will experience. But if this applies to every death event, then – subjectively – we must continue to live into an indefinitely extended old age. Each of us, on this account, will find herself or himself becoming the oldest person on earth.

A natural reaction to this argument is that, intuitively, it can’t be right. What if someone finds themselves on a railway track with a train bearing down on them and no time to jump out of the way? Or, for that matter, terminally ill? And indeed Tegmark points out that, typically, death is the ultimate upshot of a series of non-fatal events (cars swerving, changes in body cells), rather than a single, once-and-for-all, dead-or-alive event. So perhaps we arrive at this unsettling conclusion only by considerably oversimplifying the real situation.

But it seems to me that what is compelling about considerations of this sort is that they do lead us to take a bracing, if slightly unnerving, walk on the unstable, crumbling cliff-edge which forms the limits of our knowledge. Which always leads me to the suspicion, as it did for JBS Haldane, that the world is ‘not only queerer than we suppose, but queerer than we can suppose’. And that’s a suitable thought on which to end this blogging year.


*Tegmark, Max, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Allen Lane/Penguin, 2014

The Mathematician and the Surgeon

Commuting days until retirement: 108

After my last post, which, among other things, compared differing attitudes to death and its aftermath (or absence of one) on the part of Arthur Koestler and George Orwell, here’s another fruitful comparison. It seemed to arise by chance from my next two commuting books, and each of the two people I’m comparing, as before, has his own characteristic perspective on that matter. Unlike my previous pair both could loosely be called scientists, and in each case the attitude expressed has a specific and revealing relationship with the writer’s work and interests.

The Mathematician

The first writer, whose book I came across by chance, has been known chiefly for mathematical puzzles and games. Martin Gardner was born in Oklahoma USA in 1914; his father was an oil geologist, and it was a conventionally Christian household. Although not trained as a mathematician, and going into a career as a journalist and writer, Gardner developed a fascination with mathematical problems and puzzles which informed his career – hence the justification for his half of my title.

Gardner as a young man (Wikimedia)

This interest continued to feed the constant books and articles he wrote, and he was eventually asked to write the Scientific American column Mathematical Games, which ran from 1956 until the mid 1980s and for which he became best known; his enthusiasm and sense of fun shine through the writing of these columns. At the same time he was increasingly concerned with the many types of fringe beliefs that had no scientific foundation, and was a founder member of CSICOP, the organisation dedicated to exposing and debunking pseudoscience. Back in February last year I mentioned one of its other well-known members, the flamboyant and self-publicising James Randi. By contrast, Gardner was mild-mannered and shy, averse from public speaking and never courting publicity. He died in 2010, leaving behind him many admirers and a two-yearly convention – the ‘Gathering for Gardner’.

Before learning more about him recently, and reading one of his books, I had known his name from the Mathematical Games column, and heard of his rigid rejection of things unscientific. I imagined some sort of skinflint atheist, probably with a hard-nosed contempt for any fanciful or imaginative leanings – however sane and unexceptionable they might be – towards what might be thought of as things of the soul.

How wrong I was. His book that I’ve recently read, The Whys of a Philosophical Scrivener, consists of a series of chapters with titles of the form ‘Why I am not a…’ and he starts by dismissing solipsism (who wouldn’t?) and various forms of relativism; it’s a little more unexpected that determinism also gets short shrift. But in fact by this stage he has already declared that

I myself am a theist (as some readers may be surprised to learn).

I was surprised, and also intrigued. Things were going in an interesting direction. But before getting to the meat of his theism he spends a good deal of time dealing with various political and economic creeds. The book was written in the mid 80s, not long before the collapse of communism, which he seems to be anticipating (Why I am not a Marxist). But equally he has little time for Reagan or Thatcher, laying bare the vacuity of their over-simplistic political nostrums (Why I am not a Smithian).

Soon after this, however, he is striding into the longer grass of religious belief: Why I am not a Polytheist; Why I am not a Pantheist – so what is he? The next chapter heading is a significant one: Why I do not Believe the Existence of God can be Demonstrated. This is the key, it seems to me, to Gardner’s attitude – one to which I find myself sympathetic. Near the beginning of the book we find:

My own view is that emotions are the only grounds for metaphysical leaps.

I was intrigued by the appearance of the emotions in this context: here is a man whose day job is bound up with his fascination for the powers of reason, but who is nevertheless acutely conscious of the limits of reason. He refers to himself as a ‘fideist’ – one who believes in a god purely on the basis of faith, rather than any form of demonstration, either empirical or through abstract logic. And if those won’t provide a basis for faith, what else is there but our feelings? This puts Gardner nicely at odds with the modish atheists of today, like Dawkins, who never tires of telling us that he too could believe if only the evidence were there.

But at the same time he is squarely in a religious tradition which holds that ultimate things are beyond the instruments of observation and logic that are so vital to the secular, scientific world of today. I can remember my own mother – unlike Gardner a conventional Christian believer – being very definite on that point. And it reminds me of some of the writings of Wittgenstein; Gardner does in fact refer to him,  in the context of the freewill question. I’ll let him explain:

A famous section at the close of Ludwig Wittgenstein’s Tractatus Logico-Philosophicus asserts that when an answer cannot be put into words, neither can the question; that if a question can be framed at all, it is possible to answer it; and that what we cannot speak about we should consign to silence. The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

This mirrors some of my own thoughts about that particular philosophical problem – a far more slippery one than those on either side of it often claim, in my opinion (I think that may be a topic for a future post). I can add that Gardner was also on the unfashionable side of the question which came up in my previous post – that of an afterlife; and again he holds this out as a matter of faith rather than reason. He explores the philosophy of personal identity and continuity in some detail, always concluding with the sentiment ‘I do not know. Do not ask me.’ His underlying instinct seems to be that there has to be something more than our bodily existence, given that our inner lives are so inexplicable from the objective point of view – so much more than our physical existence. ‘By faith, I hope and believe that you and I will not disappear for ever when we die.’ By contrast, Arthur Koestler, you may remember, wrote in his suicide note of ‘tentative hopes for a depersonalised afterlife’ – but, as it turned out, these hopes were based partly on the sort of parapsychological evidence which was anathema to Gardner.

And of course Gardner was acutely aware of another related mystery – that of consciousness, which he finds inseparable from the issue of free will:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will… Nor can I imagine myself having free will without being conscious.

He expresses utter dissatisfaction with the approach of arch-physicalists such as Daniel Dennett, who,  as he says,  ‘explains consciousness by denying that it exists’. (I attempted to puncture this particular balloon in an earlier post.)

Gardner in later life (Konrad Jacobs / Wikimedia)

Gardner places himself squarely within the ranks of the ‘mysterians’ – a deliberately derisive label applied by their opponents to those thinkers who conclude that these matters are mysteries which are probably beyond our capacity to solve. Among their ranks is Noam Chomsky: Gardner cites a 1983 interview with the grand old man of linguistics,  in which he expresses his attitude to the free will problem (scroll down to see the relevant passage).

The Surgeon

And so to the surgeon of my title, and if you’ve read one of my other blog posts you will already have met him – he’s a neurosurgeon named Henry Marsh, and I wrote a post based on a review of his book Do No Harm. Well, now I’ve read the book, and found it as impressive and moving as the review suggested. Unlike many in his profession, Marsh is a deeply humble man who is disarmingly honest in his account about the emotional impact of the work he does. He is simultaneously compelled towards,  and fearful of, the enormous power of the neurosurgeon both to save and to destroy. His narrative swings between tragedy and elation, by way of high farce when he describes some of the more ill-conceived management ‘initiatives’ at his hospital.

A neurosurgical operation (Mainz University Medical Centre)

The interesting point of comparison with Gardner is that Marsh – a man who daily manipulates what we might call physical mind-stuff – the brain itself – is also awed and mystified by its powers:

There are one hundred billion nerve cells in our brains. Does each one have a fragment of consciousness within it? How many nerve cells do we require to be conscious or to feel pain? Or does consciousness and thought reside in the electrochemical impulses that join these billions of cells together? Is a snail aware? Does it feel pain when you crush it underfoot? Nobody knows.

The same sense of mystery and wonder as Gardner’s; but approached from a different perspective:

Neuroscience tells us that it is highly improbable that we have souls, as everything we think and feel is no more or no less than the electrochemical chatter of our nerve cells… Many people deeply resent this view of things, which not only deprives us of life after death but also seems to downgrade thought to mere electrochemistry and reduces us to mere automata, to machines. Such people are profoundly mistaken, since what it really does is upgrade matter into something infinitely mysterious that we do not understand.

Henry Marsh

This of course is the perspective of a practical man – one who works emphatically at the coal face of neurology, and is far more familiar with the actual material of brain tissue than armchair speculators like me. While I was reading his book, deeply impressed though I was by his humanity and integrity, what came irreverently to mind was a piece of humour once told to me by a director of a small company I used to work for, which was closely connected to the medical industry. It was a sort of handy cut-out-and-keep guide to the different types of medical practitioner:

Surgeons do everything and know nothing. Physicians know everything and do nothing. Psychiatrists know nothing and do nothing. Pathologists know everything and do everything – but the patient’s dead, so it’s too late.

Grossly unfair to all of them, of course, but nonetheless funny, and perhaps containing a certain grain of truth. Marsh, belonging to the first category, perhaps embodies some of the aversion to dry theory that this caricature hints at: what matters to him ultimately, as a surgeon, is the sheer down-to-earth physicality of his work, guided by the gut instincts of his humanity. We hear from him about some members of his profession who seem aloof from the enormity of the dangers it involves, and able to proceed calmly and objectively with what he sees as almost the detachment of the psychopath.

Common ground

What Marsh and Gardner seem to have in common is the instinct that dry, objective reasoning only takes you so far. Both trust the power of their own emotions, and their sense of awe. Both, I feel, are attempting to articulate the same insight, but from widely differing standpoints.

Two passages, one from each book, seem to crystallise both the similarities and the differences between the approaches of these two men, both of whom seem to me admirably sane and perceptive, if radically divergent in many respects. First Gardner, emphasising, in a Wittgensteinian way, that describing how things appear to be is perhaps a more useful activity than attempting to pursue any ultimate reasons:

There is a road that joins the empirical knowledge of science with the formal knowledge of logic and mathematics. No road connects rational knowledge with the affirmations of the heart. On this point fideists are in complete agreement. It is one of the reasons why a fideist, Christian or otherwise, can admire the writings of logical empiricists more than the writings of philosophers who struggle to defend spurious metaphysical arguments.

And now Marsh – mystified, as we have seen, as to how the brain-stuff he manipulates daily can be the seat of all experience – having a go at reading a little philosophy in the spare time between sessions in the operating theatre:

As a practical brain surgeon I have always found the philosophy of the so-called ‘Mind-Brain Problem’ confusing and ultimately a waste of time. It has never seemed a problem to me, only a source of awe, amazement and profound surprise that my consciousness, my very sense of self, the self which feels as free as air, which was trying to read the book but instead was watching the clouds through the high windows, the self which is now writing these words, is in fact the electrochemical chatter of one hundred billion nerve cells. The author of the book appeared equally amazed by the ‘Mind-Brain Problem’, but as I started to read his list of theories – functionalism, epiphenomenalism, emergent materialism, dualistic interactionism or was it interactionistic dualism? – I quickly drifted off to sleep, waiting for the nurse to come and wake me, telling me it was time to return to the theatre and start operating on the old man’s brain.

I couldn’t help noticing that these two men – one unconventionally religious and the other not religious at all – seem between them to embody those twin traditional pillars of the religious life: faith and works.

On Being Set Free

Commuting days until retirement: 133

The underlying theme of this blog is retirement, and it will be fairly obvious to most of my readers by now – perhaps indeed to all three of you – that I’m looking forward to it. It draws closer; I can almost hear the ‘Happy retirement’ wishes from colleagues – some expressed perhaps through ever-so-slightly gritted teeth as they look forward to many more years in harness, while I am put out to graze. But of course there’s another side to that: they will also be keeping silent about the thought that being put out to graze carries with it the not-too-distant prospect of the knacker’s yard – something they rarely think about in relation to themselves.

Because in fact the people I work with are generally a lot younger than I am – in a few cases younger than my children. No one in my part of the business has ever actually retired, as opposed to leaving for another job. My feeling is that to stand up and announce that I am going to retire will be to introduce something alien and faintly distasteful into the prevailing culture, like telling everyone about your arthritis at a 21st birthday party.

The revolving telescope

For most of my colleagues, retirement, like death, is something that happens to other people. In my experience, it’s around the mid to late 20s that such matters first impinge on the consciousness – indistinct and out of focus at first, something on the edge of the visual field. It’s no coincidence, I think, that it’s around that same time that one’s perspective on life reverses, and the general sense that you’d like to be older and more in command of things starts to give way to an awareness of vanishing youth. The natural desire for what is out of reach reorientates its outlook, swinging through 180 degrees like a telescope on a revolving stand.

But I find that, having reached the sort of age I am now, it doesn’t do to turn your back on what approaches. It’s now sufficiently close that it is the principal factor defining the shape of the space you have available in which to organise your life, and you do much better not to pretend it isn’t there, but to be realistically aware. We have all known those who nevertheless keep their backs resolutely turned, and they often cut somewhat pathetic figures: a particular example I remember was a man (who would almost certainly be dead by now) who didn’t seem to accept his failing prowess at tennis as an inevitable corollary of age, but saw it rather as a series of inexplicable failures for which he should blame himself. And there are all those celebrities you see with skin stretched ever tighter over their facial bones as they bring in the friendly figure of the plastic surgeon to obscure the view of where they are headed.

Perhaps Ray Kurzweil, who featured in my previous post, is another example, with his 250 supplement tablets each day and his faith in the ability of technology to provide him with some sort of synthetic afterlife. Given that he has achieved a generous measure of success in his natural life, he perhaps has less need than most of us to seek a further one; but maybe it works the other way, and a well-upholstered ego is more likely to regard a continued existence as its right.

Enjoying the view

Old and Happy

Happiness is not the preserve of the young (Wikimedia Commons)

But the fact is that for most of us the impending curtailment of our time on earth brings a surprising sense of freedom. With nothing left to strive for – no anxiety about whether this or that ambition will be realised – some sort of summit is achieved. The effort is over, and we can relax and enjoy the view. More than one survey has found that people in their seventies are nowadays collectively happier than any other age group: here are reports of three separate studies between 2011 and 2014, in Psychology Today, The Connexion, and the Daily Mail. Those adverts for pension providers and so on, showing apparently radiant wrinkly couples feeding the ducks with their grandchildren, aren’t quite as wide of the mark as you might think.

Speaking for myself, I’ve never been excessively troubled by feelings of ambition, and have probably enjoyed a relatively stress-free, if perhaps less prosperous, life as a result. And the prospect of an existence where I am no longer even expected to show such aspirations is part of the attraction of retirement. But of course there remain those for whom the fact of extinction gives rise to wholly negative feelings, yet who are at the same time brave enough to face it squarely, without any psychological or cosmetic props. A prime example in recent literature is Philip Larkin, who seems to make frequent appearances in this blog. While famously afraid of death, he wrote luminously about it. Here, in his poem The Old Fools, he evokes images of the extreme old age which he never, in fact, reached himself:

Philip Larkin

Philip Larkin (Fay Godwin)

Perhaps being old is having lighted rooms
Inside your head, and people in them, acting.
People you know, yet can’t quite name; each looms
Like a deep loss restored, from known doors turning,
Setting down a lamp, smiling from a stair, extracting
A known book from the shelves; or sometimes only
The rooms themselves, chairs and a fire burning,
The blown bush at the window, or the sun’s
Faint friendliness on the wall some lonely
Rain-ceased midsummer evening.

Dream and reality seem to fuse at this ultimate extremity of conscious experience as Larkin portrays it; and it’s the snuffing out of consciousness that a certain instinct in us finds difficult to take – indeed, to believe in. Larkin, by nature a pessimist, certainly believed in it, and dreaded it. But cultural traditions of many kinds have not accepted extinction as inevitable: we are not obliviously functioning machines but the subjects of experiences like the ones Larkin writes about. As such, it has been held, we have immortal souls which transcend the gross physical world – so why should we not survive death? (Indeed, according to some creeds, why should we not have existed before birth?)

Timid hopes

Well, whatever immortal souls might be, I find it difficult to make out a case for individual survival, and this is perhaps the majority view in the secular culture I inhabit. It seems pretty clear to me that my own distinguishing characteristics are indissolubly linked to my physical body: damage to the brain, we know, can change the personality, and perhaps rob us of our memories and past experience, which most quintessentially define us as individuals. But even though our consciousness can be temporarily wiped out by sleep or anaesthetics, there remains the sense (for me, anyway) that, since we have no notion whatever of how to account for it in physical terms, some aspect of our experience could be independent of our bodily existence.

You may or may not accept both of these beliefs – the temporality of the individual and the transcendence of consciousness. But if you do, then the possibility seems to arise of some kind of disembodied, collective sentience, beyond our normal existence. And this train of thought always reminds me of the writer Arthur Koestler, who died by suicide in 1983 at the age of 77. An outspoken advocate of voluntary euthanasia, he’d been suffering in later life from Parkinson’s disease, and had then contracted a progressive, incurable form of leukaemia. His suicide note (which turned out to have been written several months before his death) included the following passage:

I wish my friends to know that I am leaving their company in a peaceful frame of mind, with some timid hopes for a de-personalised after-life beyond due confines of space, time and matter and beyond the limits of our comprehension. This ‘oceanic feeling’ has often sustained me at difficult moments, and does so now, while I am writing this.

Death sentence

In fact Koestler had, since he was quite young, been more closely acquainted with death than most of us. Born in Hungary, during his earlier career as a journalist and political writer he twice visited Spain during its civil war in the 1930s. He made his first visit as an undercover investigator of the Fascist movement, being himself at that time an enthusiastic supporter of communism. A little later he returned to report from the Republican side, but was in Malaga when it was captured by Fascist troops. By now Franco had come to know of his anti-fascist writing, and he was imprisoned in Seville under sentence of death.

Koestler portrayed on the cover of the book

In his account of this experience, Dialogue with Death, he describes how prisoners would try to block their ears to avoid the nightly sound of a telephone call to the prison, when a list of prisoner names would be dictated and the men later led out and shot. His book is illuminating on the psychology of these conditions, and the violent emotional ups and downs he experienced:

One of my magic remedies was a certain quotation from a certain work of Thomas Mann’s; its efficacy never failed. Sometimes, during an attack of fear, I repeated the same verse thirty or forty times, for almost an hour, until a mild state of trance came on and the attack passed. I knew it was the method of the prayer-mill, of the African tom-tom, of the age-old magic of sounds. Yet in spite of my knowing it, it worked…
I had found out that the human spirit is able to call upon certain aids of which, in normal circumstances, it has no knowledge, and the existence of which it only discovers in itself in abnormal circumstances. They act, according to the particular case, either as merciful narcotics or ecstatic stimulants. The technique which I developed under the pressure of the death-sentence consisted in the skilful exploitation of these aids. I knew, by the way, that at the decisive moment when I should have to face the wall, these mental devices would act automatically, without any conscious effort on my part. Thus I had actually no fear of the moment of execution; I only feared the fear which would precede that moment.

That there are emotional ‘ups’ at all seems surprising, but later he expands on one of them:

Often when I wake at night I am homesick for my cell in the death-house in Seville and, strangely enough, I feel that I have never been so free as I was then. This is a very strange feeling indeed. We lived an unusual life on that patio; the constant nearness of death weighed down and at the same time lightened our existence. Most of us were not afraid of death, only of the act of dying; and there were times when we overcame even this fear. At such moments we were free – men without shadows, dismissed from the ranks of the mortal; it was the most complete experience of freedom that can be granted a man.

Perhaps, in a diluted, much less intense form, the happiness of the over 70s revealed by the surveys I mentioned has something in common with this.

Koestler was possibly the only writer of the front rank ever to be held under sentence of death, and the experience informed his novel Darkness at Noon. It is the second in a trilogy of politically themed novels, and its protagonist, Rubashov, has been imprisoned by the authorities of an unnamed totalitarian state which appears to be a very thinly disguised portrayal of Stalinist Russia. Rubashov has been one of the first generation of revolutionaries in a movement which has hardened into an authoritarian despotism, and its leader, referred to only as ‘Number One’, is apparently eliminating rivals. Worn down by the interrogation conducted by a younger, hard-line apparatchik, Rubashov comes to accept that he has somehow criminally acted against ‘the revolution’, and eventually goes meekly to his execution.

Shades of Orwell

By the time he wrote the novel, Koestler, like so many intellectuals of that era, had made the journey from initial enthusiasm for Soviet communism to disillusion with, and opposition to, it. And reading Darkness at Noon, I was of course constantly reminded of Orwell’s Nineteen Eighty-Four, and the capitulation of Winston Smith as he comes to love Big Brother. Darkness at Noon predates 1984 by nine years, and has nowadays been somewhat eclipsed by Orwell’s much better-known novel. The two authors had met briefly during the Spanish civil war, where Orwell was actively involved in fighting against fascism, and met again and discussed politics around the end of the Second World War. It seems clear that Orwell, having written his own satire on the Russian revolution in Animal Farm, eventually wrote 1984 under the conscious influence of Koestler’s novel. But they were of course very different characters: you get the feeling that to Orwell, with his both-feet-on-the-ground Englishness, Koestler might have seemed a rather flighty and exotic creature.

Orwell (aka Eric Blair) from the photo on his press pass (NUJ/Wikimedia Commons)

In fact, during the period between the publications of Darkness at Noon and 1984, Orwell wrote an essay on Arthur Koestler – probably while he was still at work on Animal Farm. His view of Koestler’s output is mixed: on the one hand he admires Koestler as a prime example of the continental writers on politics whose views have been forged by hard experience in this era of political oppression – as opposed to English commentators who merely strike attitudes towards the turmoil in Europe and the East, while viewing it from a relatively safe distance. Darkness at Noon he regards as a ‘masterpiece’ – its common ground with 1984 is not, it seems, a coincidence. (Orwell’s review of Darkness at Noon in the New Statesman is also available.)

On the other hand he finds much of Koestler’s work unsatisfactory, a mere vehicle for his aspirations towards a better society. Orwell quotes Koestler’s description of himself as a ‘short-term pessimist’, but also detects a utopian undercurrent which he feels is unrealistic. His own views are expressed as something more like long-term pessimism, doubting whether man can ever replace the chaos of the mid-twentieth century with a society that is both stable and benign:

Nothing is in sight except a welter of lies, hatred, cruelty and ignorance, and beyond our present troubles loom vaster ones which are only now entering into the European consciousness. It is quite possible that man’s major problems will NEVER be solved. But it is also unthinkable! Who is there who dares to look at the world of today and say to himself, “It will always be like this: even in a million years it cannot get appreciably better?” So you get the quasi-mystical belief that for the present there is no remedy, all political action is useless, but that somewhere in space and time human life will cease to be the miserable brutish thing it now is. The only easy way out is that of the religious believer, who regards this life merely as a preparation for the next. But few thinking people now believe in life after death, and the number of those who do is probably diminishing.

In death as in life

Orwell’s remarks neatly return me to the topic I have diverged from. If we compare the deaths of the two men, they seem to align with their differing attitudes in life. Both died in the grip of a disease – Orwell succumbing to tuberculosis after his final, gloomy novel was completed, and Koestler escaping his leukaemia by suicide but still expressing ‘timid hopes’.

After the war Koestler had adopted England as his country and henceforth wrote only in English – most of his previous work had been in German. Being allowed a longer life than Orwell in which to pursue his writing, he moved on from politics to write widely in philosophy and the history of ideas, although never really becoming a member of the intellectual establishment. These are areas which you feel would always have been outside the range of the more down-to-earth Orwell, who was strongly moral, but severely practical. Orwell goes on to say, in the essay I quoted: ‘The real problem is how to restore the religious attitude while accepting death as final.’ This very much reflects his own attitudes – he habitually enjoyed attending Anglican church services, but without being a believer. He continues, epigrammatically:

Men can only be happy when they do not assume that the object of life is happiness. It is most unlikely, however, that Koestler would accept this. There is a well-marked hedonistic strain in his writings, and his failure to find a political position after breaking with Stalinism is a result of this.

Again, we strongly feel the tension between their respective characters: Orwell, with his English caution, and Koestler with his continental adventurism. In fact, Koestler had a reputation as something of an egotist and aggressive womaniser. Even his suicide reflected this: it was a double suicide with his third wife, who was over 20 years younger than he was and in good health. Her accompanying note explained that she couldn’t continue her life without him. Friends confirmed that she had entirely subjected her life to his: but to what extent this was a case of bullying, as some claimed, will never be known.

Of course there was much common ground between the two men: both were always on the political left, and both, as you might expect, were firmly opposed to capital punishment – anyone who needs convincing should read Orwell’s autobiographical essay A Hanging. And Koestler wrote a more prosaic piece – a considered refutation of the arguments for judicial killing – in his book Reflections on Hanging; it was written in the 1950s, when, on Koestler’s own account, some dozen hangings were occurring in Britain each year.

But while Orwell faced his death stoically, Koestler continued his dalliance with the notion of some form of hereafter; you feel that, as with Kurzweil, a well-developed ego did not easily accept the thought of extinction. In writing this post, I discovered that he had been one of a number of intellectual luminaries who contributed to a collection of essays under the title Life after Death, published in the 1970s. Keen to find a more detailed statement of his views, I turned to his piece – but found it rather disappointing. First I’ll sketch in a bit of background to clarify where I think he was coming from.

Back in Victorian times there was much interest in evidence of ‘survival’ – seances and table-rapping sessions were popular, and fraudulent mediums were prospering. Reasons for this are not hard to find: traditional religion, while strong, faced challenges. Steam-powered technology was burgeoning, the world increasingly seemed to be a wholly mechanical affair, and Darwinism had arrived to encourage the trend towards materialism. In 1882 the Society for Psychical Research was formed, becoming a focus both for those who were anxious to subvert the materialist world view, and for those who wanted to investigate the phenomena objectively and seek intellectual clarity.

But it wasn’t long before the revolution in physics, with relativity and quantum theory, exploded the mechanical certainties of the Victorians. At the same time millions suffered premature deaths in two world wars, giving the bereaved ample motivation to believe that those they had lost somehow still existed and could perhaps even be contacted.

Arthur Koestler

Koestler in later life (Eric Koch/Wikimedia Commons)

This seems to be the background against which Koestler’s ideas about the possibility of an afterlife developed. He leans a lot on the philosophical writings of the quantum physicist Erwin Schrödinger, and seeks to base a duality of mind and matter on the wave/particle duality of quantum theory. There’s a lot of talk about psi fields and suchlike – the sort of terminology which was already sounding dated at the time he was writing. The essay seemed to me rather backward-looking, sitting more comfortably with the inchoate fringe beliefs of the mid 20th century than with the confident secularism of Western Europe today.

A rebel to the end

I think Koestler was well aware of the way things were going, but with characteristic truculence reacted against them. He wrote a good deal on topics that clash with mainstream science, such as the significance of coincidence, and in his will left a bequest to establish a chair of parapsychology, which was duly set up at Edinburgh University and still exists.

This was clearly a deliberate attempt to cock a snook at the establishment, and while he was not an attractive character in many ways, this defiant stance does make me warm to him a little. While I am sure I would have found Orwell the more decent and congenial to know personally, Koestler is the more intellectually exciting of the two. I think Orwell might have found Koestler’s notion of the sense of freedom when facing death difficult to understand – but maybe this would have changed had he survived into his seventies. And in a general sense I share Koestler’s instinct that in human consciousness there is far more to understand than we have so far been able to, as it were, get our minds around.

Retirement, for me, will certainly bring freedom – not only freedom from the strained atmosphere of worldly ambition and corporate business-speak (itself an Orwellian development) but more of my own time to reflect further on the matters I’ve spoken of here.

iPhobia

Commuting days until retirement: 238

If you have ever spoken at any length to someone who is suffering with a diagnosed mental illness − depression, say, or obsessive compulsive disorder − you may have come to feel that what they are experiencing differs only in degree from your own mental life, rather than being something fundamentally different (assuming, of course, that you are lucky enough not to have been similarly ill yourself). It’s as if mental illness, for the most part, is not something entirely alien to the ‘normal’ life of the mind, but just a distortion of it. Rather than the presence of a new unwelcome intruder, it’s more that the familiar elements of mental functioning have lost their usual proportion to one another. If you spoke to someone who was suffering from paranoid feelings of persecution, you might just feel an echo of them in the back of your own mind: those faint impulses that are immediately squashed by your ability to draw logical common-sense conclusions from what you see about you. Or perhaps you might encounter someone who compulsively and repeatedly checks that they are safe from intrusion; but we all sometimes experience that need to reassure ourselves that a door is locked, when we know perfectly well that it really is.

That uncomfortably close affinity between true mental illness and everyday neurotic tics is nowhere more obvious than with phobias. A phobia serious enough to be clinically significant can make it impossible for the sufferer to cope with everyday situations; while on the other hand nearly every family has a member (usually female, but not always) who can’t go near the bath with a spider in it, as well as a member (usually male, but not always) who nonchalantly picks the creature up and ejects it from the house. (I remember that my own parents went against these sexual stereotypes.) But the phobias I want to focus on here are those two familiar opposites − claustrophobia and agoraphobia.

We are all phobics

In some degree, virtually all of us suffer from them, and perfectly rationally so. Anyone would fear, say, being buried alive, or, at the other extreme, being launched into some limitless space without hand or foothold, or any point of reference. And between the extremes, most of us have some degree of bias one way or the other. Especially so − and this is the central point of my post − in an intellectual sense. I want to suggest that there is such a phenomenon as an intellectual phobia: let’s call it an iphobia. My meaning is not, as the Urban Dictionary would have it, an extreme hatred of Apple products, or a morbid fear of breaking your iPhone. Rather, I want to suggest that there are two species of thinkers: iagorophobes and iclaustrophobes, if you’ll allow me such ugly words.

A typical iagorophobe will in most cases cleave to scientific orthodoxy. Not for her the wide open spaces of uncontrolled, rudderless, speculative thinking. She’s reassured by a rigid theoretical framework, comforted by predictability; any unexplained phenomenon demands to be brought into the fold of existing theory, for any other way, it seems to her, lies madness. But for the iclaustrophobe, on the other hand, it’s intolerable to be caged inside that inflexible framework. Telepathy? Precognition? Significant coincidence? Of course they exist; there is ample anecdotal evidence. If scientific orthodoxy can’t embrace them, then so much the worse for it − the incompatibility merely reflects our ignorance. To this the iagorophobe would retort that we have no logical grounds whatever for such beliefs. If we have nothing but anecdotal evidence, we have no predictability; and phenomena that can’t be predicted can’t therefore be falsified, so any such beliefs fall foul of the Popperian criterion of scientific validity. But why, asks the iclaustrophobe, do we have to be constrained by some arbitrary set of rules? These things are out there − they happen. Deal with it. And so the debate goes.

Archetypal iPhobics

Widening the arena more than somewhat, perhaps the archetypal iclaustrophobe was Plato. For him, the notion that what we see was all we would ever get was anathema – and he eloquently expressed his iclaustrophobic response to it in his parable of the cave. For him true reality was immeasurably greater than the world of our everyday existence. And of course he is often contrasted with his pupil Aristotle, for whom what we can see is, in itself, an inexhaustibly fascinating guide to the nature of our world − no further reality need be posited. And Aristotle, of course, is the progenitor of the syllogism and deductive logic. In Raphael’s famous fresco The School of Athens, the relevant detail of which you see below, Plato, on the left, indicates his world of forms beyond our immediate reality by pointing heavenward, while Aristotle’s gesture emphasises the earth, and the here and now. Raphael has them exchanging disputatious glances, which for me express the hostility that exists between the opposed iphobic world-views to this day.

School of Athens

Detail from Raphael’s School of Athens in the Vatican, Rome (Wikimedia Commons)

iPhobia today

It’s not surprising that there is such hostility; I want to suggest that we are talking not of a mere intellectual disagreement, but of a situation where each side insists on a reality to which the other has a strong (i)phobic reaction. Let’s look at a specific present-day example, from within the WordPress forums. There’s a blog called Why Evolution is True, which I’d recommend as a good read. It’s written by Jerry Coyne, a distinguished American professor of biology. His title is obviously aimed principally at the flourishing belief in creationism which exists in the US − Coyne has extensively criticised the so-called Intelligent Design theory. (In my view, that controversy is not a dispute between the two iphobias I have described, but between two forms of iagorophobia. The creationists, I would contend, are locked up in an intellectual ghetto of their own making, since venturing outside it would fatally threaten their grip on their frenziedly held, narrowly based faith.)

Jerry Coyne

Jerry Coyne (Zooterkin/Wikimedia Commons)

But I want to focus on another issue highlighted in the blog, which in this case is a conflict between the two phobias. A year or so ago Coyne took issue with the fact that the maverick scientist Rupert Sheldrake was given a platform to explain his ideas in the TED forum. Note Coyne’s use of the hate word ‘woo’, often used by the orthodox in science as an insulting reference to the unorthodox. They would defend it, mostly with justification, as characterising what is mystical or wildly speculative, and without evidential basis − but I’d claim there’s more to it than that: it’s also the iagorophobe’s cry of revulsion.

Rupert Sheldrake

Rupert Sheldrake (Zereshk/Wikimedia Commons)

Coyne has strongly attacked Sheldrake on more than one occasion: is there anything that can be said in Sheldrake’s defence? As a scientist he has an impeccable pedigree, with a Cambridge doctorate and fellowship in biology. It seems that he developed his unorthodox ideas early in his career, central among which is his notion of ‘morphic resonance’, whereby animal and human behaviour, and much else besides, is influenced by previous similar behaviour. It’s an idea that I’ve always found interesting to speculate about − but it’s obviously also a red rag to the iagorophobic bull. We can also mention that he has been careful to describe how his theories can be experimentally confirmed or falsified, thus claiming scientific status for them. He also invokes his ideas to explain aspects of the formation of organisms that, to date, haven’t been explained by the action of DNA. But increasing knowledge of the significance of what was formerly thought of as ‘junk DNA’ is going a long way towards filling these explanatory gaps, so Sheldrake’s position looks particularly weak here. And in his TED talks he not only defends his own ideas, but attacks many of the accepted tenets of current scientific theory.

However, I’d like to return to the debate over whether Sheldrake should be denied his TED platform. Coyne’s comments led to a reconsideration by the TED editors, who opened a public forum for discussion of the matter. The ultimate, not unreasonable, decision was that the talks were kept available, but separately from the mainstream content. Coyne said he was surprised by the level of invective arising from the discussion; but I’d say this is because we have here a direct confrontation between iclaustrophobes and iagorophobes − not merely a polite debate, but a forum where each side taunts the other with notions for which its opponents feel a visceral revulsion. And it has always been so; for me the iphobia concept explains the rampant hostility which always characterises debates of this type − as if the participants are facing not merely opposed ideas, but rival visions which evoke in each a deeply rooted fear.

I should say at this point that I don’t claim any godlike objectivity in this matter; I’m happy to come out of the closet as an iclaustrophobe myself. This doesn’t mean in my case that I take on board any amount of New Age mumbo-jumbo; I try to exercise rational scepticism where it’s called for. But as an example, let’s go back to Sheldrake: he’s written a book about the observation that housebound dogs sometimes appear to show marked excitement at the moment that their distant owner sets off to return home, although there’s no way they could have knowledge of the owner’s actions at that moment. I have no idea whether there’s anything in this − but the fact is that if it were shown to be true nothing would give me greater pleasure. I love mystery and inexplicable facts, and for me they make the world a more intriguing and stimulating place. But of course Coyne isn’t the only commentator who has dismissed the theory out of hand as intolerable woo. I don’t expect this matter to be settled in the foreseeable future, if only because it would be career suicide for any mainstream scientist to investigate it.

Science and iPhobia

Why should such a course of action be so damaging to an investigator? Let’s start by putting the argument that it’s a desirable state of affairs that such research should be eschewed by the mainstream. The success of the scientific enterprise is largely due to the rigorous methodology it has developed; progress has resulted from successive, well-founded steps of theorising and experimental testing. If scientists were to spend their time investigating every wild theory that was proposed, their efforts would become undirected and diffuse, and progress would stall. I can see the sense in this, and any self-respecting iagorophobe would endorse it. But against this, we can argue that progress in science often results from bold, unexpected ideas that come out of the blue (some examples in a moment). While the more restrictive outlook lends coherence to the scientific agenda, it can, just occasionally, exclude valuable insights. To explain why the restrictive approach holds sway, I would look at how a person’s psychological make-up might influence their career choice. Most iagorophobes are likely to be attracted by the logical, internally consistent framework they would work within as part of a scientific career, while those of an iclaustrophobic profile might be attracted in an artistic direction. Hence science’s inbuilt resistance to out-of-the-blue ideas.

Albert Einstein

Albert Einstein (Wikimedia Commons)

I may come from the iclaustrophobe camp, but I don’t want to claim that only people of that profile are responsible for great scientific innovations. Take Einstein: he may have had an early fantasy of riding on a light beam, but it was one which led him through rigorous mathematical steps to a vastly coherent and revolutionary conception. His essential iagorophobia is seen in his revulsion from the notion of quantum indeterminacy − his ‘God does not play dice’. Relativity, despite being wholly novel in its time, is often spoken of as a ‘classical’ theory, in the sense that it retains the mathematical precision and predictability of the Newtonian schema which preceded it.

Niels Bohr

Niels Bohr (Wikimedia Commons)

There was a long-standing debate between Einstein and Niels Bohr, the progenitor of the so-called Copenhagen interpretation of quantum theory, which holds that different sub-atomic scenarios coexist in ‘superposition’ until an observation is made and the wave function collapses. Bohr, it seems to me, with his willingness to entertain wildly counter-intuitive ideas, was a good example of an iclaustrophobe; so it’s hardly surprising that the debate between him and Einstein was so irreconcilable − although it’s to the credit of both that their mutual respect never faltered.

Over to you

Are you an iclaustrophobe or an iagorophobe? A Plato or an Aristotle? A Sheldrake or a Coyne? A Bohr or an Einstein? Or perhaps not particularly either? I’d welcome comments from either side, or neither.