A Passage of Time

I have let rather too much time elapse since my last post. What are my excuses? Well, I mentioned in one post how helpful my commuter train journeys were in encouraging the right mood for blog-writing – and since retiring such journeys are few and far between. To get on the train every time I felt a blog-post coming on would not be a very pension-friendly approach, given current fares. My other excuse is the endless number of tasks around the house and garden that have been neglected until now. At least we are now starting to create a more attractive garden, and recently took delivery of a summerhouse: I am hoping that this could become another blog-friendly setting.

Time travel inventor (tvtropes.org)

But since a lapse of time is my starting point, I could get back in by thinking again about the nature of time. Four years back (Right Now, 23/3/13) I speculated on the issue of why we experience a subjective ‘now’ which doesn’t seem to have a place in objective physical science. Since then I’ve come across various ruminations on the existence or non-existence of time as a real, out-there, component of the world’s fabric. I might have more to say about this in the, er, future – but what appeals to me right now is the notion of time travel. Mainly because I would welcome a chance to deal with my guilty feelings by going back in time and plugging the long gap in this blog over the past months.

I recently heard about an actual time travel experiment, carried out by no less a person than Stephen Hawking. In 2009, he held a party for time travellers. What marked it out as such was that he sent out the invitations after the party took place. I don’t know exactly who was invited; but, needless to say, the canapés remained uneaten and the champagne undrunk. I can’t help feeling that if I’d tried this, and no one had turned up, my disappointment would rather destroy any motivation to send out the invitations afterwards. Should I have sent them anyway, finally to prove time travel impossible? Even so, I think I’d want to sit on my hands, and leave the possibility open for others to explore. But the converse of this is the thought that, if the time travellers had turned up, I would be honour-bound to despatch those invites; the alternative would be some kind of universe-warping paradox. In that case I’d be tempted to try it and see what happened.

Elsewhere, in the same vein, Hawking has remarked that the impossibility of time travel into the past is demonstrated by the fact that we are not invaded by hordes of tourists from the future. But there is one rather more chilling explanation for their absence: namely that time travel is theoretically possible, but we have no future in which to invent it. Since last year that unfortunately looks a little more likely, given the current occupant of the White House. That such a president is possible makes me wonder whether the universe is already a bit warped.

Should you wish to flee this troublesome present and escape into a different future, whatever it might hold, it can of course be done. As Einstein famously showed in 1905, it’s just a matter of inventing for yourself a spaceship that can accelerate to nearly the speed of light and taking a round trip in it. And of course this isn’t entirely science fiction: astronauts and satellites – even airliners – regularly take trips of microseconds or so into the future; and indeed our now familiar satnav devices wouldn’t work if this effect weren’t taken into account.
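
For anyone who fancies putting rough numbers on that, here is a minimal sketch of the special-relativistic arithmetic. It is my own illustration, with made-up speeds and durations, and it ignores the gravitational effects that also matter for real satellites such as the satnav constellation:

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def time_gained(speed_m_s: float, traveller_seconds: float) -> float:
    """Seconds by which a stay-at-home clock gets ahead of the traveller's.

    Uses the time dilation factor gamma = 1 / sqrt(1 - v^2/c^2);
    gravitational effects are ignored.
    """
    gamma = 1.0 / math.sqrt(1.0 - (speed_m_s / C) ** 2)
    return traveller_seconds * (gamma - 1.0)

# Roughly orbital speed (~7.7 km/s) sustained for six months:
print(time_gained(7_700, 182 * 24 * 3600))  # about 5 milliseconds 'into the future'

# A jet airliner (~250 m/s) on a ten-hour flight:
print(time_gained(250, 10 * 3600))          # a dozen or so nanoseconds
```

The point of the toy figures is simply that the effect is real but tiny at everyday speeds; only at a substantial fraction of light speed does the trip into the future become noticeable.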

But the problem of course arises if you find the future you’ve travelled to isn’t one you like. (Trump, or some nepotic Trumpling,  as world president? Nuclear disaster? Both of these?) Whether you can get back again by travelling backward in time is not a question that’s really been settled. Indeed, it’s the ability to get at the past that raises all the paradoxes – most famously what would happen if you killed your grandparents or stopped them getting together.

Marty McFly with his teenage mother

This is a furrow well-ploughed in science fiction, of course. You may remember the Marty McFly character in the film Back to the Future, who embarks on a visit to the past enabled by his mad scientist friend. It’s one way of escaping from his dysfunctional, feckless parents, but having travelled back a generation in time he finds himself being romantically approached by his teenage mother. He manages eventually to redirect her towards his young father, but on returning to the present finds his parents transformed into an impossibly hip and successful couple.

Then there’s Ray Bradbury’s story A Sound of Thunder, where tourists can travel back to hunt dinosaurs – but only those which were established to have been about to die in any case, and any bullets must then be removed from their bodies. As a further precaution, the would-be hunters are kept away from the ground by a levitating path, to prevent any other paradox-inducing changes to the past. One bolshie traveller, however, breaks the rules, reaches the ground, and ends up with a crushed butterfly on the sole of his boot. On returning to the present he finds that the language is subtly different, and that the man who had been the defeated fascist candidate for president has now won the election. (So, thinking of my earlier remarks, could some prehistoric butterfly crusher, yet to embark on his journey, be responsible for the current world order?)

My favourite paradox is the one portrayed in a story called The Muse by Anthony Burgess, in which – to leave a lot out – a time travelling literary researcher manages to meet William Shakespeare and question him on his work. Shakespeare’s eye alights on the traveller’s copy of the complete works, which he peruses and makes off with, intending to mine it for ideas. This seems like the ideal solution for struggling blog-writers like me, given that, having travelled forward in time and copied what I’d written on to a flash drive, I could return to the present and paste it in here. Much easier.

But these thoughts put me in mind of a more philosophical issue with time which has always fascinated me – namely whether it’s reversible. We know how to travel forward in time; but when it comes to travelling backward there are various theories as to how it might be done, and no one is very sure. Does this indicate a fundamental asymmetry in the way time works? Of course this is a question that has been examined in much greater detail in another context: the second law of thermodynamics, we are told, says it all.

Let’s just review those ideas. Think of running a film in reverse. Might it show anything that could never happen in real, forward time? Well of course if it were some sort of space film which showed planets orbiting the sun, or a satellite around the earth, then either direction is possible. But, back on earth, think of all those people you’d see walking backwards. Well, on the face of it, people can walk backwards, so what’s the problem? Well, here’s one of many that I could think of: imagine that one such person is enjoying a backward country walk on a path through the woods. As she approaches a protruding branch from a sapling beside the path, the branch suddenly whips sideways towards her as if to anticipate her passing, and then, laying itself against her body, unbends itself as she backward-walks by, and has then returned to its rest position as she recedes. Possible? Obviously not. But is it?

I’m going to argue against the idea that there is a fundamental ‘arrow of time’: despite the evident truth of the laws of thermodynamics and the irresistible tendency we observe toward increasing disorder, or entropy, there is, I will claim, nothing ultimately irreversible about physical processes. I’ve deliberately chosen an example which seems to make my case harder to maintain, to see if I can explain my way out of it. You will have had the experience of walking by a branch which projects across your path, noticing how your body bends it forwards as you pass, and seeing it spring back to its former position as you continue on. Could we envisage a sequence of events in the real world where all this happened in reverse?

Before answering that I’m going to look at a more familiar type of example. I remember being impressed many years ago by an example of the type of film I mentioned, illustrating the idea of entropy. It showed someone holding a cup of tea, and then letting go of it, with the expected results. Then the film was reversed. The mess of spilt tea and broken china on the floor drew itself together, and as the china pieces reassembled themselves into a complete cup and saucer, the tea obediently gathered itself back into the cup. As this process completed the whole assembly launched itself from the floor and back into the hands of its owner.

Obviously, that second part of the sequence would never happen in the real world. It’s an example of how, left to itself, the physical world will always progress to a state of greater disorder, or entropy. We can even express the degree of entropy mathematically, using information theory. Case closed, then – apart, perhaps, from biological evolution? And even then it can be shown that if some process – like the evolution of a more sophisticated organism – decreases entropy, it will always be balanced by a greater increase elsewhere; and so the universe’s total amount of entropy increases. The same applies to our own attempts to impose order on the world.
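
As an aside for the curious, here is one minimal sketch of what ‘expressing entropy mathematically, using information theory’ can look like: Shannon’s measure applied to two toy probability distributions of my own invention, one concentrated in a single ‘ordered’ state and one spread evenly over many ‘disordered’ ones.

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero terms."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# 'Ordered': the system is almost certain to be found in one particular state.
ordered = [0.97, 0.01, 0.01, 0.01]

# 'Disordered': equally likely to be in any one of 1024 states.
disordered = [1 / 1024] * 1024

print(shannon_entropy(ordered))     # ~0.24 bits
print(shannon_entropy(disordered))  # 10.0 bits, the maximum for 1024 states
```

The more evenly the possibilities are spread, the higher the entropy; the intact cup corresponds to the concentrated distribution, the spreading stain on the floor to the spread-out one.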

So how could I possibly plead for reversibility of time? Well, I tend to think that this apparent ‘arrow’ is a function of our point of view as limited creatures, and our very partial perception of the universe. I would ask you to imagine, for a moment, some far more privileged being – some sort of god, if you like – who is able to track all the universe’s individual particles and fields, and what they are up to. Should this prodigy turn her attention to our humble cup of tea, what she saw would, I think, be very different from the scene as experienced through our own senses. From her perspective, the clean lines of the china cup which we see would become less defined – lost in a turmoil of vibrating molecules, themselves constantly undergoing physical and chemical change. The distinction between the shattered cup on the floor and the unbroken one in the drinker’s hands would be less clear.

Colour blindness test

What I’m getting at is the fact that what we think of as ‘order’ in our world is an arrangement that seems significant only from one particular point of view, determined by the scale and functionality of our senses: the arrangement we think of as ‘order’ floats like an unstable mirage in a sea of chaos. As a very rough analogy, think of those patterns of coloured dots used to detect colour blindness. You can see the number in the one I’ve included only if your retinal cells function in a certain way; otherwise all you’d see would be random dots.

And, in addition to all this, think of the many arrangements which (to us) might appear to have ‘order’ – all the possible drinks in the cup, all the possible cup designs – etc, etc. But compared to all the ‘disordered’ arrangements of smashed china, splattered liquid and so forth, the number of potential configurations which would appeal to us as being ‘ordered’ is truly infinitesimal. So it follows that the likelihood of moving from a state we regard as ‘disordered’ to one of ‘order’ is unimaginably slim; but not, in principle, impossible.
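
To put a toy number on ‘truly infinitesimal’, suppose (a deliberately crude model of my own) that the cup shatters into 100 fragments and each fragment is equally likely to end up back ‘in place’ or somewhere on the floor; only the single arrangement with every fragment in place counts as ‘ordered’:

```python
# Toy model: 100 fragments, each independently 'in place' or 'on the floor'.
fragments = 100
total_configurations = 2 ** fragments   # every possible pattern of in/out
ordered_configurations = 1              # the one pattern with every fragment in place

print(total_configurations)                           # 1267650600228229401496703205376, i.e. ~1.3e30
print(ordered_configurations / total_configurations)  # ~7.9e-31
# ...and a real cup has vastly more than 100 'fragments', and far more than two
# places for each to be, so the true odds are more lopsided still.
```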

So let’s imagine that one of these one-in-a-squillion chances actually comes off. There’s the smashed cup and mess of tea on the floor. It’s embedded in a maze of vibrating molecules making up the floor, the surrounding air, and so on. And in this case it so happens that the molecular impacts between the elements of the cup and the tea and their surroundings combine so as to nudge them all back into their ‘ordered’ configuration, and to boost them off the floor and back into the hands of the somewhat mystified drinker.

Yes, the energy is there to make that happen – it just has to come together in exactly the correct, fantastically unlikely way. I don’t know how to calculate the improbability of this, but I should imagine that to see it happen we would need to do continual trials for a time period which is some vast multiple of the age of the universe. (Think monkeys typing the works of Shakespeare, and then multiply by some large number.) In other words, of course, it just doesn’t happen in practice.

But, looked at another way, such unlikely things do happen. Think of when we originally dropped the cup and ended up with some sort of mess on the floor – one particular mess out of the myriad of other possible messes that could have been created had the cup been dropped at a slightly different angle, had the floor been dirtier, had the weather been different, and so on. How likely is that exact configuration of mess that we ended up with? Fantastically unlikely, of course – but it happened. We’d never in practice be able to produce it a second time.

So with each of the sorts of event we’ve been considering, one of these innumerable configurations of matter comes about – whether or not it is one of the tiny minority that seem to us ‘ordered’. The universe at large is indifferent to our notion of ‘order’, and at each juncture throws up some random selection from the unthinkably large number of possibilities. It’s just that the ordered states are so few in number compared to the disordered ones that they never in practice come about spontaneously, but only when we deliberately foster them into being, by doing such things as manufacturing teacups, or making tea.

Let’s return, then, to the branch that our walker brushes past on the woodland footpath, and give that a similar treatment. It’s a bit simpler, if anything: we just require the astounding coincidence that, as the backwards walker approaches the branch, the random Brownian motion of an unimaginably large number of air molecules happens to combine to give the branch a series of rhythmic, increasing nudges. It appears to oscillate with increasing amplitude until one final nudge lays it against the walker’s body just as she passes. Not convinced? Well, this is just one of the truly countless possible histories of the movement of a vast number of air molecules – one which has a consequence we can see.

Remember that the original Robert Brown, of Brownian motion fame, did see random movements of pollen grains in water; and since it didn’t occur to him that the water molecules were responsible for this, he thought it was a property of the pollen grains themselves. Should we happen to witness such an astronomically unlikely movement of the tree, we would suspect some mysterious bewitchment of the tree itself, rather than one specific and improbable combination of air molecule movements.

You’ll remember that I was earlier reflecting that we know how to travel forwards in time, but that backward time travel is more problematic. So doesn’t this indicate another asymmetry – more evidence of an arrow of time? Well, I think the right way of thinking about this emerges when we are reminded that this very possibility of time travel was a consequence of a theory called ‘relativity’. So think relative. We know how to move forward in time relative to other objects in our vicinity. Equally, we know how they could move forward in time relative to us. Which of course means that we’d be moving backward relative to them. No asymmetry there.

Maybe the one asymmetry in time which can’t be analysed away is our own subjective experience of moving constantly from a ‘past’ into a ‘future’ – as defined by our subjective ‘now’. But, as I was pointing out back then, this seems to be more a property of ourselves as experiencing creatures, rather than of the objective universe ‘out there’.

I’ll leave you with one more apparent asymmetry. If processes are reversible in time, why do we only have records of the past, and not records of the future? Well, I’ve gone on long enough, so in the best tradition of lazy writers, I will leave that as an exercise for the reader.

Are We Deluded?

Commuting days until retirement: 19

My last post searched, somewhat uncertainly, for a reason to believe that we are in a meaningful sense free to make decisions – to act spontaneously in some way that is not wholly and inevitably determined by the state of the world before we act: the question of free will, in other words. In a comment, bloggingisaresponsibility referred me to the work of Sam Harris, a philosopher and neuroscientist who argues cogently for the opposite position.

Sam Harris (Wikimedia/Steve Jurvetson)

Harris points to examples of cases where someone can be mistaken about how they came to a certain decision: it’s well known that under hypnosis a subject can be told to take an action in response to a prompt, after having been woken from the hypnotic trance. ‘When I clap my hands you will open the window.’ When the subject duly carries out the command, and is asked about why she took the action, she may say that the room was feeling stuffy or some such, and give every sign of genuinely believing that this was the motive.

And I can think of some slightly unnerving examples from my own personal life where it has become clear over a period of time that all the behaviour of someone I know is aimed towards a certain outcome, while the intentions that they will own up to – quite honestly, it appears – are quite different.

So I’d accept it as undeniable that we can believe ourselves to be making a free choice, when the real forces driving our actions are unknown to us. But it’s one thing to claim that we can be mistaken about what is driving us towards this or that action, and quite another to maintain that we are systematically deluded about what it is to make choices in general. So what do I mean by choices?

I argued in the last post that genuine choices are not to be identified with the sort of random, meaningless bodily movements that a scientist might be able to study and analyse in a laboratory. When we truly exercise what we might call our will, we are typically weighing up a number of alternatives and deciding what might seem to us the ‘best’ one. Typically we may be trying to arbitrate between conflicting desires: do I stick to my diet and feel healthy, or give in and be seduced by the jumbo gourmet burger and chips?  Or you can read in any newspaper about men or women who have sacrificed a lifetime of domestic happiness for the promise of the short-lived affair that satisfies their cravings. (You don’t of course read about those who made the other choice.)

I hope that gives a flavour of what it really is to exercise choice: it’s all about subjective feelings – about uncertainly picking our way through an incredibly varied mental landscape of desires, emotions, pain, pleasure, knowledge and learnt experience – and of course making conscious decisions about where to place our steps. It seems to me that the arguments of determinists such as Harris would be irrefutable if only we were insentient robots, which we are not.

How deluded are we?

But Harris has an answer to that argument. We are not just deluded about the spontaneity of our actions:

It is not that free will is simply an illusion – our experience is not merely delivering a distorted view of reality. Rather, we are mistaken about our experience. Not only are we not as free as we think we are – we do not feel as free as we think we do. Our sense of our own freedom results from our not paying close attention to what it is like to be us. The moment we pay attention, it is possible to see that free will is nowhere to be found, and our experience is perfectly compatible with this truth. Thoughts and intentions simply arise in the mind. What else could they do? The truth about us is stranger than many suppose: The illusion of free will is itself an illusion. The problem is not merely that free will makes no sense objectively (i.e., when our thoughts and actions are viewed from a third-person point of view); it makes no sense subjectively either. (From Free Will – Harris’s italics)

‘Thoughts and intentions simply arise in the mind.’ Do they? Well, we have to admit that they do, all the time. We don’t generally decide what we are going to dream about – as one example – and Harris gives many other instances of actions taken in response to thoughts that ‘just arise’. But does this cover every willed, considered decision? I don’t think it does, although Harris argues otherwise.

But the key sentence here, for me, is: ‘The illusion of free will is itself an illusion.’ – and the italics indicate that it is for Harris too. We may think we have the impression that we are exercising our wills, but we don’t. The impression is an illusion too.* Does that make sense to you? It doesn’t to me. But it’s very much in the spirit of a growing movement which espouses a particular way of dealing with our subjective nature. I think Daniel Dennett must be one of the pioneers: in a post two years ago I contested his arguments that qualia, the elements that comprise our conscious experience, do not exist.

Here’s another writer, Susan Blackmore, in a compilation from the Edge website where the contributors nominate ideas which they think should become extinct. Blackmore is a psychologist and former psychic researcher turned sceptic, and her choice for the dustbin is ‘The Neural Correlates of Consciousness’. She argues that, while much cutting edge research effort is going into the search for the biological processes that are the neural counterpart of consciousness, this is a wild goose chase – they’ll never be found. Well, so far I agree, but I suspect for very different reasons.

Consciousness is not some weird and wonderful product of some brain processes but not others. Rather, it’s an illusion constructed by a clever brain and body in a complex social world. We can speak, think, refer to ourselves as agents, and so build up the false idea of a persisting self that has consciousness and free will.

There can’t be any neural correlates of consciousness, says Blackmore, because there’s nothing for the neural processes to be correlated with. So here we have it again, this strange conclusion that flies in the face of common sense. Well, of course, if a philosophical or scientific idea is incompatible with common sense that doesn’t necessarily disqualify it from being worth serious consideration. But in this case I believe it goes much, much further than that.

Let’s just stop and examine what is being claimed. We believe we have a private world of subjective conscious impressions; but that belief is based on an illusion – we don’t have such a world. But an illusion is itself a subjective experience. How can the broad class of things of which illusions are one subclass be itself an illusion? The notion is simply nonsense. You could only rescue it from incoherence by saying that illusions could be described as data in a processing machine (like a brain) which embody false accounts of what they are supposed to represent.

Imagine one of those systems which reads car number plates and measures average speeds over a stretch of road. Suppose we somehow got into the works and caused the numbers to be altered before they were logged, so that no speeding tickets were issued. Could we then say that the system was suffering an illusion? It would be a very odd way of speaking – because illusions are experiences, not scrambled data. Having an illusion implies consciousness (which involves something I have written about before – intentionality).  Just as Descartes famously concluded that he couldn’t doubt the existence of his doubting, we can’t be deluded about the experience of being deluded.

History repeats

The universe according to Aristotle (mysearch.org.uk)

Here’s an example of how we can have illusions about the nature of the world: it was once an unquestioned belief that our planet was stationary and the sun orbited around it. Through objective measurement and logical analysis we now know that is wrong. But people thought this because it felt like it – our beliefs start with subjective experience (which we don’t have, according to the view I’m criticising). But of course a whole established world-view was based around this illusion. We are told that when one of the proponents of the new conception – Galileo – discovered corroborating evidence through his telescope, in the form of satellites orbiting Jupiter, supporters of the status quo refused to look into the telescope. (The facts of that account may be a little different.) But it nevertheless illustrates the extremity of the measures which the believers in an established order may take in order to protect it.

So now we have a 21st century version of that phenomenon. Our objective knowledge of the brain as an electrochemical machine can’t, even in principle, explain the existence of subjective experiences. If we are not to admit that our account of the world is seriously incomplete, a quick fix is simply to deny that this messy subjectivity is anything real, and conveniently ignore whether we are making any sense in doing so.

A Princeton psychologist, Michael Graziano, who researches consciousness, was quoted in a recent issue of New Scientist magazine, referring to what philosopher David Chalmers called ‘the hard problem’ – how and why the brain should give rise to conscious awareness at all:

“There is no hard problem,” says Graziano. “There is only the question of how the brain, an information-processing device, concludes and insists it has consciousness. And that is a problem of information processing. To understand that process fully will require [scientific experiments]”**.

So this wholly incoherent notion – of conscious experience as an illusion – is taken as the premise for a scientific investigation. And look at the language: it’s not you or I who are insisting we are conscious, but ‘the brain’. In this very defensive objectivisation of the terms used lies the modern equivalent of the 17th century churchmen who supposedly turned away from the telescope. If we only take care to avoid any mention of the subjective, we can reassure ourselves that none of this inconvenient consciousness stuff really exists – only in the ravings of a heretic would such an idea be entertained. And the scientific hegemony is spared the embarrassment of a province it doesn’t look like being able to conquer.

But free will? Even if I have convinced you that our subjective nature is real, that question may still be open. But as I mentioned before, I think the determinism arguments would only have irresistible force if we were insentient creatures, and I have tried to underline the fact that we are not. Our subjective world is the most immediate and undeniable reality of our experience – indeed it is our experience. It’s there, in that world, that we seem to be free, and in which libertarians like myself believe we are free. Not surprisingly, it’s that world whose reality Harris is determined to deny. My contention is that, in doing so, he joins others in the fraternity of uncompromising physicalists and, like them, fatally undermines his own position.


*I haven’t explicitly distinguished between what I mean by illusion and delusion. Just to be clear: an illusion is experiencing something that appears other than it is. A delusion would be when we believe it to be as it appears. So while, for example, Harris would admit to experiencing what he believes to be the illusion of freewill, he would not admit to being deluded by it. But he would of course claim that I and many others are deluded.

**A stable mind is a conscious mind, in New Scientist 11 April 2015, p10. I did find an article for the New York Times by Graziano in which he addresses more directly some of the objections I have raised. But for the sake of brevity I’ll just mention that in that article I believe he simply falls into the same conceptual errors that I have already described.

Freedom and Purpose

Commuting days until retirement: 34

Retirement, I find, involves a lot of decisions. This blog shows that the important one was taken over two years ago – but there have been doubts about that along the way. And then, as the time approaches, a whole cluster of secondary decisions loom. Do I take my pension income by this method or that method? Can I phase my retirement and continue part-time for a while? (That one was taken care of for me – the answer was no. I felt relieved; I didn’t really want to.)  So I am free to make the first of these decisions, but not the second. And that brings me to what this post is about: what it means when we say we are ‘free’ to make a decision.

I’m not referring to the trivial sense, in which we are not free if some external factor constrains us, as with my part-time decision. It’s that more thorny philosophical problem I’m chasing, namely the dilemma as to whether we can take full responsibility as the originators of our actions; or whether we should assume that they are an inevitable consequence of the way things are in the world – the world of which our bodies and brains are a part.

It’s a dilemma which seems unresolved in modern Western society: our intuitive everyday assumption is that the first is true; indeed our whole system of morals – and of law and justice – is founded on it: we are individually held responsible for our actions unless constrained by external circumstances, or perhaps some mental dysfunction that we cannot help. Yet in our increasingly secular society, majority educated opinion drifts towards the materialist view – that the traditional assumption of freedom of the will is an illusion.

Any number of books have been written on how these approaches might be reconciled; I’m not going to get far in one blog post. But it does seem to me that this concept of freedom of action is far more elusive than is often accepted, and that facile approaches to it often end up by missing the point altogether. I would just like to try and give some idea of why I think that.

Early in Ian McEwan’s novel Atonement, the child writer Briony finds herself alone in a quiet house, in a reflective frame of mind:

Briony Tallis depicted on the cover of Atonement

She raised one hand and flexed its fingers and wondered, as she had sometimes before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge. She brought her forefinger closer to her face and stared at it, urging it to move. It remained still because she was pretending, she was not entirely serious, and because willing it to move, or being about to move it, was not the same as actually moving it. And when she did crook it finally, the action seemed to start in the finger itself, not in some part of her mind. When did it know to move, when did she know to move it?  There was no catching herself out. It was either-or. There was no stitching, no seam, and yet she knew that behind the smooth continuous fabric was the real self – was it her soul? – which took the decision to cease pretending, and gave the final command.

I don’t know whether, at the time of writing, McEwan knew of the famous (or infamous) experiments of Benjamin Libet, carried out some 18 years before the book was published. McEwan is a keen follower of scientific and philosophical ideas, so it’s quite likely that he did. Libet, who had been a neurological researcher since the early 1960s, designed a seminal series of experiments in the early eighties in which he examined the psychophysiological processes underlying the experience McEwan evokes.

Subjects were hooked up to detectors of brain impulses, and then asked to press a key or take some other detectable action at a moment of their own choosing, during some given period of time. They were also asked to record the instant at which they consciously made the decision to take action, by registering the position of a moving spot on an oscilloscope.

The most talked about finding of these experiments was not only that there was an identifiable electrical brain impulse associated with each decision, but that it generally occurred before the reported moment of the subject’s conscious decision. And so, on the face of it, the conclusion to be drawn is that, when we imagine ourselves to be freely taking a decision, it is really being driven by some physical process of which we are unaware; ergo free will is an illusion.

Benjamin Libet

But of course it’s not quite that simple. In the course of his experiments Libet himself found that sometimes there was an impulse looking like the initiation of an action which was not actually followed by one. It turned out that in these cases the subject had considered moving at that moment but decided against it; so it’s as if, even when there is some physical drive to action, we may still have the freedom to veto it. Compare McEwan’s Briony: ‘It remained still because she was pretending, she was not entirely serious, and because willing it to move, or being about to move it, was not the same as actually moving it.’ And this description is one that I should think we can all recognise from our own experience.

There have been other criticisms: if a subject may be deluded about when their actions are initiated, how reliable can their assessment be of exactly when they made a decision? (This from arch-physicalist Daniel Dennett). We feel we are floating helplessly in some stirred-up conceptual soup where objective and subjective measurements are difficult to disentangle from one another.

But you may be wondering all this time what these random finger crookings and key pressings have to do with my original question of whether we are free to make the important decisions which can shape our lives. Well, I hope you are, because that’s really the point of my post. There’s a big difference between these rather meaningless physical actions and the sorts of voluntary decisions that really interest us. Most, if not all, significant actions we take in our lives are chosen with a purpose. Philosophical reveries like Briony’s apart, we don’t sit around considering whether to move our finger at this moment or that moment; such minor bodily movements are normally triggered quite unconsciously, and generally in the pursuit of some higher end.

Rather, before opting for one of the paths open to us, there is some mental process of weighing up and considering what the result of each alternative might be, and which outcome we think it best to bring about. This may be an almost instantaneous judgement (which way to turn the steering wheel) or a more extended consideration of, for example, whether I should arrange my finances to my own maximum advantage, or to that of my family after my death. In either case I am constrained by a complicated network of beliefs, prejudices and instincts, some of which I am probably only slightly consciously aware of, if at all.

Teasing out the meaning of what it is for a decision to be ‘free’ in this context is evidently very difficult, and certainly not something I’m going to try and achieve here, even if I could. But what is clear is that an isolated action like crooking your finger or pressing a button at some random moment, and for no specific purpose, has very little in common with the decisions by which we order our lives. It’s extremely difficult to imagine any objective experiment which could reliably investigate the causes of those more significant choices.

David Hume

Immanuel Kant

So maybe we are driven towards the philosopher Hume’s view that ‘reason is, and ought only to be the slave of the passions’. But I find the Kantian view attractive – that we can objectively deduce a morally correct course of action from our own existence as rational, sentient beings. Perhaps our freedom somehow consists in our ability to navigate a course between these two – to recognize when our ‘passions’ are driving us in the ‘right’ direction, and when they are not. Or that when we have conflicting instincts, as we often do, there is the potential freedom to rationally adjudicate between them.

Some have attempted to carve out a space for freewill in a supposedly deterministic universe by pointing to the randomness of quantum events and suchlike as the putative first causes of action. But this is an obvious fallacy. If our actions were bound by such meaningless occurrences, there is no sense in which we could be considered free at all. However this perspective does, it seems to me, throw some light on the Libet experiments. If we are asked to take random, meaning-free decisions, is it surprising that we then appear to be subjugating ourselves to whatever random, purposeless events might be taking place in our nervous system?

Ian McEwan must have had in mind the dichotomy between meaningless, consequence-free actions and significant ones, and how we can ascribe responsibility. The plot of Atonement, as its title hints, eventually hinges on the character Briony’s own sense of responsibility for those of her actions that are significant in a broader perspective. But as we are introduced to her, McEwan has her puzzling over the source of those much more limited impulses that do not spring from any sort of rationale.

Recently I wrote about Martin Gardner, a strict believer in scientific rigour but also in metaphysical truths not capable of scientific demonstration, and his approach appeals to me. Freewill, he asserts, is inseparable from consciousness:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will. Persons completely paralyzed can decide what to think about or when to blink their eyes. Nor can I imagine myself having free will without being conscious. (From The Whys of a Philosophical Scrivener, Postscript)

At the beginning of his chapter on free will he refers to Wittgenstein’s doctrine that only those questions which can meaningfully be asked can have answers, and what remains cannot be spoken about, continuing:

The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

The chapter examines a wide range of views before restating Gardner’s own position.  ‘Indeed,’ he says, ‘it was with a feeling of enormous relief that I concluded, long ago, that free will is an unfathomable mystery.’

It will be with another feeling of enormous relief that I will soon have a taste of freedom of a kind I haven’t before experienced; but will I be truly free? Well, I will at least have more time to think (freely or otherwise) about it.

The Mathematician and the Surgeon

Commuting days until retirement: 108

After my last post, which, among other things, compared differing attitudes to death and its aftermath (or absence of one) on the part of Arthur Koestler and George Orwell, here’s another fruitful comparison. It seemed to arise by chance from my next two commuting books, and each of the two people I’m comparing, as before, has his own characteristic perspective on that matter. Unlike my previous pair both could loosely be called scientists, and in each case the attitude expressed has a specific and revealing relationship with the writer’s work and interests.

The Mathematician

The first writer, whose book I came across by chance, has been known chiefly for mathematical puzzles and games. Martin Gardner was born in Oklahoma USA in 1914; his father was an oil geologist, and it was a conventionally Christian household. Although not trained as a mathematician, and going into a career as a journalist and writer, Gardner developed a fascination with mathematical problems and puzzles which informed his career – hence the justification for his half of my title.

Gardner as a young man (Wikimedia)

This interest continued to feed the constant books and articles he wrote, and he was eventually asked to write the Scientific American column Mathematical Games, which ran from 1956 until the mid 1980s, and for which he became best known; his enthusiasm and sense of fun shine through the writing of these columns. At the same time he was increasingly concerned with the many types of fringe beliefs that had no scientific foundation, and was a founder member of CSICOP, the organisation dedicated to the exposing and debunking of pseudoscience. Back in February last year I mentioned one of its other well-known members, the flamboyant and self-publicising James Randi. By contrast, Gardner was mild-mannered and shy, averse to public speaking and never courting publicity. He died in 2010, leaving behind him many admirers and a two-yearly convention – the ‘Gathering for Gardner‘.

Before learning more about him recently, and reading one of his books, I had known his name from the Mathematical Games column, and heard of his rigid rejection of things unscientific. I imagined some sort of skinflint atheist, probably with a hard-nosed contempt for any fanciful or imaginative leanings – however sane and unexceptionable they might be – towards what might be thought of as things of the soul.

How wrong I was. His book that I’ve recently read, The Whys of a Philosophical Scrivener, consists of a series of chapters with titles of the form ‘Why I am not a…’ and he starts by dismissing solipsism (who wouldn’t?) and various forms of relativism; it’s a little more unexpected that determinism also gets short shrift. But in fact by this stage he has already declared that

I myself am a theist (as some readers may be surprised to learn).

I was surprised, and also intrigued. Things were going in an interesting direction. But before getting to the meat of his theism he spends a good deal of time dealing with various political and economic creeds. The book was written in the mid 80s, not long before the collapse of communism, which he seems to be anticipating (Why I am not a Marxist). But equally he has little time for Reagan or Thatcher, laying bare the vacuity of their over-simplistic political nostrums (Why I am not a Smithian).

Soon after this, however, he is striding into the longer grass of religious belief: Why I am not a Polytheist; Why I am not a Pantheist – so what is he? The next chapter heading is a significant one: Why I do not Believe the Existence of God can be Demonstrated. This is the key, it seems to me, to Gardner’s attitude – one to which I find myself sympathetic. Near the beginning of the book we find:

My own view is that emotions are the only grounds for metaphysical leaps.

I was intrigued by the appearance of the emotions in this context: here is a man whose day job is bound up with his fascination for the powers of reason, but who is nevertheless acutely conscious of the limits of reason. He refers to himself as a ‘fideist’ – one who believes in a god purely on the basis of faith, rather than any form of demonstration, either empirical or through abstract logic. And if those won’t provide a basis for faith, what else is there but our feelings? This puts Gardner nicely at odds with the modish atheists of today, like Dawkins, who never tires of telling us that he too could believe if only the evidence were there.

But at the same time he is squarely in a religious tradition which holds that ultimate things are beyond the instruments of observation and logic that are so vital to the secular, scientific world of today. I can remember my own mother – unlike Gardner a conventional Christian believer – being very definite on that point. And it reminds me of some of the writings of Wittgenstein; Gardner does in fact refer to him,  in the context of the freewill question. I’ll let him explain:

A famous section at the close of Ludwig Wittgenstein’s Tractatus Logico-Philosophicus asserts that when an answer cannot be put into words, neither can the question; that if a question can be framed at all, it is possible to answer it; and that what we cannot speak about we should consign to silence. The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

This mirrors some of my own thoughts about that particular philosophical problem – a far more slippery one than those on either side of it often claim, in my opinion (I think that may be a topic for a future post). I can add that Gardner was also on the unfashionable side of the question which came up in my previous post – that of an afterlife; and again he holds this out as a matter of faith rather than reason. He explores the philosophy of personal identity and continuity in some detail, always concluding with the sentiment ‘I do not know. Do not ask me.’ His underlying instinct seems to be that there has to be something more than our bodily existence, given that our inner lives are so inexplicable from the objective point of view – so much more than our physical existence. ‘By faith, I hope and believe that you and I will not disappear for ever when we die.’ By contrast, Arthur Koestler, you may remember, wrote in his suicide note of ‘timid hopes’ for a depersonalised afterlife – but, as it turned out, these hopes were based partly on the sort of parapsychological evidence which was anathema to Gardner.

And of course Gardner was acutely aware of another related mystery – that of consciousness, which he finds inseparable from the issue of free will:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will… Nor can I imagine myself having free will without being conscious.

He expresses utter dissatisfaction with the approach of arch-physicalists such as Daniel Dennett, who,  as he says,  ‘explains consciousness by denying that it exists’. (I attempted to puncture this particular balloon in an earlier post.)

Gardner in later life (Konrad Jacobs / Wikimedia)

Gardner places himself squarely within the ranks of the ‘mysterians’ – a deliberately derisive label applied by their opponents to those thinkers who conclude that these matters are mysteries which are probably beyond our capacity to solve. Among their ranks is Noam Chomsky: Gardner cites a 1983 interview with the grand old man of linguistics,  in which he expresses his attitude to the free will problem (scroll down to see the relevant passage).

The Surgeon

And so to the surgeon of my title, and if you’ve read one of my other blog posts you will already have met him – he’s a neurosurgeon named Henry Marsh, and I wrote a post based on a review of his book Do No Harm. Well, now I’ve read the book, and found it as impressive and moving as the review suggested. Unlike many in his profession, Marsh is a deeply humble man who is disarmingly honest in his account about the emotional impact of the work he does. He is simultaneously compelled towards,  and fearful of, the enormous power of the neurosurgeon both to save and to destroy. His narrative swings between tragedy and elation, by way of high farce when he describes some of the more ill-conceived management ‘initiatives’ at his hospital.

A neurosurgical operation (Mainz University Medical Centre)

The interesting point of comparison with Gardner is that Marsh – a man who daily manipulates what we might call physical mind-stuff – the brain itself – is also awed and mystified by its powers:

There are one hundred billion nerve cells in our brains. Does each one have a fragment of consciousness within it? How many nerve cells do we require to be conscious or to feel pain? Or does consciousness and thought reside in the electrochemical impulses that join these billions of cells together? Is a snail aware? Does it feel pain when you crush it underfoot? Nobody knows.

The same sense of mystery and wonder as Gardner’s; but approached from a different perspective:

Neuroscience tells us that it is highly improbable that we have souls, as everything we think and feel is no more or no less than the electrochemical chatter of our nerve cells… Many people deeply resent this view of things, which not only deprives us of life after death but also seems to downgrade thought to mere electrochemistry and reduces us to mere automata, to machines. Such people are profoundly mistaken, since what it really does is upgrade matter into something infinitely mysterious that we do not understand.

Henry Marsh

This of course is the perspective of a practical man – one who is emphatically working at the coal face of neurology, and far more familiar with the actual material of brain tissue than armchair speculators like me. While I was reading his book, although deeply impressed by this man’s humanity and integrity, what disrespectfully came to mind was a piece of irreverent humour once told to me by a director of a small company I used to work for, one closely connected to the medical industry. It was a sort of handy cut-out-and-keep guide to the different types of medical practitioner:

Surgeons do everything and know nothing. Physicians know everything and do nothing. Psychiatrists know nothing and do nothing.  Pathologists know everything and do everything – but the patient’s dead, so it’s too late.

Grossly unfair to all of them, of course, but nonetheless funny, and perhaps containing a certain grain of truth. Marsh, belonging to the first category, perhaps embodies some of the aversion to dry theory that this caricature hints at: what matters to him ultimately, as a surgeon, is the sheer down-to-earth physicality of his work, guided by the gut instincts of his humanity. We hear from him about some members of his profession who seem aloof from the enormity of the dangers it embodies, and who seem able to proceed calmly and objectively, with what he sees as almost the detachment of the psychopath.

Common ground

What Marsh and Gardner seem to have in common is the instinct that dry, objective reasoning only takes you so far. Both trust the power of their own emotions, and their sense of awe. Both, I feel, are attempting to articulate the same insight, but from widely differing standpoints.

Two passages, one from each book, seem to crystallize both the similarities and differences between the respective approaches of the two men, both of whom seem to me admirably sane and perceptive, if radically divergent in many respects. First Gardner, emphasising in a Wittgensteinian way that describing how things appear to be is perhaps a more useful activity than attempting to pursue any ultimate reasons:

There is a road that joins the empirical knowledge of science with the formal knowledge of logic and mathematics. No road connects rational knowledge with the affirmations of the heart. On this point fideists are in complete agreement. It is one of the reasons why a fideist, Christian or otherwise, can admire the writings of logical empiricists more than the writings of philosophers who struggle to defend spurious metaphysical arguments.

And now Marsh – mystified, as we have seen, as to how the brain-stuff he manipulates daily can be the seat of all experience – having a go at reading a little philosophy in the spare time between sessions in the operating theatre:

As a practical brain surgeon I have always found the philosophy of the so-called ‘Mind-Brain Problem’ confusing and ultimately a waste of time. It has never seemed a problem to me, only a source of awe, amazement and profound surprise that my consciousness, my very sense of self, the self which feels as free as air, which was trying to read the book but instead was watching the clouds through the high windows, the self which is now writing these words, is in fact the electrochemical chatter of one hundred billion nerve cells. The author of the book appeared equally amazed by the ‘Mind-Brain Problem’, but as I started to read his list of theories – functionalism, epiphenomenalism, emergent materialism, dualistic interactionism or was it interactionistic dualism? – I quickly drifted off to sleep, waiting for the nurse to come and wake me, telling me it was time to return to the theatre and start operating on the old man’s brain.

I couldn’t help noticing that these two men – one unconventionally religious and the other not religious at all – seem between them to embody those twin traditional pillars of the religious life: faith and works.

On Being Set Free

Commuting days until retirement: 133

The underlying theme of this blog is retirement, and it will be fairly obvious to most of my readers by now – perhaps indeed to all three of you – that I’m looking forward to it. It draws closer; I can almost hear the ‘Happy retirement’ wishes from colleagues – some expressed perhaps through ever-so-slightly gritted teeth as they look forward to many more years in harness, while I am put out to graze. But of course there’s another side to that: they will also be keeping silent about the thought that being put out to graze also carries with it the not too distant prospect of the knacker’s yard – something they rarely think about in relation to themselves.

Because in fact the people I work with are generally a lot younger than I am – in a few cases younger than my children. No one in my part of the business has ever actually retired, as opposed to leaving for another job. My feeling is that to stand up and announce that I am going to retire will be to introduce something alien and faintly distasteful into the prevailing culture, like telling everyone about your arthritis at a 21st birthday party.

The revolving telescope

For most of my colleagues, retirement,  like death, is something that happens to other people. In my experience, it’s around the mid to late 20s that such matters first impinge on the consciousness – indistinct and out of focus at first, something on the edge of the visual field. It’s no coincidence, I think, that it’s around that same time that one’s perspective on life reverses, and the general sense that you’d like to be older and more in command of things starts to give way to an awareness of vanishing youth. The natural desire for what is out of reach reorientates its outlook, swinging through 180 degrees like a telescope on a revolving stand.

But I find that, having reached the sort of age I am now, it doesn’t do to turn your back on what approaches. It’s now sufficiently close that it is the principal factor defining the shape of the space you now have available in which to organise your life, and you do much better not to pretend it isn’t there, but to be realistically aware. We have all known those who nevertheless keep their backs resolutely turned, and they often cut somewhat pathetic figures: a particular example I remember was a man (who would almost certainly be dead by now) who didn’t seem to accept his failing prowess at tennis as an inevitable corollary of age, but rather as a series of inexplicable failures that he should blame himself for. And there are all those celebrities you see with skin stretched ever tighter over their facial bones as they bring in the friendly figure of the plastic surgeon to obscure the view of where they are headed.

Perhaps Ray Kurzweil, who featured in my previous post, is another example, with his 250 supplement tablets each day and his faith in the abilities of technology to provide him with some sort of synthetic afterlife. Given that he has achieved a generous measure of success in his natural life, he perhaps has less need than most of us to seek a further one; but maybe it works the other way, and a well-upholstered ego is more likely to feel that a continued existence is its right.

Enjoying the view

Happiness is not the preserve of the young (Wikimedia Commons)

But the fact is that for most of us the impending curtailment of our time on earth brings a surprising sense of freedom. With nothing left to strive for – no anxiety about whether this or that ambition will be realised – some sort of summit is achieved. The effort is over,  and we can relax and enjoy the view. More than one survey has found that people in their seventies are nowadays collectively happier than any other age group: here are reports of three separate studies between 2011 and 2014, in Psychology Today, The Connexion, and the Daily Mail. Those adverts for pension providers and so on, showing apparently radiant wrinkly couples feeding the ducks with their grandchildren, aren’t quite as wide of the mark as you might think.

Speaking for myself, I’ve never been excessively troubled by feelings of ambition, and have probably enjoyed a relatively stress-free, if perhaps less prosperous, life as a result. And the prospect of an existence where I am no longer even expected to show such aspirations is part of the attraction of retirement. But of course there remain those for whom the fact of extinction gives rise to wholly negative feelings, but who are at the same time brave enough to face it fair and square, without any psychological or cosmetic props. A prime example in recent literature is Philip Larkin, who seems to make frequent appearances in this blog. While famously afraid of death, he wrote luminously about it. Here, in his poem The Old Fools he evokes images of the extreme old age which he never, in fact, reached himself:

Philip Larkin

Philip Larkin (Fay Godwin)

Perhaps being old is having lighted rooms
Inside your head, and people in them, acting.
People you know, yet can’t quite name; each looms
Like a deep loss restored, from known doors turning,
Setting down a lamp, smiling from a stair, extracting
A known book from the shelves; or sometimes only
The rooms themselves, chairs and a fire burning,
The blown bush at the window, or the sun’s
Faint friendliness on the wall some lonely
Rain-ceased midsummer evening.

Dream and reality seem to fuse at this ultimate extremity of conscious experience as Larkin portrays it; and it’s the snuffing out of consciousness that a certain instinct in us finds difficult to take – indeed, to believe in. Larkin, by nature a pessimist, certainly believed in it, and dreaded it. But cultural traditions of many kinds have not accepted extinction as inevitable: we are not obliviously functioning machines but the subjects of experiences like the ones Larkin writes about. As such, it has been held, we have immortal souls which transcend the gross physical world – so why should we not survive death? (Indeed, according to some creeds, why should we not have existed before birth?)

Timid hopes

Well, whatever immortal souls might be, I find it difficult to make out a case for individual survival, and this is perhaps the majority view in the secular culture I inhabit. It seems pretty clear to me that my own distinguishing characteristics are indissolubly linked to my physical body: damage to the brain, we know, can change the personality, and perhaps rob us of our memories and past experience, which most quintessentially define us as individuals. And yet, even though our consciousness can be temporarily wiped out by sleep or anaesthetics, there remains the sense (for me, anyway) that, since we have no notion whatever of how to account for it in physical terms, some aspect of our experience might be independent of our bodily existence.

You may or may not accept both of these beliefs – the temporality of the individual and the transcendence of consciousness. But if you do,  then the possibility seems to arise of some kind of disembodied,  collective sentience,  beyond our normal existence. And this train of thought always reminds me of the writer Arthur Koestler, who died by suicide in 1983 at the age of 77. An outspoken advocate of voluntary euthanasia, he’d been suffering in later life from Parkinson’s disease, and had then contracted a progressive, incurable form of leukaemia. His suicide note (which turned out to have been written several months before his death) included the following passage:

I wish my friends to know that I am leaving their company in a peaceful frame of mind, with some timid hopes for a de-personalised after-life beyond due confines of space, time and matter and beyond the limits of our comprehension. This ‘oceanic feeling’ has often sustained me at difficult moments, and does so now, while I am writing this.

Death sentence

In fact Koestler had, since he was quite young, been more closely acquainted with death than most of us. Born in Hungary, during his earlier career as a journalist and political writer he twice visited Spain during its civil war in the 1930s. He made his first visit as an undercover investigator of the Fascist movement, being himself at that time an enthusiastic supporter of communism. A little later he returned to report from the Republican side, but was in Malaga when it was captured by Fascist troops. By then Franco had come to know of his anti-fascist writing, and he was imprisoned in Seville under sentence of death.

Koestler portrayed on the cover of the book

In his account of this experience, Dialogue with Death, he describes how prisoners would try to block their ears to avoid the nightly sound of a telephone call to the prison, when a list of prisoner names would be dictated and the men later led out and shot. His book is illuminating on the psychology of these conditions,  and the violent emotional ups and downs he experienced:

One of my magic remedies was a certain quotation from a certain work of Thomas Mann’s; its efficacy never failed. Sometimes, during an attack of fear, I repeated the same verse thirty or forty times, for almost an hour, until a mild state of trance came on and the attack passed. I knew it was the method of the prayer-mill, of the African tom-tom, of the age-old magic of sounds. Yet in spite of my knowing it, it worked…
I had found out that the human spirit is able to call upon certain aids of which, in normal circumstances, it has no knowledge, and the existence of which it only discovers in itself in abnormal circumstances. They act, according to the particular case, either as merciful narcotics or ecstatic stimulants. The technique which I developed under the pressure of the death-sentence consisted in the skilful exploitation of these aids. I knew, by the way, that at the decisive moment when I should have to face the wall, these mental devices would act automatically, without any conscious effort on my part. Thus I had actually no fear of the moment of execution; I only feared the fear which would precede that moment.

That there are emotional ‘ups’ at all seems surprising,  but later he expands on one of them:

Often when I wake at night I am homesick for my cell in the death-house in Seville and, strangely enough, I feel that I have never been so free as I was then. This is a very strange feeling indeed. We lived an unusual life on that patio; the constant nearness of death weighed down and at the same time lightened our existence. Most of us were not afraid of death, only of the act of dying; and there were times when we overcame even this fear. At such moments we were free – men without shadows, dismissed from the ranks of the mortal; it was the most complete experience of freedom that can be granted a man.

Perhaps, in a diluted, much less intense form, the happiness of the over 70s revealed by the surveys I mentioned has something in common with this.

Koestler was possibly the only writer of the front rank ever to be held under sentence of death, and the experience informed his novel Darkness at Noon. It is the second in a trilogy of politically themed novels, and its protagonist, Rubashov, has been imprisoned by the authorities of an unnamed totalitarian state which appears to be a very thinly disguised portrayal of Stalinist Russia. Rubashov has been one of the first generation of revolutionaries in a movement which has hardened into an authoritarian despotism, and its leader, referred to only as ‘Number One’, is apparently eliminating rivals. Worn down by the interrogation conducted by a younger, hard-line apparatchik, Rubashov comes to accept that he has somehow criminally acted against ‘the revolution’, and eventually goes meekly to his execution.

Shades of Orwell

By the time of writing the novel, Koestler, like so many intellectuals of that era, had made the journey from an initial enthusiasm for Soviet communism to disillusion with, and opposition to, it. And reading Darkness at Noon, I was of course constantly reminded of Orwell’s Nineteen Eighty-Four, and the capitulation of Winston Smith as he comes to love Big Brother. Darkness at Noon predates 1984 by nine years, and nowadays has been somewhat eclipsed by Orwell’s much better-known novel. The two authors had met briefly during the Spanish civil war, where Orwell was actively involved in fighting against fascism, and met again and discussed politics around the end of the war. It seems clear that Orwell, having written his own satire on the Russian revolution in Animal Farm, eventually wrote 1984 under the conscious influence of Koestler’s novel. But they were of course very different characters: you get the feeling that to Orwell, with his both-feet-on-the-ground Englishness, Koestler might have seemed a rather flighty and exotic creature.

Orwell (aka Eric Blair) from the photo on his press pass (NUJ/Wikimedia Commons)

In fact,  during the period between the publications of Darkness at Noon and 1984, Orwell wrote an essay on Arthur Koestler – probably while he was still at work on Animal Farm. His view of Koestler’s output is mixed: on one hand he admires Koestler as a prime example of the continental writers on politics whose views have been forged by hard experience in this era of political oppression – as opposed to English commentators who merely strike attitudes towards the turmoil in Europe and the East, while viewing it from a relatively safe distance. Darkness at Noon he regards as a ‘masterpiece’ – its common ground with 1984 is not, it seems, a coincidence. (Orwell’s review of Darkness at Noon in the New Statesman is also available.)

On the other hand he finds much of Koestler’s work unsatisfactory, a mere vehicle for his aspirations towards a better society. Orwell quotes Koestler’s description of himself as a ‘short-term pessimist’,  but also detects a utopian undercurrent which he feels is unrealistic. His own views are expressed as something more like long-term pessimism, doubting whether man can ever replace the chaos of the mid-twentieth century with a society that is both stable and benign:

Nothing is in sight except a welter of lies, hatred, cruelty and ignorance, and beyond our present troubles loom vaster ones which are only now entering into the European consciousness. It is quite possible that man’s major problems will NEVER be solved. But it is also unthinkable! Who is there who dares to look at the world of today and say to himself, “It will always be like this: even in a million years it cannot get appreciably better?” So you get the quasi-mystical belief that for the present there is no remedy, all political action is useless, but that somewhere in space and time human life will cease to be the miserable brutish thing it now is. The only easy way out is that of the religious believer, who regards this life merely as a preparation for the next. But few thinking people now believe in life after death, and the number of those who do is probably diminishing.

In death as in life

Orwell’s remarks neatly return me to the topic I have diverged from. If we compare the deaths of the two men, they seem to align with their differing attitudes in life. Both died in the grip of a disease – Orwell succumbing to tuberculosis after his final, gloomy novel was completed, and Koestler escaping his leukaemia by suicide but still expressing ‘timid hopes’.

After the war Koestler had adopted England as his country and henceforth wrote only in English – most of his previous work had been in German. In being allowed a longer life than Orwell to pursue his writing, he moved on from politics to write widely in philosophy and the history of ideas, although without ever really becoming a member of the intellectual establishment. These are areas which you feel would always have been outside the range of the more down-to-earth Orwell, who was strongly moral, but severely practical. Orwell goes on to say, in the essay I quoted: ‘The real problem is how to restore the religious attitude while accepting death as final.’ This very much reflects his attitudes – he habitually enjoyed attending Anglican church services, but without being a believer. He continues, epigrammatically:

Men can only be happy when they do not assume that the object of life is happiness. It is most unlikely, however, that Koestler would accept this. There is a well-marked hedonistic strain in his writings, and his failure to find a political position after breaking with Stalinism is a result of this.

Again, we strongly feel the tension between their respective characters: Orwell, with his English caution, and Koestler with his continental adventurism. In fact, Koestler had a reputation as something of an egotist and aggressive womaniser. Even his suicide reflected this: it was a double suicide with his third wife, who was over 20 years younger than he was and in good health. Her accompanying note explained that she couldn’t continue her life without him. Friends confirmed that she had entirely subjected her life to his: but to what extent this was a case of bullying,  as some claimed, will never be known.

Of course there was much common ground between the two men: both were always on the political left, and both,  as you might expect, were firmly opposed to capital punishment: anyone who needs convincing should read Orwell’s autobiographical essay A Hanging. And Koestler wrote a more prosaic piece – a considered refutation of the arguments for judicial killing – in his book Reflections on Hanging; it was written in the 1950s, when, on Koestler’s own account, some dozen hangings were occurring in Britain each year.

But while Orwell faced his death stoically, Koestler continued his dalliance with the notion of some form of hereafter; you feel that, as with Kurzweil, a well-developed ego did not easily accept the thought of extinction. In writing this post, I discovered that he had been one of a number of intellectual luminaries who contributed to a collection of essays under the title Life after Death, published in the 1970s. Keen to find a more detailed statement of his views, I sought it out, but found his piece rather disappointing. First I’ll sketch in a bit of background to clarify where I think he is coming from.

Back in Victorian times there was much interest in evidence of ‘survival’ – seances and table-rapping sessions were popular, and fraudulent mediums were prospering. Reasons for this are not hard to find: traditional religion, while strong, faced challenges. Steam-powered technology was burgeoning, the world increasingly seemed to be a wholly mechanical affair,  and Darwinism had arrived to encourage the trend towards materialism. In 1882 the Society for Psychical Research was formed, becoming a focus both for those who were anxious to subvert the materialist world view, and those who wanted to investigate the phenomena objectively and seek intellectual clarity.

But it wasn’t long before the revolution in physics, with relativity and quantum theory, exploded the mechanical certainties of the Victorians. At the same time millions suffered premature deaths in two world wars, giving ample motivation to believe that those lost somehow still existed and could maybe even be contacted.

Arthur Koestler

Koestler in later life (Eric Koch/Wikimedia Commons)

This seems to be the background against which Koestler’s ideas about the possibility of an afterlife had developed. He leans a lot on the philosophical writings of the quantum physicist Erwin Schrödinger, and seeks to base a duality of mind and matter on the wave/particle duality of quantum theory. There’s a lot of talk about psi fields and suchlike – the sort of terminology which was already sounding dated at the time he was writing. The essay seemed to me to be rather backward-looking, sitting more comfortably with the inchoate fringe beliefs of the mid-20th century than the confident secularism of Western Europe today.

A rebel to the end

I think Koestler was well aware of the way things were going, but with characteristic truculence reacted against them. He wrote a good deal on topics that clash with mainstream science, such as the significance of coincidence, and in his will he left a legacy to establish a department of parapsychology, which was duly set up at Edinburgh University and still exists.

This was clearly a deliberate attempt to cock a snook at the establishment, and although he was not an attractive character in many ways, I find this defiant stance makes me warm to him a little. While I am sure I would have found Orwell the more decent and congenial to know personally, Koestler is the more intellectually exciting of the two. I think Orwell might have found Koestler’s notion of the sense of freedom when facing death difficult to understand – though perhaps that would have changed had he survived into his seventies. And in a general sense I share Koestler’s instinct that in human consciousness there is far more than we have yet been able to, as it were, get our minds around.

Retirement, for me, will certainly bring freedom – not only freedom from the strained atmosphere of worldly ambition and corporate business-speak (itself an Orwellian development) but more of my own time to reflect further on the matters I’ve spoken of here.

A Singular Notion

Commuting days until retirement: 168

I’ve been reading about the future. Well, one man’s idea of the future, anyway – and of course when it comes to the future, people’s ideas about it are really all we can have. This particular writer obviously considers his own ideas to be highly upbeat and optimistic, but others may view them with apprehension, if not downright disbelief – and I share some of their reservations.

Ray Kurzweil
(Photo: Roland Dobbins / Wikimedia Commons)

The man in question is Ray Kurzweil, and it has to be said that he is massively well informed – about the past and the present, anyway; his claims to knowledge of the future are what I want to examine. He is a Director of Engineering at Google, but has also founded any number of high-tech companies and is credited with a big part in inventing flatbed scanners, optical character recognition, speech synthesis and speech recognition. On top of all this, he is quite a philosopher, and has carried on debates with other philosophers about the basis of his ideas – we hear about some of these debates in the book I’ve been reading.

The book is The Singularity is Near, and its length (500 dense pages, excluding notes) is partly responsible for the elapsed time since my last substantial post. Kurzweil is engagingly enthusiastic about his enormous stock of knowledge,  so much so that he is unable to resist laying the exhaustive details of every topic before you. Repeatedly you find yourself a little punch drunk under the remorseless onslaught of facts – at which point he has an engaging way of saying ‘I’ll be dealing with that in more detail in the next chapter.’ You feel that perhaps quite a bit of the content would be better accommodated in endnotes – were it not for the fact that nearly half the book consists of endnotes as it is.

Density

To my mind, the argument of the book has two principal premises, the first of which I’d readily agree to, but the second of which seems to me highly dubious. The first idea is closely related to the ‘Singularity’ of the title. A singularity is a concept imported from mathematics, but is perhaps more familiar in the context of black holes and the big bang. In a black hole, enormous amounts of matter become so concentrated under their own gravitational force that they shrink to a point of, well, as far as we can tell, infinite density. (At this point I can’t help thinking of Kurzweil’s infinitely dense prose style – perhaps it is suited to his topic.) But what’s important about this for our present purposes is the fact that some sort of boundary has been crossed: things are radically different, and all the rules and guidelines that we have previously found useful in investigating how the world works no longer apply.

To understand how this applies, by analogy,  to our future, we have to introduce the notion of exponential growth – that is, growth not by regular increments but by multiples. A well known illustration of the surprising power of this is the old fable of the King who has a debt of gratitude to one of his subjects, and asks what he would like as a reward. The man asks for one grain of wheat corresponding to the first square of the chess board, two for the second, four for the third,  and so on up to the sixty-fourth, doubling each time. At first the King is incredulous that the man has demanded so little, but of course soon finds that the entire output of his country would fall woefully short of what is asked. (The number of grains works out at 18,446,744,073,709,551,615 – this is of a similar order to,  say,  the estimated number of grains of sand in the world.)
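
Just to check the arithmetic behind that figure – a minimal sketch of my own, not part of the fable – the total is simply the sum of a doubling series over the 64 squares:

```python
# Sum the grains over the 64 squares: 1 + 2 + 4 + ... + 2**63.
total = sum(2 ** square for square in range(64))
print(total)           # 18446744073709551615, i.e. 2**64 - 1
print(f"{total:.1e}")  # about 1.8e+19 -- the same order of magnitude as
                       # common estimates of the world's grains of sand
```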

Such unexpected expansion is the hallmark of exponential growth – however gradually it rises at first, eventually the curve will always accelerate explosively upward. Kurzweil devotes many pages to arguing how the advance of human technical capability follows just such a trajectory. One frequently quoted example is what has become known as Moore’s law: in 1965 a co-founder of the chip company Intel, Gordon Moore, made an extrapolation from what had then been achieved and asserted that the number of processing elements that could be fitted on to a chip of a given size would double every year.  This was later modified to two years, but has nevertheless continued exponentially, and there is no reason, short of global calamity, that it will stop in the foreseeable future. The evidence is all around us: thirty years ago, equipment with the power of a modern smartphone would have been a roomful of immobile cabinets costing thousands of pounds.

Accelerating returns

That’s of course just one example; taking a broader view we could look, as Kurzweil does, at the various revolutions that have transformed human life over time. The stages of the agricultural revolution – the transition from the hunter-gatherer way of life, via subsistence farming, to the systematic growth and distribution of food – took many centuries, or even millennia. The industrial revolution could be said to have been even greater in its effects over a mere century or two, while the digital revolution we are currently experiencing has made radical changes in just the last thirty years or so. Kurzweil argues that each of these steps forward provides us with the wherewithal to effect further changes even more rapidly and efficiently – hence the exponential nature of our progress. He refers to this as ‘The Law of Accelerating Returns’.

So if we are proceeding by ever-increasing steps forward, what is our destiny – what will be the nature of the exponential explosion that we must expect? This is the burden of Kurzweil’s book, and the ‘singularity’ after which nothing will be the same. His projection of our progress towards this point is based on a triumvirate of endeavours which he refers to confidently with the acronym GNR: Genetics, Nanotechnology and Robotics. Genetics will continue its progress – exponentially – in finding cures for the disorders which limit our life span, as well as disentangling many of the mysteries of how we – and our brains – develop. For Nanotechnology,  Kurzweil has extensive expectations. Tiny, ultimately self-reproducing machines could be sent out into the world to restructure matter and turn innocent lumps of rock into computers with so far undreamt of processing power. And they could journey inwards,  into our bodies, ferreting out cancer cells and performing all sorts of repairs that would be difficult or impossible now. Kurzweil’s enthusiasm reaches its peak when he describes these microscopic helpers travelling round the blood vessels of our brains, scanning their surroundings and reporting back over wi-fi on what they find. This would be part of the grand project of ‘reverse engineering the brain’.

And with the knowledge gained thus, the third endeavour,  Robotics, already enlisted in the development of the nanobots now navigating our brains, would come into its own. Built on many decades of computing experience,  and enhanced by an understanding of how the human brain works, a race of impossibly intelligent robots, which nevertheless boast human qualities, would be born. Processing power is still of course expanding exponentially, adopting any handy lumps of rock as its substrate, and Kurzweil sees it expanding across the universe as the possibilities of our own planet are exhausted.

Cyborgs

And so what of us poor, limited humans? We don’t need to be left behind, or disposed of somehow by our vastly more capable creations, according to Kurzweil. Since the functionality of our brains, both in general and on an individual basis, can be replicated within the computing power which is all around us, he envisages us enhancing ourselves by technology. Either we develop the ability to ‘upload the patterns of an actual human into a suitable non-biological, thinking substrate’, or we could simply continue the development of devices like neural implants until nanotechnology is actively extending and even replacing our biological faculties. ‘We will then be cyborgs,’ he explains, and ‘the nonbiological portion of our intelligence will expand its powers exponentially.’

If some of the above makes you feel distinctly queasy, then you’re not alone. A number of potential problems, even disasters, will have occurred to you. But Kurzweil is unfailingly upbeat; while listing a number of ways that things could go wrong, he reasons that all of them can be avoided. And in a long section at the end he lists many objections by critics and provides answers to all of them.

Meanwhile, back in the future, the singularity is under way; and perhaps the most surprising aspect of it is how soon Kurzweil sees it happening. Basing his prediction on an exhaustive analysis, he sets it at: 2045. Not a typo on my part,  but a date well within the lifetime of many of us. I’ll be 97 by then if I’m alive at all,  which I don’t expect to be, exponential advances in medicine notwithstanding. It so happens that Kurzweil himself was born in the same year as me; and as you might expect, this energetic man fully expects to see the day – indeed to be able to upload himself and continue into the future. He tells us how, once relatively unhealthy and suffering from type II diabetes, he took himself in hand ‘from my perspective as an inventor’. He immersed himself in the medical literature, and with the collaboration of a medical expert, aggressively applied a range of therapies to himself. At the time of writing the book, he proudly relates, he was taking 250 supplement pills each day and a half-dozen intravenous nutritional therapies per week. As a result he was judged to have attained a biological age of 40, although he was then 56 in calendar years.

This also brings us to the second – to my mind rather more dubious – plank upon which his vision of the future rests. As we have seen, the best prospects for humanity, he claims, lie not in the messy and unreliable biological packages which have taken us thus far, but in existence as entities somehow (dis)embodied in the substrate of the computing power which is expanding to fill ever more of the known universe.

Dialogue

Before examining this proposition further, I’d like to mention that, while Kurzweil’s book is hard going at times, it does have some refreshing touches. One of these is the series of dialogues introduced at the end of chapters, where Kurzweil himself (‘Ray’) discusses the foregoing material with a variety of characters. These include, among others, a woman from the present day and her uploaded self from a hundred years hence, as well as various luminaries from the past and present: Ned Ludd (the original Luddite from the 18th century), Charles Darwin, Sigmund Freud and Bill Gates. One nicely conceived dialogue involves a couple of primordial bacteria discussing the pros and cons of clumping together and giving up some of their individuality in order to form larger organisms; we are implicitly invited to compare the reluctance of one of them to enter a world full of greater possibilities with our own apprehension about the singularity.

So in the same spirit, I have taken the opportunity here to discuss the matter with Kurzweil directly, and I suppose I am going to be the present day equivalent of the reluctant bacterium. (Most of the claims he makes below are not put into his mouth by me, but come from the book.)

DUNCOMMUTIN: Ray,  thank you for taking the trouble to visit my blog.

RAY: That’s my pleasure.

DUNCOMMUTIN: In the book you provide answers to a number of objections – many of them technically based ones which address whether the developments you outline are possible at all. I’ll assume that they are, but raise questions about whether we should really want them to happen.

RAY: OK. You won’t be the first to do that – but fire away.

DUNCOMMUTIN: Well, “fire away” is an apt phrase to introduce my first point: you have some experience of working on defence projects, and this is reflected in some of the points you make in the book. At one point you remark that ‘Warfare will move toward nanobot-based weapons, as well as cyber-weapons’. With all this hyper-intelligence at the service of our brains, won’t some of it reach the conclusion that war is a pretty stupid way of conducting things?

RAY: Yes – in one respect you have a point. But look at the state of the world today. Many people think that the various terrorist organisations that are gaining ever higher profiles pose the greatest threat to our future. Their agendas are mostly based on fanaticism and religious fundamentalism. I may be an optimist, but I don’t see that threat going away any time soon. Now there are reasoned objections to the future that I’m projecting, like your own – I welcome these, and view such debate as important.  But inevitably there will be those whose opposition will be unreasonable and destructive. Most people today would agree that we need armed forces to protect our democracy and,  indeed, our freedom to debate the shape of our future. So it follows that,  as we evolve enhanced capabilities, we should exploit them to counter those threats. But going back to your original point – yes,  I have every hope that the exponentially increasing intelligence we will have access to will put aside the possibility of war between technologically advanced nations. And indeed,  perhaps the very concept of a nation state might eventually disappear.

DUNCOMMUTIN: OK,  that seems reasonable. But I want to look further at the notion of each of us being part of some pan-intelligent entity. There are so many potential worries here. I’ll leave aside the question of computer viruses and cyber-warfare, which you deal with in the book. But can you really see this future being adopted wholesale? Before going into some of the reservations I have, I’d want to say that many will share them.

RAY: Imagine that we have reached that time – not so far in the future. I and like-minded people will be already taking advantage of the opportunities to expand our intelligence, while,  if I may say so, you and your more conservative-minded friends will have not. But expanded intelligence makes you a better debater.  Who do you think will win the argument?

DUNCOMMUTIN: Now you’re really worrying me. Being a better debater isn’t the same as being right. Isn’t this just another way of saying ‘might is right’ – the philosophy of the dictator down the ages?

RAY: That’s a bit unfair – we’re not talking about coercion here, but persuasion – a democratic concept.

DUNCOMMUTIN: Maybe, but it sounds very much as if, with all this overwhelming computer power, persuasion will very easily become coercion.

RAY: Remember that it is from the most technologically advanced nations that these developments will be initiated – and they are democracies. I see democracy and the right of choice being kept as fundamental principles.

DUNCOMMUTIN: You might,  Ray – but what safeguards will we have to retain freedom of choice and restrain any over-zealous technocrats? However I won’t pursue this line further. Here’s another thing that bothers me.  There’s an old saying: ‘To err is human,  but it takes a computer to really foul things up.’ If you look at the history of recent large scale IT projects, particularly in the public sector, you will come across any number of expensive flops that had to be abandoned. Now what you are proposing could be described, it seems to me, as the most ambitious IT project yet. What could happen if I commit the functioning of my own brain to a system which turns out to have serious flaws?

RAY: The problems you are referring to are associated with what we will come to see as the embryonic stage – the dark ages, if you will – of computing. It’s important to recognize that the science of computing is advancing by leaps and bounds, and that software exists which assists in the design of further software. Ultimately program design will be the preserve, not of sweaty pony-tailed characters slaving away in front of screens, but of proven self-organising software entities whose reliability is beyond doubt. Once again, as software principles are developed, proven and applied to the design of further software, we will see exponential progression in this area.

DUNCOMMUTIN: That reassures me in one way, but gives me more cause for concern in another. I am thinking of what I call the coffee machine scenario.

RAY: Coffee machine?

DUNCOMMUTIN: Yes.  In the office where I work there are state-of-the-art coffee machines,  fully automated. You only have to touch a few icons on a screen to order a cup of coffee, tea, or other drink just as you like it, with the right proportions of milk,  sugar,  and so on. The drink you specify is then delivered within seconds. The trouble is,  it tastes pretty ghastly, rendering the whole enterprise effectively pointless. What I am suggesting is that, given all the supreme and unimaginably complex technical wizardry that goes into our new existence, it’s going to be impossible for us humans to keep track of where it’s all going; and the danger is that the point will be missed: the real essence of ourselves will be lost or destroyed.

RAY: OK,  I think I see where you’re going. First of all,  let me reassure you that nanoengineered coffee will be better than anything you’ve tasted before! But, to get to the substantial point, you seem a bit vague about what this ‘essence’ is. Remember that what I am envisaging is a full reverse engineering of the human brain, and indeed body. The computation which results would mirror everything we think and feel. How could this fail to include what you see as the ‘essence’? Our brains and bodies are – in essence – computing processes; computing underlies the foundations of everything we care about, and that won’t be changing.

DUNCOMMUTIN: Well, I could find quite a few people who would say that computing underlies everything they hate – but I accept that’s a slightly frivolous comment. To zero in on this question of essence, let’s look at one aspect of human life – sense of humour. Humour comes at least partly under the heading of ’emotion’, and like other emotions,  it involves bodily functions,  most importantly in this case laughing. Everyone would agree that it’s a pleasant and therapeutic experience.

RAY: Let me jump in here to point out that while many bodily functions may no longer be essential in a virtual computation-driven world, that doesn’t mean they have to go. Physical breathing, for example, won’t be necessary, but if we find breathing itself pleasurable, we can develop virtual ways of having this sensual experience. The same goes for laughing.

DUNCOMMUTIN: But it’s not so much the laughing itself as what gives rise to it which interests me. Humour often involves the apprehension of things being wrong, or other than they should be – a gap between an aspiration and what is actually achieved. In this perfect virtual world, it seems as if such things will be eliminated. Maybe we will find ourselves still able to laugh virtually – but have nothing to virtually laugh at.

RAY: You’ll remember how I’ve said in my book that in such a world there will be limitless possibilities when it comes to entertainment and the arts. Virtual or imagined worlds in which anything can happen,  and in which things can go wrong, could be summoned at will. Such worlds could be immersive, and seem utterly real. These could provide all the entertainment and humour you could ever want.

DUNCOMMUTIN: There’s still something missing,  to my mind. Irony,  humour, artistic portrayals, whatever – all these have the power that they do because they are rooted in gritty reality, not in something we know to have been erected as some form of electronic simulation. In the world you are portraying it seems to me that everything promises to have a thinned-out,  ersatz quality – much like the coffee I mentioned a little while back.

RAY: Well if you really feel that way,  you may have to consider whether it’s worth this small sacrifice for the sake of eliminating hunger, disease, and maybe death itself.

DUNCOMMUTIN: Eliminating death – that raises a whole lot more questions, and if we go into them this blog entry will never finish. I have just one more point I would like to put to you: the question of consciousness, and how that can be preserved in a new substrate or mode of existence. I have to say I was impressed to see that,  unlike many commentators, you don’t dodge the difficulty of this question, but face it head-on.

RAY: Thank you. Yes, the difficulty is that, since it concerns subjective experience, this is the one matter that can’t be resolved by objective observation. It’s not a scientific question but a philosophical one – indeed,  the fundamental philosophical question.

DUNCOMMUTIN: Yes – but you still evidently believe that consciousness would transfer to our virtual, disembodied life. You cross swords with John Searle, whose Chinese Room argument readers of this blog will have come across. His view that consciousness is a fundamentally biological function that could not exist in any artificial substrate is not compatible with your envisaged future.

RAY: Indeed. The Chinese Room argument I think is tautologous – a circular argument – and I don’t see any basis for his belief that consciousness is necessarily biological.

DUNCOMMUTIN: I agree with you about the supposed biological nature of consciousness – perhaps for different reasons – but not about the Chinese Room. However there isn’t space to go into that here. What I want to know is, what makes you confident that your virtualised existence will be a conscious one – in other words,  that you will actually have future experiences to look forward to?

RAY: I’m a patternist. That is, it seems to me that conscious experience is an inevitable emergent property of a certain pattern of functionality, in terms of relationships between entities and how they develop over time. Our future technology will be able to map the pattern of these relationships to any degree of detail, and, by virtue of that,  consciousness will be preserved.

DUNCOMMUTIN: This seems to me to be a huge leap of faith. Is it not possible that you are mistaken, and that your transfer to the new modality will effectively bring about your death? Or worse, some form of altered, and not necessarily pleasant, experience?

RAY: On whether there will be any subjective experience at all, if the ‘pattern’ theory is not correct then I know of no other coherent one – and yes,  I’m prepared to stake my future existence on that. On whether the experience will be altered in some way: as I mentioned, we will be able to model brain and body patterns to any degree of detail, so I see no reason why that future experience should not be of the same quality.

DUNCOMMUTIN: Then the big difference is that I don’t see the grounds for having the confidence that you do, and would prefer to remain as my own imperfect,  mortal self. Nevertheless, I wish you the best for your virtual future – and thanks again for answering my questions.

RAY: No problem – and if you change your mind,  let me know.


The book: Kurzweil, Ray: The Singularity is Near,  Viking Penguin, 2005

For more recent material see: www.singularity.com and www.kurzweilai.net

Read All About It (part 1)

Commuting days until retirement: 300

Imagine a book. It’s a thick, heavy, distinguished-looking book, with an impressive tooled leather binding, gilt-trimmed, and it has marbled page edges. A glance at the spine shows it to be a copy of Shakespeare’s complete works. It must be like many such books to be found on the shelves of libraries or well-to-do homes around the world at the present time, although it is not well preserved. The binding is starting to crumble, and much of the gilt lettering can no longer be made out. There’s also something particularly unexpected about this book, which accounts for the deterioration. Let your mental picture zoom out, and you see, not a set of book-laden shelves, or a polished wood table bearing other books and papers, but an expanse of greyish dust, bathed in bright, harsh light. The lower cover is half buried in this dust, to a depth of an inch or so, and some is strewn across the front, as if the book had been dropped or thrown down. Zoom out some more, and you see a rocky expanse of ground, stretching away to what seems like a rather close, sharply defined horizon, separating this desolate landscape from a dark sky.

Yes, this book is on the moon, and it has been the focus of a long-standing debate between my brother and sister-in-law. I had vaguely remembered one of them mentioning this some years back, and thought it would be a way in to this piece on intentionality, a topic I have been circling around warily in previous posts. To clarify: books are about things – in fact our moon-bound book is about most of the perennial concerns of human beings. What is it that gives books this quality of ‘aboutness’ – or intentionality? When all’s said and done our book boils down to a set of inert ink marks on paper. Placing it on the moon, spatially and perhaps temporally distant from human activity, leaves us with the puzzle as to how those ink marks reach out across time and space to hook themselves into that human world. And if it had been a book ‘about’, say, physics or astronomy, that reach would have been, at least in one sense, wider.

Which problem?

Well, I thought that was what my brother and sister-in-law had been debating when I first heard about it; but when I asked them it turned out that what they’d been arguing about was the question of literary merit, or more generally, intrinsic value. The book contains material that has been held in high regard by most of humanity (except perhaps GCSE students) for hundreds of years. At some distant point in space and time, perhaps after humanity has disappeared, does that value survive, contained within it, or is it entirely dependent upon who perceives and interprets it?

Two questions, then – let’s refer to them as the ‘aboutness’ question and the ‘value’ question. Although the value question wasn’t originally within the intended scope of this post, it might be worth trying to  tease out how far each question might shed light on the other.

What is a book?

First, an important consideration which I think has a bearing on both questions – and which may have occurred to you already. The term ‘book’ has at least two meanings. “Give me those books” – the speaker refers to physical objects, of the kind I began the post with. “He’s written two books” – there may of course be millions of copies of each, but these two books are abstract entities which may or may not have been published. Some years back I worked for a small media company whose director was wildly enthusiastic about the possibilities of IT (that was my function), but somehow he could never get his head around the concepts involved. When we discussed some notional project, he would ask, with an air of addressing the crucial point, “So will it be a floppy disk, or a CD-ROM?” (I said it was a long time ago.) In vain I tried to get it across to him that the physical instantiation, or the storage medium, was a very secondary matter. But he had a need to imagine himself clutching some physical object, or the idea would not fly in his mind. (I should have tried to explain by using the book example, but never thought of it at the time.)

So with this in mind, we can see that the moon-bound Shakespeare is what is sometimes called in philosophy an ‘intuition pump’ – an example intended to get us thinking in a certain way, but perhaps misleadingly so. This has particular importance for the value question, it seems to me: what we value is a set of ideas and modes of expression, not some object. And so its physical, or temporal, location is not really relevant. We could object that there are cases where this doesn’t apply – what about works of art? An original Rembrandt canvas is a revered object; but if it were to be lost it would live on in its reproductions, and, crucially, in people’s minds. Its loss would be sharply regretted – but so, to an extent, would the loss of a first folio edition of Shakespeare. The difference is that for the Rembrandt, direct viewing is the essence of its appreciation, while we lose nothing from Shakespeare when watching, listening or reading, if we are not in the presence of some original artefact.

Value, we might say, does not simply travel around embedded in physical objects, but depends upon the existence of appreciating minds. This gives us a route into examination of the value question – but I’m going to put that aside for the moment and return to good old ‘aboutness’ – since these thoughts also give us  some leverage for developing our ideas there.

…and what is meaning?

So are we to conclude that our copy of Shakespeare itself, as it lies on the moon, has no intrinsic connection with anything of concern or meaning to us? Imagine that some disaster eliminated human life from the earth. Would the book’s links to the world beyond be destroyed at the same time, the print on its pages suddenly reduced to meaningless squiggles?  This is perhaps another way in which we are misled by the imaginary book.

Cave painting

A 40,000 year old cave painting in the El Castillo Cave in Puente Viesgo, Spain (www.spain.info)

Think of prehistoric cave paintings which have persisted, unseen, thousands of years after the deaths of those for whom they were particularly meaningful. Eventually they are found by modern men who rediscover some meaning in them. Many of them depict recognisable animals – perhaps a food source for the people of the time; and as representational images their central meaning is clear to us. But of course we can only make educated guesses at the cloud of associations they would have had for their creators, and their full significance in their culture. And other ancient cave wall markings have been discovered which are still harder to interpret – strange abstract patterns of dots and lines (see above). What’s interesting is that we can sense that there seems to have been some sort of purpose in their creation, without having any idea what it might have been.

Luttrell Psalter

A detail from the Luttrell Psalter (British Library)

Let’s look at a more recent example: the marvellous Luttrell Psalter, a 14th century illuminated manuscript now in the British Library. (You can view it in wonderful detail by going to the British Library’s Turning the Pages application.) It’s a psalter, written in Latin, and so the subject matter is still accessible to us. Of more interest are the illustrations around the text – images showing a whole range of activities we can recognise, but as they were carried on in the medieval world. This of course is a wonderful primary historical source, but it’s also more than that. Alongside the depiction of these activities is a wealth of decoration, ranging from simple flourishes to all sorts of fantastical creatures and human-animal hybrids. Some may be symbols which no longer have meaning in today’s culture, and others perhaps just jeux d’esprit on the part of the artist. It’s mostly impossible now for us to distinguish between these.

Think also of the ‘authenticity’ debate in early music that I mentioned in Words and Music a couple of posts back. A piece of music composed some hundreds of years ago, so one argument goes, could only have its full, authentic effect on an audience if that audience were also of the composer’s time. Indeed, even today’s music, of any genre, will have different associations for, and effects on, a listener depending on their background and experience. And it’s quite common now for artists, conceptual or otherwise, to eschew any overriding purpose as to the meaning of their work, but to intend each person to interpret it in his or her own idiosyncratic way.

Rather too many examples, perhaps, to illustrate the somewhat obvious point that meaning is not an intrinsic property of inert symbols, such as the printed words in our lunar Shakespeare. In transmitting their sense and associations from writer to reader the symbols depend upon shared knowledge, cultural assumptions and habits of thought; something about the symbols, or images, must be recognisable by both creator and consumer. When this is not the case we are just left with a curious feeling, as when looking at that abstract cave art. We get a strong sense of meaning and intention, but the content of the thoughts behind it is entirely unknown to us. Perhaps some unthinkably different aliens will have the same feeling on finding the Voyager robot spacecraft, which was sent on its way with some basic information about the human race and our location in the galaxy. Looking at the cave patterns we can detect that information is present – but meaning is more than just information. Symbols can carry the latter without intrinsically containing the former, otherwise we’d be able to know what those cave patterns signified.

Physical signs can’t embody meaning of themselves, apart from the creator and the consumer, any more than a saw can cut wood without a carpenter to wield it. Tool use, indeed, in early man or advanced animals, is an indicator of intentionality – the ability to form abstract ‘what if’ concepts about what might be done, before going ahead and doing it. A certain cinematic moment comes to mind: the one in Kubrick’s 2001: A Space Odyssey where the bone wielded as a tool by the primate creature in the distant past is thrown into the air, and cross-fades into a space ship in the 21st century.

Here be dragons

Information theory developed during the 20th century, and is behind all the advances of the period in computing and communications. Computers are like the examples of symbols we have looked at: the states of their circuits and storage media contain symbolic information but are innocent of meaning. Which thought, it seems to me, leads us to the heart of the perplexity around the notion of aboutness, or intentionality. Brains are commonly thought of as sophisticated computers of a sort, which to some extent at least they must be. So how is it that when, in a similar sort of way, information is encoded in the neurochemical states of our brains, it is magically invested with meaning? In his well-known book A Brief History of Time, Stephen Hawking uses a compelling phrase when reflecting on the possibility of a universal theory. Such a theory would be “just a set of rules and equations”. But, he asks,

What is it that breathes fire into the equations and makes a universe for them to describe?

I think that, in a similar spirit, we have to ask: what breathes fire into our brain circuits to add meaning to their information content?

The Chinese Room

If you’re interested enough to have come this far with me, you will probably know about a famous philosophical thought experiment which serves to support the belief that my question is indeed a meaningful and legitimate one – John Searle’s ‘Chinese Room’ argument. But I’ll explain it briefly anyway; skip the next paragraph if you don’t need the explanation.

Chinese Room

A visualisation of John Searle inside the Chinese Room

Searle imagines himself cooped up in a rather bizarre room where he can only communicate with the outside world by passing and receiving notes through an aperture. Within the room he is equipped only with an enormous card filing system containing a set of Chinese characters and rules for manipulating them. He has Chinese interlocutors outside the room, who pass in pieces of paper bearing messages in Chinese. Unable to understand Chinese, he goes through a cumbersome process of matching and manipulating the Chinese symbols using his filing system. Eventually this process yields a series of characters as an answer, which are transcribed on to another piece of paper and passed back out. The people outside (if they are patient enough) get the impression that they are having a conversation with someone inside the room who understands and responds to their messages. But, as Searle says, no understanding is taking place inside the room. As he puts it, it deals with syntax, not semantics – or in the terms we have been using, symbols, not meaning. Searle’s purpose is to demolish the claims of what he calls ‘strong AI’ – the claim that a computer system with this sort of capability could truly understand what we tell it, as judged from its ability to respond and converse. The Chinese Room could be functionally identical to such a system (only much slower), but Searle is demonstrating that it is devoid of anything that we could call understanding.
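
To make the bare mechanics vivid, here is a toy sketch of my own – nothing from Searle, and certainly not a serious model – with the whole room reduced to a lookup table. The Chinese phrases are just placeholder examples; the point is that nothing in the program, from input to output, involves anything we could call understanding:

```python
# A toy 'Chinese Room': purely syntactic matching of input symbols to canned
# replies. The rule book below stands in for Searle's card filing system.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "It's fine today."
}

def chinese_room(message: str) -> str:
    """Return whatever the rule book dictates; meaning is never consulted."""
    return RULE_BOOK.get(message, "请再说一遍。")  # default: "please say that again"

print(chinese_room("你好吗？"))  # looks like conversation from outside the room
```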

If you have an iPhone you’ll probably have used an app called ‘Siri’ which has just this sort of capability – and there are equivalents on other types of phone. When combined with the remote server that it communicates with, it can come up with useful and intelligent answers to questions. In fact, you don’t have to try very hard to make it come up with bizarre or useless answers, or flatly fail. But that’s just a question of degree – no doubt future versions will be more sophisticated. We might loosely say that Siri ‘understands’ us – but of course it’s really just a rather more efficient Chinese Room. Needless to say, Searle’s argument has generated years of controversy. I’m not going to enter into that debate, but will just say that I find the argument convincing; I don’t think that Siri can ‘understand’ me.

So if we think of understanding as the ‘fire’ that’s breathed into our brain circuits, where does it come from? Think of the experience of reading a gripping novel. You may be physically reading the words, but you’re not aware of it. ‘Understanding’ is hardly an issue, in that it goes without saying. More than understanding, you are living the events of the novel, with a succession of vivid mental images. Another scenario: you are a parent, and your child comes home from school to tell you breathlessly about some playground encounter that day – maybe positive or negative. You are immediately captivated, visualising the scene, maybe informed by memories of your own school experiences. In both of these cases, what you are doing is not really to do with processing information – that’s just the stimulus that starts it all off. You are experiencing – the information you recognise has kicked off conscious experiences; and yes, we are back with our old friend consciousness.

Understanding and consciousness

Searle also links understanding to consciousness; his position, as I understand it, is that consciousness is a specifically biological function, not to be found in clever artefacts such as computers. But he insists that it’s purely a function of physical processes nonetheless – and I find it difficult to understand this view. If biologically evolved creatures can produce consciousness as a by-product of their physical functioning, how can he be so sure that computers cannot? He could be right, but it seems to be a mere dogmatic assertion. I agree with him that you can’t have meaning – and hence intentionality – without consciousness. To be sure, although he denies it, he leaves open the possibility that a computer (and thus, presumably, the Chinese Room as a whole) could be conscious. But he does have going for him the immense implausibility of that idea.

Dog

How much intentionality?

So does consciousness automatically bring intentionality with it? In my last post I referred to a dog’s inability to understand or recognise a pointing gesture. We assume that dogs have consciousness of some sort – in a simpler form, they have some of the characteristics which lead us to assume that other humans like ourselves have it. But try thinking yourself for a moment into what it might be to inhabit the mind of a dog. Your experiences consist of the here and now (as ours do) but probably not a lot more. There’s no evidence that a dog’s awareness of the past consists of more than simple learned associations of a Pavlovian kind. They can recognise ‘walkies’, but it seems a mere trigger for a state of excitement, rather than a gateway to a rich store of memories. And they don’t have the brain power to anticipate the future. I know some dog owners might dispute these points – but even if a dog’s awareness extends beyond ‘is’ to ‘was’ and ‘will be’, it surely doesn’t include ‘might be’ or ‘could have been’. Add to this the dog’s inability to use offered information to infer that the mind of another individual contains a truth about the world that hitherto has not been in its own mind (i.e. the ability to understand pointing – see the previous post) and it starts to become clearer what is involved in intentionality. Mere unreflective experiencing of the present moment doesn’t lead to the notion of the objects of your thought, as distinct from the thought itself. I don’t want to offend dog-owners – maybe their pets’ abilities extend beyond that; but there are certainly other creatures – conscious ones, we assume – who have no such capacity.

So intentionality requires consciousness, but isn’t synonymous with it: in the jargon, consciousness is necessary but not sufficient for intentionality. As hinted earlier, the use of tools is perhaps the simplest indicator of what is sufficient – the ability to imagine how something could be done, and then to take action to make it a reality. And the earliest surviving evidence from prehistory of something resembling a culture is taken to be the remains of ancient graves, where objects surrounding a body indicate that thought was given to the body’s destiny – in other words, there was a concept of what may or may not happen in the future. It’s with these capabilities, we assume, that consciousness started to co-exist with the mental capacity which made intentionality possible.

So some future civilisation, alien or otherwise, finding that Shakespeare volume on the moon, will have similar thoughts to those that we would have on discovering the painted patterns in the cave. They’ll conclude that there were beings in our era who possessed the capacity for intentionality, but they won’t have the shared experience which enables them to deduce what the printed symbols are about. And, unless they have come to understand better than we do what the nature of consciousness is, they won’t have any better idea what the ultimate nature of intentionality is.

The value of what they would find is another question, which I said I would return to – and will. But this post is already long enough, and it’s too long since I last published one – so I’ll deal with that topic next time.

Consciousness 3 – The Adventures of a Naive Dualist

Commuting days until retirement: 408

A long gap since my last post: I can only plead lack of time and brain-space (or should I say mind-space?). Anyhow, here we go with Consciousness 3:

Coronation

A high point for English Christianity in the 50s: the Queen’s coronation. I can remember watching it on a relative’s TV at the age of 5.

I think I must have been a schoolboy, perhaps just a teenager, when I was first aware that the society I had been born into supported two entirely different ways of looking at the world. Either you believed that the physical world around us, sticks, stones, fur, skin, bones – and of course brains – was all that existed; or you accepted one of the many varieties of belief which insisted that there was more to it than that. My mental world was formed within the comfortable surroundings of the good old Church of England, my mother and father being Christians by conviction and by social convention, respectively. The numinous existed in a cosy relationship with the powers-that-were, and parents confidently consigned their children’s dead pets to heaven, without there being quite such a Santa Claus feel to the assertion.

But, I discovered, it wasn’t hard to find the dissenting voices. The ‘melancholy long withdrawing roar’ of the ‘sea of faith’ which Matthew Arnold had complained about in the 19th century was still under way, if you listened out for it. Ever since Darwin, and generations of physicists from Newton onwards, the biological and physical worlds had appeared to get along fine without divine support; and even in my own limited world I was aware of plenty of instances of untimely deaths of innocent sufferers, which threw doubt on God’s reputedly infinite mercy.

John Robinson

John Robinson, Bishop of Woolwich (Church Times)

And then in the 1960s a brick was thrown into the calm pool of English Christianity by a certain John Robinson, the Bishop of Woolwich at the time. It was a book called Honest to God, which sparked a vigorous debate that is now largely forgotten. Drawing on the work of other radical theologians, and aware of the strong currents of atheism around him, Robinson argued for a new understanding of religion. He noted that our notion of God had moved on from the traditional old man in the sky to a more diffuse being who was ‘out there’, but considered that this was also unsatisfactory. Any God whom someone felt they had proved to be ‘out there’ “would merely be a further piece of existence, that might conceivably have not been there”. Rather, he says, we must approach from a different angle.

God is, by definition, ultimate reality. And one cannot argue whether ultimate reality exists.

My pencilled zig-zags in the margin of the book indicate that I felt there was something wrong with this at the time. Later, after studying some philosophy, I recognised it as a crude form of Anselm’s ontological argument for the existence of God, which is rather more elegant, but equally unsatisfactory. But, to be fair, this is perhaps missing the point a little. Robinson goes on to say that one can only ask “what ultimate reality is like – whether it… is to be described in personal or impersonal categories.” His book proceeds to develop the notion of God as in some way identical with reality, rather than as a special part of it. One might cynically characterise this as a response to atheism of the form “if you can’t beat them, join them” – hence the indignation that the book stirred in religious circles.

Teenage reality

But, leaving aside the well worn blogging topic of the existence of God, there was the teenage me, still wondering about ‘ultimate reality’, and what on earth, for want of a better expression, that might be. Maybe the ‘personal’ nature of reality which Robinson espoused was a clue. I was a person, and being a person meant having thoughts, experiences – a self, or a subjective identity. My experiences seemed to be something quite other than the objective world described by science – which, according to the ‘materialists’ of the time, was all that there was. What I was thinking of then was the topic of my previous post, Consciousness 2 – my qualia, although I didn’t know that word at the time. So yes, there were the things around us (including our own bodies and brains), our knowledge and understanding of which had been, and was, advancing at a great rate. But it seemed to me that no amount of knowledge of the mechanics of the world could ever explain these private, subjective experiences of mine (and, I assumed, of others). I was always strongly motivated to believe that there was no limit to possible knowledge – however much we knew, there would always be more to understand. Materialism, on the other hand, seemed to embody the idea of a theoretically finite limit to what could be known – a notion which gave me a sense of claustrophobia (of which more in a future post).

So I made my way about the world, thinking of my qualia as the armour to fend off the materialist assertion that physics was the whole story. I had something that was beyond their reach: I was something of a young Cartesian, before I had learned about Descartes. It was another few years before ‘consciousness’ became a legitimate topic of debate in philosophy and science. One commentator I have read dates this change to the appearance of Nagel’s paper What is it Like to be a Bat? in 1974, which I referred to in Consciousness 1. Seeing the debate emerging, I was tempted to preen myself with the horribly arrogant thought that the rest of the world had caught up with me.

The default position

Philosophers and scientists are still seeking to find ways of assimilating consciousness to physics: such physicalism, although coming in a variety of forms, is often spoken of as the default, orthodox position. But although my perspective has changed quite a lot over the years, my fundamental opposition to physicalism has not. I am still at heart the same naive dualist I was then. But I am not a dogmatic dualist – my instinct is to believe that some form of monism might ultimately be true, but beyond our present understanding. This consigns me into another much-derided category of philosophers – the so-called ‘mysterians’.

But I’d retaliate by pointing out that there is also a bit of a vacuum at the heart of the physicalist project. Thoughts and feelings, say its supporters, are just physical things or events, and we know what we mean by that, don’t we? But do we? We have always had the instinctive sense of what good old, solid matter is – but you don’t have to know any physics to realise there are problems with the notion. If something were truly solid it would entail that it was infinitely dense – so the notion of atomism, starting with the ancient Greeks, steadily took hold. But even then, atoms can’t be little solid balls, as they were once imagined – otherwise we are back with the same problem. In the 20th century, atomic physics confirmed this, and quantum theory came up with a whole zoo of particles whose behaviour entirely conflicted with our intuitive ideas gained from experience; and this is as you might expect, since we are dealing with phenomena which we could not, in principle, perceive as we perceive the things around us. So the question “What are these particles really like?” has no evident meaning. And, approaching the problem from another standpoint, where psychology joins hands with physics, it has become obvious that the world with which we are perceptually familiar is an elaborate fabrication constructed by our brains. To be sure, it appears to map on to the ‘real’ world in all sorts of ways, but has qualities (qualia?) which we supply ourselves.

Truth

So what true, demonstrable statements can be made about the nature of matter? We are left with the potently true findings – true in the sense of explanatory and predictive power – of quantum physics. And, when you’ve peeled away all the imaginative analogies and metaphors, these can only be expressed mathematically. At this point, rather unexpectedly, I find myself handing the debate back to our friend John Robinson. In a 1963 article in The Observer newspaper, heralding the publication of Honest to God, he wrote:

Professor Herman Bondi, commenting in the BBC television programme, “The Cosmologists” on Sir James Jeans’s assertion that “God is a great mathematician”, stated quite correctly that what he should have said is “Mathematics is God”. Reality, in other words, can finally be reduced to mathematical formulae.

In case this makes Robinson sound even more heretical than he in fact was, I should note that he goes on to say that Christianity adds to this “the deeper reliability of an utterly personal love”. But I was rather gratified to find the concluding thoughts of my post anticipated by the writer I quoted at the beginning.

I’m not going to speculate any further into such unknown regions, or into religious belief, which isn’t my central topic. But I’d just like to finish with the hope that I have suggested that the ‘default position’ in current thinking about the mind is anything but natural or inevitable.

Consciousness 2 – The Colour of Nothing

Commuting days until retirement: 437

When it comes down to basics, is there just one sort of thing, or are there two sorts of thing? (We won’t worry about the possibility of even more than that.) Anyone who has done an elementary course in philosophy will know that Descartes’ investigations led him to believe that there were two sorts: mental things and physical things, and that he thus gave birth to the modern conception of dualism.

Stone lion

Lifeless

As scientific knowledge has progressed over the centuries since, it has put paid to all sorts of beliefs in mystical entities which were taken to be explanations for how things are. A good example would be vitalism, the belief in a ‘principle of life’ – something that a real lion would possess and a stone lion would not. Needless to say, we now know that the real lion would have DNA, a respiratory system and so on, whose modes of operation we understand in considerable detail – and so the principle of life has withered away, as surplus to needs.

Descartes’ mental world, however, has been harder to kill off. There seems to be nothing that scientific theory can grasp which is recognisable as the ‘something it is like’ I discussed in my previous post. It’s rather like one of those last houses to go as Victorian terraces are cleared for a new development, with Descartes as the obstinate old tenant who stands on his rights and refuses to be rehoused. But the philosophical bulldozers are doing their best to help the builders of science, in making way for their objectively regular modern blocks.

Gilbert Ryle led the charge in 1949, in his book The Concept of Mind. He famously characterised dualism as the doctrine of ‘the Ghost in the Machine’: to suppose that there was some mystical entity within us corresponding to our mind was to be misled by language into making a ‘category mistake’. Ryle’s standpoint fits more or less into the area of behaviourism, also previously discussed. Then, in the 1950s, identity theory arose. The contents of your mind – colours, smells – may seem different from all that mushy stuff in your head and its workings, but in fact they are just the same thing, if perhaps seen from a different viewpoint. There’s a name, the ‘Morning Star’, for that bright star that can be seen at dawn, and another one, the ‘Evening Star’, for its equivalent at dusk; but with a little further knowledge you discover that they are one and the same.

Nowadays, while still around, the identity theory is somewhat mired in technical philosophical debate. Meanwhile brain science has made huge strides, and at the same time computing science has become mainstream. So on the one hand, it’s tempting to see the mind as the software of the brain (functionalism, very broadly), or perhaps just to attempt to show that with enough understanding of the wiring of those tightly packed nerve fibres, and whatever is chugging around them, everything can be explained. This last approach – materialism, or in its modern, science-aware form, physicalism – can take various forms, one of them being the identity theory. Or you may consider, for example, that such mental entities as beliefs, or pains, may be real enough, but are ideally explained as – or reduced to – brain/body functions. This would make you a reductionist.

But you may be more radical and simply say that these mental things don’t really exist at all: we are just kidded into thinking they do by our habitual way of talking about ourselves – folk psychology, as it’s often referred to. Then you would be an eliminativist – and it’s the eliminativists I’d like to get my philosophical knife into here. Although I don’t agree with old Descartes on that much (I’ll expand in the next post), I have a certain affinity for him, and I’m willing to join him in his threatened, tumbledown house, looking out at the bulldozers ranged across the building site of 21st century Western philosophy.

Getting rid of qualia – or not

Acer leaves

My acer leaves

I think it would be fair to say that the arch-eliminativist is one Daniel Dennett, and it’s his treatment of qualia that I’d like to focus on. Qualia (singular quale) are those raw, subjective elements of which our sensory experience is composed (or as Dennett would have it, we imagine it to be composed): the vivid visual experience I’m having now of the delicately coloured acer leaves outside my window; or that smell when I burn the toast. I’m thinking of Dennett’s treatment of the topic to be found in his 1988 paper Quining Qualia (QQ) and in Qualia Disqualified, Chapter 12 of his 1991 book Consciousness Explained (CE: with a great effort I refrain from commenting on the title). Now Dennett’s task is to show that, when it comes to mental things, all that grey matter and its workings is all there is. But this is a problem, because when we look inside people’s skulls we don’t ever find the colour of acer leaves or the smell of burnt toast.

Dennett quotes an introductory book on brain science: ‘”Color” as such does not exist in the world: it exists only in the eye and brain of the beholder.’ But as he rightly points out, however good this book is on science, it has its philosophy very muddled. For one thing, the ‘eye and brain of the beholder’ are themselves part of the world – the world in which colour, we are told, does not exist. And eyes and brains have colours, too. But not like the acer leaves I’m looking at. There’s only one way to get to where Dennett wants to be: he has to strike out the qualia from the equation. They are really not there at all. That acer-colour quale I think I’m experiencing is non-existent. Really?

Argument 1: The beetle in the box

Maybe there is some help available to Dennett from one of the philosophical giants – Wittgenstein. Dennett calls it in, anyway, as support for the position that ‘the very idea of qualia is nonsense’ (CE, p.390). There is a famous passage in Wittgenstein’s Philosophical Investigations where he talks of our private sensations in an analogy:

Suppose everyone had a box with something in it: we call it a “beetle”. No one can look into anyone else’s box, and everyone says he knows what a beetle is only by looking at his beetle. Here it would be quite possible for everyone to have something different in his box … The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. No, one can ‘divide through’ by the thing in the box; it cancels out, whatever it is.

I don’t see how this does help Dennett. It is part of Wittgenstein’s exposition known as the private language argument. He is seeking to show that language is a necessarily public activity, and that the notion of a private language known only to its one ‘speaker’ is incoherent. I think it’s significant that the example of a sensation he uses is pain, as you’ll see if you follow the link. Elsewhere Wittgenstein considers whether someone might have a private word for one of his own sensations. But, like the pain, this is just a sensation, and there’s no publicly viewable aspect to it. But consider my acer leaves: my wife might come and join me in admiring them. We have a publicly available referent for our discussion, and if I ask her about the quality of her own sensation of the colour, she will give every appearance of knowing what I am talking about. True, I can never tell if her sensation is the same as mine, or whether it even makes sense to ask that. Nor can I tell for certain whether she really has the sensation, or is simply behaving as if she did. But I’ll leave that to Wittgenstein. His argument doesn’t seek to deny that I am acquainted with my ‘beetle’ – only that it ‘has no place in the language game’. In other words, my wife and I can discuss the acer leaves and what we think of them, but we can’t discuss the precise nature of the sensation they give me – my quale. My wife would have nothing to refer to when speaking of it. In Wittgenstein’s terms, we talk about the leaves and their colour, but our intrinsically private sensations drop out of the discussion. Does this mean the qualia don’t exist? Just a moment – I’ll have another look… no, mine do, anyway. Sorry, Dan.

Argument 2: Grown-up drinking

Bottled Qualia

Another strategy open to Dennett is to point out how our supposed qualia may seem unstable in certain ways, and subject to change. He notes how beer is an acquired taste, seeming pretty unpleasant to a child, who may well take it up with gusto later in life. Can the adult be having the same qualia as the child, if the response is so different?

This strikes a chord with me. I started to sample whisky when still a teenager because it made me feel mature and sophisticated. Never mind the fact that it was disgusting – much more important to pretend to be the sort of person I wanted to be. The odd thing is – and I have often wondered about this – that I think I can remember the moment of realisation that eventually came: “Hey – I actually like this stuff!”

So what happened? Did something about these particular qualia suddenly change, rather as if I one day licked a bar of soap and found that it tasted of strawberries? Clearly not. So maybe we could say that, although it tasted the same, it was just that I started to react to it in a different way – some neural pathway opened up in my brain that engendered a different response. There are difficulties with that idea. As Dennett puts it, in QQ:

For if it is admitted that one’s attitudes towards, or reactions to, experiences are in any way and in any degree constitutive of their experiential qualities, so that a change in reactivity amounts to or guarantees a change in the property, then those properties, those “qualitative or phenomenal features,” cease to be “intrinsic” properties, and in fact become paradigmatically extrinsic, relational properties.

He’s saying – and I agree – that we can’t mix up subjective and objective properties in this way, otherwise the subjective elements – the qualia – are dragged off their pedestal of private ineffability and are rendered into ordinary, objectively viewable ones. He goes on to argue, with other examples, that the concept of qualia inevitably leads to confusions of this sort, and that we can therefore banish the confusion by banishing the qualia.

So is there another way out of the dilemma, which rescues them? As with the acer leaves, my whisky-taste qualia are incontrovertibly there. Consider another type of subjective experience – everyone probably remembers something similar. You have been working, maybe in an office, for an hour or two, and suddenly an air conditioning fan is turned off. It was a fairly innocuous noise, and although it was there you simply weren’t aware of it. But now that it’s gone, you’re aware that it’s gone. As you may know, the objective, scientific term for this is ‘habituation’; your system ceases to respond to a constant stimulus. But this time I am not going to make the mistake of mixing this objective description with the subjective one. A habituated stimulus is simply removed from consciousness – your subjective qualia do change as it fades. And something like this, I would argue, is what was happening with the whisky. To a mature palate, it has a complex flavour, or to put it another way, all sorts of different, pleasurable individual qualia which can be distinguished. These put the first, primary, sharp ‘kick’ in the flavour into a new context. But probably that kick is all that the immature version of myself was experiencing. Gradually, my qualia did change as I habituated sufficiently to that kick to allow it to recede a little and allow in the other elements. There had to come some point at which I made up my mind that the stuff was worth drinking for its own sake, and not just as a means to enhance my social status.

Argument 3: Torn cardboard

Torn cardboard

Matching halves

Not convinced? Let’s look at another argument. This starts with an unexpected – and ingenious – analogy: the Rosenbergs, Soviet spies in the US in the cold war era, had a system to enable spies to verify one another’s identity: each had a fragment of cardboard packaging, originally torn halves of the same jelly package (US brand name Jell-O). So the jagged tear in each piece would perfectly and uniquely match the other. Dennett is equating our perceptual apparatus with one of the cardboard halves; and the characteristics of the world perceived with the other. The two have co-evolved. Anatomical investigation shows how birds and bees, whose nourishment depends on the recognition of flowers and berries, have colour perception, while primarily carnivorous animals – dogs and cats for example – do not. But at the same time plants have evolved flower and berry colour to enable pollination or seed dispersal by the bees or birds. The two sides evolve, matching each other perfectly, like the cardboard fragments. And of course we are omnivores, and have colour perception too. When hunting was scarce, our ability to recognise the colour of a ripe apple could have been a life-and-death matter. And so it would have been for the apple species too, as we unwittingly propagated its seeds. As he puts it:

Why is the sky blue? Because apples are red and grapes are purple, not the other way around. (CE p378)

A lovely idea, but what’s the relevance? His deeper intention with the torn cardboard analogy is to focus on the fact that, if we look at just one of the halves on its own, we are hard put to see anything but a piece of rubbish without purpose or significance – it is given validity only by its sibling. Dennett seeks to demote colour experiences, considered on their own, to a similarly nullified status. Here’s a crucial passage. ‘Otto’ is Dennett’s imaginary defender of qualia – for present purposes he’s me:

And Otto can’t say anything more about the property he calls pink than “It’s this!” (taking himself to be pointing “inside” at a private, phenomenal property of his experience). All that move accomplishes (at best) is to point to his own idiosyncratic color-discrimination state, a move that is parallel to holding up a piece of Jell-O box and saying that it detects this shape property. Otto points to his discrimination-device, perhaps, but not to any quale that is exuded by it, or worn by it, or rendered by it, when it does its work. There are no such things. (CE p383 – my italics).

I don’t think Dennett earns the right to arrive at his concluding statement. There seem to me to be two elements at work here. One is an appeal to the Wittgensteinian beetle argument we considered (‘…taking himself to be pointing “inside”…’), which I tried to show does not do Dennett’s work for him. The second appears to be simply a circular argument: if we decide to assert that Otto is not referring to any private experience but to something objective (a ‘color-discrimination state’) then we have only banished his qualia by virtue of this assertion. The fact that we can’t be aware of them for ourselves does not change this. The function of the cardboard fragment is an objective one, inseparable from its identification of its counterpart, just as colour perception as an objective function is inseparable from how it evolved. But there’s nothing about the cardboard that corresponds to subjective qualia – the analogy fails. When I think of my experience of the acer leaves I am not thinking of the ‘color-discrimination state’ of my brain – I don’t know anything about that. In fact it’s only from the science I have been taught that I know that there is any such thing. (This final notion nods to another well-known argument – this time in favour of qualia – Frank Jackson’s ‘knowledge’ argument; I’ll leave you to follow the link if you’re interested.)

But this being just a blog, and this post having already been delayed too long, I’ll content myself with having commented on just three arguments from one physicalist philosopher. And so I am still there with Descartes in his tottering house, resisting its demolition. In the next post I’ll enlarge on why I am so foolhardy and perverse.

Consciousness 1 – Zombies

Commuting days until retirement: 451

Commuting at this time of year, with the lengthening mornings and evenings, gives me a chance to lose myself in the sight of tracts of England sliding across my field of vision – I think of Philip Larkin in The Whitsun Weddings:  ‘An Odeon went past, a cooling tower, and someone running up to bowl…’  (His lines tend to jump into my mind like this). It’s tempting to enlarge a scene like this into a simile for life, like the one that Larkin’s poem leads into. Of course we are not just passive observers, but the notion of life as a film show – a series of scenes progressing past your eyes – has a certain curious attractiveness.

A rather more spectacular view than any I get on my train journey. Photo: Yamaguchi Yoshiaki. Wikimedia Commons


Now imagine that, as I sit in the train, I am not quite a human being as you think of one. Instead I’m a cleverly constructed robot who appears in every way like a human but, being a robot, has something important missing. The objects outside the train form images on some sensor in each of my pseudo-eyes, and the results may then be processed by successive layers of digital circuitry which perform ever more sophisticated interpretative functions. Perhaps these resolve the light patterns that entered my ‘eyes’ into discrete objects, and trigger motor functions which cause my head and eyes to swivel and follow them as they pass. Much, in fact, like the real me, idly watching the scenes sliding by.

Now let’s elaborate our robot to have capabilities beyond sitting on a train and following the objects outside; now it can produce all the behaviour that any human being can. This curious offspring of a thought-experiment is what philosophers refer to as a zombie – not the sort in horror films with the disintegrating face and staring eyeballs, but a creature who may be as well behaved and courteous as any decent human being. The only difference is that, despite (we presume) the brain churning away as busily as anyone else’s, there are no actual sensations in there – none of those primary, immediate experiences with a subjective quality: the fresh green of a spring day, or the inner rapture of an orgasm. So what’s missing? There are a number of possibilities, but, as you will have guessed, the one I am thinking of is that inner, subjective world of experience we all have, but assume that machines do not. This is well expressed by saying that there’s something that it is like to be me, but not something that it’s like to be a machine.(1) The behaviour is there all right, but that’s all. In the phrase I rather like, the lights are on but nobody’s at home.

Many people who think about the question nowadays, especially those of a scientific bent, tend to conclude that, of course, we must ultimately be nothing but machines of one sort or another. We have discovered many – perhaps most – of the physical principles upon which our brains and bodies work, and we have traced their evolution over time from simple molecular entities. So there we are – machines. But conscious machines – machines that there is something it is like to be? It has frequently been debated whether or not such a machine with all these capabilities would ipso facto be conscious – whether it would have a mind. Or, in other words, whether we could in principle build a conscious machine. (There are some who speculate that we may already have done so.)

One philosophical response to this problem is that of behaviourism, a now justly neglected philosophical position.(2) If you are a behaviourist you believe that your mind, and your mental activity – your thoughts – are defined in terms of your behaviour. The well-known Turing Test constitutes a behaviourist criterion, since it is based on the principle that a computer system whose responses are indistinguishable from those of a human is taken for all practical purposes to have a mind. (I wrote about Turing a little while ago – but here I part company with him.) And for a behaviourist, the phrase ‘What it is like to be…’ can have no meaning, or at best a rather convoluted one based on what we say or do; but its meaning is plain and obvious to you or me. It’s difficult to resist repeating the old joke about behaviourism: two post-coital behaviourists lie in bed together, and one says ‘That was great for you – how was it for me?’ But I take the view of behaviourism that the joke implies – it’s absurd.

Behaviourists, however, can’t be put down as burglars or voyeurs: they don’t peer into the lighted windows to see what’s going on inside. It’s enough for them that the lights are on. For them the concept of a zombie is either meaningless or a logical impossibility.  But there is another position on the nature of the mind which is much more popular in contemporary thought, but which has a different sort of problem with the notion of a zombie. I’m thinking of eliminative materialism.

Well, as I write this post, I feel it extending indefinitely as more ideas churn through that machine I refer to as my brain. So to avoid it becoming impossibly long, and taking another three weeks to write it, I’ll stop there, and just entitle this piece as Part 1. Part 2 will take up the topic of eliminative materialism.

In the meantime I’d just like to leave one thought: I started with a snatch of Philip Larkin, and I’ve always felt that poetry is in essence a celebration of conscious experience; without consciousness I don’t believe that poetry would be possible.


(1) The phrase is mainly associated with Thomas Nagel, and his influential 1974 paper What is it Like to be a Bat? But he in turn attributes it to the English philosopher Timothy Sprigge.

(2) I’m referring to the philosophical doctrine of behaviourism – distinct from, but related to the psychological one – J B Watson, B F Skinner et al.