Old Ladies, Old Gentlemen, Ironmongers and Encyclopaedias

Commuting days until retirement: 61

So far, I haven’t got to see the film I mentioned – the one I prepared for by reading Testament of Youth and writing about it in the last post. Infuriatingly, my local cinema and all the others in the area were showing it only around lunch time on weekdays. Obviously whoever decides these things had put it down as a film for old ladies. If it were a few months further on I’d be retired, and would be able to consider myself an honorary old lady, buttoning up my overcoat and toddling round to the picture house for a couple of hours of sedate entertainment. As it is I’ll probably wait and see it on DVD, Netflix or whatever.

But in fact I’m not at all in the habit of thinking of myself as an old gentleman, let alone an old lady. A few months ago I went to see a medical specialist about a minor but painful condition (happily temporary, as it turned out). I got my copy of his letter to the GP, which started: ‘I saw this 66 year old gentleman…’. My immediate reaction was: ‘Who can he be talking about? Have I got the wrong letter?’

But no – it came home to me that the way the world sees you isn’t often the way you think of yourself. But does anyone now think of themselves as a ‘gentleman’? A hundred years ago they certainly would have done; and the doctor’s letter shows that it’s still a formal, polite way of referring to someone who has at least reached middle age. But to think of yourself in that way seems like divorcing yourself from the contemporary world and consigning yourself to a time-warped existence.

John Carey


My current train reading has in fact reminded me all too sharply of how distant in time my origins now are. It’s another piece of autobiography, of the Oxford English professor and literary pundit John Carey – The Unexpected Professor. He’s a good deal older than me – he was born in the 1930s – but there was so much in his early life which struck a chord with me.

I remember, as he does, the sorts of books that we were given by well-meaning adult relatives, with titles like 101 Things a Boy Can Do. (You could also find collections of Things a Girl Can Do, but, needless to say, they were very different.) There’s a particular phrase that always cropped up in those books which has stayed with me ever since. These masculine activities usually involved some sort of construction project which required certain outlandish components which were always ‘obtainable from any good ironmonger for a few pence.’

Reading about Carey’s boyhood, it was with a delighted shock of recognition that I found he had remembered exactly the same phrase. Like me, he wasn’t exactly sure what was meant by ‘ironmonger’. The most likely candidate in my area was a hardware shop called Mence Smith, where my queries about these unlikely items would be met with blank stares. So maybe it was a Bad Ironmonger – if it was an ironmonger at all, that is.

Another memory I share with him from that time is of the encyclopedias of the time; the very word, in the Internet age, has a dated ring to it. The most prosperous households would own the authoritative, and impossibly expensive, Encyclopedia Britannica, but most of the families I knew had a less prestigious set, usually dating from before the war, with grainy black and white illustrations and text that impressed on the reader how clever and enlightened the ‘modern’ world was, and how abjectly primitive our dim and distant forebears were. I particularly remember a picture of the very latest in steam locomotives busily chuffing down the main line, with the breathless caption ‘A Mile a Minute!’  Today, sixty miles an hour is the speed of the average motorway dawdler.

And encyclopaedias make me think of a strange, shadowy business movement that was in its heyday some years back. It was based on sales people going from door to door selling encyclopedias – at one time they seemed to outnumber even Jehovah’s Witnesses. I once got a telling glimpse of how this worked when, between a college course and a regular job, I was looking for an opportunity to earn something and saw a newspaper ad inviting people to a meeting about some potential work. Encyclopedia selling was what it turned out to be. A roomful of people was addressed by a sharply suited, rather too plausible sounding character, who asked people for guesses as to what they would earn for selling a single set. There were hesitant tries. ‘One pound?’ ‘Two pounds?’ (This was the 1960s – you can multiply the amounts by about 16 for today’s prices.) ‘No,’ he eventually said triumphantly. ‘TWENTY pounds!’

Most of his audience were now slavering like Pavlov’s dogs, as he had intended, and seemed not to have drawn the obvious conclusion that this meant the encyclopedias were very difficult to sell indeed. Given that it was commission only, and that I would probably be the world’s worst salesman, I knew it would be dispiriting and dreadful. He invited anyone perverse enough still to be uninterested to leave, and only two of us out of 40 or 50 did. I was left reflecting on what sort of organisation it was that gleefully duped its own employees.

Some years earlier, as a child, I’d had a glimpse of this from the other side. A new set of encyclopedias, The Children’s Britannica, appeared in our house. It seemed that my father, not usually a soft touch, had bought them on the doorstep. Much later on I heard my mother’s account – that it had been an attractive young woman who was selling them. I doubt if he would have succumbed if it had been a man.

Returning to Professor Carey, I’m now about halfway through his book, hearing about his university career and greatly enjoying tracing his steps as he explored the canon of literature. He’s giving me plenty of ideas for future reading. As a memoir it’s altogether more relaxed than Testament of Youth; he has a gently humorous, self-mocking style that’s light years away from the committed, stormy intensity of Vera Brittain. What they have in common is Oxford; what they don’t have in common is close experience of the pain and loss of war. A child during the Second World War, Carey came no nearer to that than a spell of peacetime ‘National Service’ in the army, which everyone of his generation had to do (I was part of the first generation that didn’t have to). So the difference in tone is entirely understandable; but I recently read an interview with Vera Brittain’s daughter, Shirley Williams, in which she admitted that her mother had no great sense of humour.

And, come to think of it, there’s a significant point of contrast between Shirley Williams and her near contemporary John Carey. Carey makes pointed references in his foreword, and repeatedly in the book, to his utter disapproval of the closing down of grammar schools in the UK. (For those unused to the confusing English terminology, grammar schools are state funded and select by ability, while ‘public schools’, referred to below, are privately funded independent schools where most richer people send their children.)

One thing that has not changed is that Oxford – and Cambridge – still take vastly disproportionate numbers of public-school students. This is often blamed on Oxford and Cambridge. The blame, however, lies with those who destroyed the grammar schools. Selecting for merit, not money, the grammar schools, had they survived, would by now have all but eliminated the public-school contingent in Oxford and Cambridge, with far-reaching effects on our society. This book is, among other things, my tribute of gratitude to a grammar school.

Shirley Williams


It was the Labour administrations of the 60s and 70s who put this policy in place, with egalitarian intentions but a strange failure to anticipate the consequences. Shirley Williams, who was a minister of education in the 70s, had a central role in implementing the policy, and still strongly believes it to have been right. Carey, as the above passage shows, was a working class boy – no ivory tower elitist but a perfect exemplar of those who benefited from the grammar school system. One scene from his book nicely encapsulates his outlook: during his spell in the army he has a trip in a small plane in Egypt. The pilot makes a detour to give the passengers an aerial view of the pyramids. Rather than being bowled over by their grandeur, Carey is thinking of the slaves who built them, and the tendency of humanity throughout history to separate itself into a pampered elite and a huge, suffering underclass.

So in Carey and Williams we have two open, attractive personalities, both with strongly expressed, left-leaning views, who are diametrically opposed over this point. I’d like to hear them debate it (I’m with Carey).

One final, unconnected point. My reading between the two autobiographical works of Vera Brittain and John Carey was David Mitchell’s latest novel The Bone Clocks. Both memoirs look back to earlier periods of scarcer resources and greater austerity. The Bone Clocks ends in 2043, when climate change and dwindling energy sources are eating away at the comforts we now take for granted, and the characters are wistfully remembering an era of greater plenty. I’m hoping for a long retirement, of course – but perhaps not too long.

The Unsubmerged City

Commuting days until retirement: 77

To continue the First World War theme from the previous post, I saw that a film based on Vera Brittain’s classic memoir of her early life – Testament of Youth, with that war as the central and dominating event – was soon to be released. I’ve often heard of the book but have never read it, and much prefer to see a book-based film only after having been able to immerse myself in the atmosphere of the book itself. So I’ve now done that, and am very glad that I did. It recreates the reality of that distant period in a way that could only have been managed by a writer who experienced both the best and worst that it had to offer.

Love without dignity

We first meet Vera Brittain as a girl growing up in the rather stultifying atmosphere of a middle class Edwardian household. Meeting some of her brother’s school friends might be an opening to the expanding possibilities of life, but:

The parental habit – then almost universally accepted as ‘correct’ where daughters were concerned – of inquisition into each day’s proceedings made private encounters, even with young men in the same town, almost impossible without a whole series of intrigues and subterfuges which robbed love of all its dignity.

Eventually however she does fall in love with Roland Leighton, one of the group of friends, and an especially brilliant literature and classics scholar who scoops almost all the school prizes available to him. But the experience of a deepening relationship was then very different to today’s typical expectations:

We sat on the sofa till midnight, talking very quietly. The stillness, heavy-laden with the dull oppression of the snowy night, became so electric with emotion that we were frightened of one another, and dared not let even our fingers touch for fear that the love between us should render what we both believed to be decent behaviour suddenly unendurable….
I was still incredibly ignorant. I had read, by then, too much to have failed to acquire a vague and substantially correct idea of the meaning of marriage, but I did not yet understand the precise nature of the act of union. My ignorance, however, was incapable of disturbing my romantic adoration, for I knew now for certain that whatever marriage might involve in addition to my idea of it, I could not find it other than desirable.


Vera Brittain as a VAD (illustration from book)

But by this point – early 1915 – the war is under way, and soon Roland, as well as her brother Edward and other friends, are away in military training, eventually to be involved in action as officers. Vera has already gone up to Oxford, women since 1876 having been able to study at University, but not – bizarrely to our modern minds – able to take degrees. (Having returned to study after the war she became one of the first who did.) But feeling she must share the experiences of those she loves in one of the few ways that she can, by summer of that year she is becoming a VAD (Voluntary Aid Detachment) nursing assistant – one of the women drafted in to nurse the wounded in that war. With minimal training, they were pitched into dealing with men who were often dying in front of their eyes, many with wounds that would be a challenge for the most experienced nurse.

In addition, of course, she is unversed in practical tasks in ways that many middle class girls of that time were: she describes how she needed instruction in how to boil an egg. And of course more importantly for her work, there are other areas of ignorance:

Throughout my two decades of life, I had never looked upon the nude body of an adult male; I had never even seen a naked boy-child since the nursery days when, at the age of four or five, I used to share my evening baths with Edward. I had therefore expected, when I first started nursing, to be overcome with nervousness and embarrassment, but, to my infinite relief, I was conscious of neither. Towards the men I came to feel an almost adoring gratitude for their simple and natural acceptance of my ministrations. Short of actually going to bed with them, there was hardly an intimate service that I did not perform for one or another in the course of four years, and I still have reason to be thankful for the knowledge of masculine functioning which the care of them gave me, and for my early release from the sex-inhibitions that even to-day – thanks to the Victorian tradition which up to 1914 dictated that a young woman should know nothing of men but their faces and their clothes until marriage pitchforked her into an incompletely visualised and highly disconcerting intimacy – beset many of my female contemporaries, both married and single.

The reality of war

We see how the war explosively disrupted the hardened attitudes of the time in so many fundamental ways; but of course the core of Vera’s experience was that of death – at first hand in her nursing work, and fearfully anticipated in relation to her fiancé, brother and friends. Letters are nervously sent to, and received from, the front, and we experience her emotional swings as good and bad news of the fighting is received. As many must have done, they agree on coded phrases which will bypass censorship and give those at home clues to what is happening.

Roland anticipates that he will get through the war, nevertheless feeling it would be fitting to have received some wound as a token of what he has been through. But, as early as Christmas 1915, Vera hears that he has died in action, shot by an enemy sniper. Numbly, she buries herself in her nursing work for the rest of the war. Her two closest male friends, members of the group formed at school with Roland and Edward, both succumb in turn, one being killed outright, and the other, Victor Richardson, blinded and brought back to hospital to recuperate, but then dying of his wounds. Her brother Edward, perhaps the quietest of the group, goes on to show great courage and wins the Military Cross. But finally, in 1918, he also dies in the fighting on the Austro-Italian border.


Edward Brittain, Roland Leighton and another friend, Victor Richardson (illustration from book)

I found Vera Brittain’s writing sometimes has something of the verbose, circumlocutory quality of the Victorian tradition she has inherited; but when describing the War period, driven by such enormous emotional stresses, it becomes more direct, powerful and evocative. At the same time some of the photographs included in the book brought home to me as much as anything what it must have felt like to live through that time. Not pictures of fighting or of the wounded, but simply of Vera’s brother and friends posed in relaxed groups for the camera: first in school dress at Uppingham, then in military uniform, later with freshly sprouted moustaches; the sequence then ends abruptly and shockingly with photographs of their graves.

The pacifist’s task

In the 1920s we see Vera working for what was then the League of Nations, and throwing herself into pacifist causes – but it’s a nuanced and intelligent pacifism. She writes of the heroism that war can draw out:

It is, I think, this glamour, this magic, this incomparable keying up of the spirit in a time of mortal conflict, which constitute the pacifist’s real problem – a problem still incompletely imagined, and still quite unsolved. The causes of war are always falsely represented; its honour is dishonest and its glory meretricious, but the challenge to spiritual endurance, the intense sharpening of all the senses, the vitalising consciousness of common peril for a common end, remain to allure those boys and girls who have just reached the age when love and friendship and adventure call more persistently than at any later time. The glamour may be the mere delirium of fever, which as soon as war is over dies out and shows itself for the will-o’-the-wisp that it is, but while it lasts no emotion known to man seems as yet to have quite the compelling power of this enlarged vitality…
Since those years it has often been said by pacifists… that war creates more criminals than heroes; that, far from developing noble qualities in those who take part in it, it brings out only the worst. If this were altogether true, the pacifist’s aim would be, I think, much nearer of attainment than it is. Looking back upon the psychological processes of us who were very young sixteen years ago, it seems to me that his task – our task – is infinitely complicated by the fact that war, while it lasts, does produce heroism to a far greater extent than it brutalises.

I’m lucky never to have been involved in a war, and have no idea whether I could have coped with the experience at all. But this sums up for me the paradox of how it can bring out qualities of bravery and selflessness in those who might never have been called upon to show them, were it not for the bumbling of politicians and the posturing of dictators. And perhaps that paradox was more painfully sharp in World War 1 than any other war before or since.

Brittain travels around Europe as part of her work, and experiences the festering resentment brought about by the post-war settlements, realising presciently that another war is a possibility, and wondering sadly what sort of cause it was for which those she loved had died.

War and literature

But perhaps one of the important themes of the book is the fight to preserve the life of literature in the face of the rampant destructiveness of war. There is Vera’s own underlying ambition to write, set against her war work, all-encompassing in time, energy and emotion; as well as her almost-missed university education. But more glaringly obvious is the cutting short of thousands of promising, talented lives such as Roland’s. And her brother Edward had musical ability and enthusiasm which he was never able to develop further.

As the War gets under way and Vera’s friends are sent away, letters between them include poems and other writing – their own as well as that of others. In 1915 Vera sends to Roland a leading article she has clipped from The Times newspaper, whose title I have borrowed for this post. In the book she quotes a passage:

A medieval fancy that still lingers, ghost-like, on the more lonely sea-shores, such as that Breton one so tenderly described by Renan, is the legend of the submerged city. It lies out there barely hidden under the waves, and on a still summer eve they say you may hear the music of its Cathedral bells. One day the waters will recede and the city in all its old beauty be revealed again. Might this not serve to figure the actual conditions of literature, in the nobler sense of the term, submerged as that seems to many to be by the high tide of war? Thus submerged it seemed, at any rate, to the most delicate of our literary artists, who was lately accounting for his disused pen to an aggrieved friend. ‘I have no heart,’ he said, ‘for literature in this war; we can only have faith that it is still there under the waters, and will some day re-emerge.’ . . . There is fortunately no truth in the idea of a sunken literature. A function of the spirit, it can never be submerged, or, indeed, as much as touched by war or any other external thing. It is an inalienable possession and incorruptible part of man.

And of course against whatever literature we might imagine never appeared, because of the destruction of those who would have created it, the war generated a whole body of work which would not otherwise have existed – Brittain’s Testament of Youth being one example.

Je suis Charlie

One of the reasons that I was struck by that ‘unsubmerged city’ passage was that I read it at about the same time that the Charlie Hebdo murders were committed in Paris. A hundred years on, we may be dealing with an entirely different situation and a kind of literature undreamt of in those earlier years, but compare these early 20th century sentiments to the ardent faces of the crowds waving pens and pencils in response to the shooting of the journalists and cartoonists.

While moved by the Parisian expressions of feeling, I couldn’t help thinking at the same time of the far greater atrocities committed recently by Islamic extremists in Nigeria and Pakistan – not to mention Syria and Iraq. The Western media devoted far more space to Charlie Hebdo than to these – perhaps understandably, since they are closer to home and threaten our own Western culture. But I hope and expect that the floods of ignorance, fanaticism and brutality will not in the end submerge the metaphorical cities which form the true and established cultures of those other, more distant places. They are surely under a far greater threat than our own.

A Century On

Commuting days until retirement: 91

In this New Year a certain possession which has lain in my sock drawer for quite a while will reach the age of a hundred years. It is an heirloom of sorts, given to me by my grandfather – who also made a brief appearance in an earlier post. To mark the occasion I have dusted down and revised a poem I wrote about this item a few years ago.

 

Nineteen Fifteen

Your gift still nestles in my drawer,
its long stout shoulder strap tangled
in the laundered wool and cotton –
five decades, now, beyond your time.

Some days I halt my morning rush,
forget the clock, pause, handle the case;
read the faded ballpoint on leather;
remembering how you wrote my name,

settled in your usual green armchair
as dusk came down, one October Sunday.
I ease it from its neat, stitched holder,
this solid, purposeful bequest:

Field compass, one; standard issue;
officers, for the use of; stamped
with a government broad arrow
and the date of manufacture.

A century on it works, as then.
I squint, as you did, through the sight,
take bearings on a garden tree unobscured
by battle smoke, try to imagine scenes

of which you never spoke. I only knew
the grandpa who did tricks with coins,
went on all fours, played horses,
rolled trousers when we paddled in the sea.

True, there was that faint, framed bedroom photo:
uniformed, an unfamiliar moustache,
no glasses and unwhitened hair;
the smile that told me it was you.

So why this memento, solemnly passed on?
Did you intend, perhaps, that finding bearings
of my own, I should eventually turn and look –
see the oblique heading that your life once took?

 

1915 Army Compass

Playing horses

Quantum Immortality

Commuting days until retirement: 91

A theme underlying some of my recent posts has been what (if anything) happens to us when we die. I’d like to draw this together with some other thoughts from eight months ago, when I was gazing at the roof of Exeter Cathedral and musing on the possibility of multiple universes. This seemingly wild idea, beloved of fantasy and science fiction authors, is now increasingly taken seriously in university physics departments as a genuine model of reality. The idea of quantum immortality (explained below) is a link between these topics, and it was a book by the American physicist Max Tegmark, Our Mathematical Universe*, that got me thinking about it.

Max Tegmark


I won’t spend time looking at the theory of multiple universes – or the Multiverse – at any length. I did explain briefly in my earlier post how the notion originally arose from quantum physics, and if you have an appetite for more detail there’s plenty in Wikipedia. There are a number of theoretical considerations which lead to the notion of a multiverse: Tegmark sets out four that he supports, with illustrations, in a Scientific American article. I’m just going to focus here on two of them which, as Tegmark and others have speculated, could ultimately be different ways of looking at the same one. I’ll try to explain them very briefly.

The first approach: quantum divergence

It has been known since early in the last century that, where quantum physics allows a range of possible outcomes of some subatomic event, only one of these is actually observed. Experiments (for example the double slit experiment) suggest that the outcome is undetermined until an observation is made, whereupon one of the range of possibilities becomes the actual one that we find. In the phrase which represents the traditional ‘Copenhagen interpretation’ of this puzzle, the wave function collapses. Before this ‘collapse’, all the possibilities are simultaneously real – in the jargon, they exist ‘in superposition’.

But it was Hugh Everett in 1957 who first put forward another possibility which at first sight looks wildly outlandish now, and did so even more at the time: namely that the wave function never does collapse, but each possible outcome is realised in a different universe. It’s as if reality branches, and to observe a specific outcome is actually to find yourself in one of those branched universes.

The second approach: your distant twin

According to the most widely accepted theory of the creation of the universe, a phenomenon known as ‘inflation’ has the mathematical consequence that the cosmic space we now live in is infinite – it goes on for ever. And infinite space allows infinite possibilities. Statistics and probability undergo a radical transformation and start delivering certainties – a certainty, for example, that there is someone just like you, an unimaginable distance away, reading a blog written by someone just like me. And of course the someone who is reading may be getting bored with it and moving on to something else (just like you? – I hope not). But I can reassure myself that for all the doppelgangers out there who are getting bored there are just as many who are really fired up and preparing to click away at the ‘like’ button and write voluminous comments. (You see what fragile egos we bloggers have – in most universes, anyway.)

Pulling them together

But the point is, of course, that once again we have this bewildering multiplicity of possibilities, all of which claim a reality of their own; it all sounds strangely similar to the scenario posited by the first, quantum divergence approach. This similarity has been considered by Tegmark and other physicists, and Tegmark speculates that these two could be simply the same truth about the universe, but just approached from two different angles.

That is a very difficult concept to swallow whole; but for the moment we’ll proceed on the assumption that each of the huge variety of ramified possibilities that could follow from any one given situation does really exist, somewhere. And the differences between those possible worlds can have radical consequences for our lives, and indeed for our very existence. (As a previous post – Fate, Grim or Otherwise – illustrated.) Indeed, perhaps you could end up dead in one but still living in another.

Quantum Russian roulette

So if your existence branches into one universe where you are still living, breathing and conscious, and another where you are not, where are you going to find yourself after that critical moment? Since it doesn’t make sense to suppose you could find yourself dead, we must suppose that your conscious life continues into one of the worlds where you are alive.

This notion has been developed by Tegmark into a rather scary thought experiment (another version of which was also formulated by Hans Moravec some years earlier). Suppose we set up a sort of machine gun that fires a bullet every second. Only it is modified so that, at each second, some quantum mechanism like the decay of an atom determines, with a 50/50 probability, whether the bullet is actually fired. If it is not, the gun just produces a click. Now it’s the job of the intrepid experimenter, willing to take any risk in the cause of his work, to put his head in front of the machine gun.

According to the theory we have been describing, he can only experience those universes in which he will survive. Before placing his head by the gun, he’ll be hearing:
BangClickBangBangClickClickClickBang…  …etc

But with his head in place, it’ll be:
ClickClickClickClickClickClickClickClick…   …and so on.

If he keeps his head there for half a minute, the probability of all thirty outcomes being clicks will be 1 in 2³⁰ – over a billion to one against. But it’s that one-in-a-billion universe, with the sequence of clicks only, that he’ll find himself in. (Spare a thought for the billion-plus universes in which his colleagues are dealing with the outcome, funerals are being arranged and coroners’ courts convened.)
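For anyone who wants to check the arithmetic, here is a minimal Python sketch. The quantum trigger is idealised as a fair coin, and the little simulation at the end is purely illustrative – it takes the outside, colleagues’ point of view, where a surviving run is vanishingly rare:

```python
from random import random

# Each second the quantum mechanism fires the gun with probability 1/2.
# Probability that every one of thirty seconds produces a harmless click:
p_all_clicks = 0.5 ** 30
odds_against = 1 / p_all_clicks  # 2**30, a little over a billion

print(f"P(30 clicks) = {p_all_clicks:.2e}")
print(f"Odds against: about {odds_against:,.0f} to one")

def survives(trials=30):
    """True if every one of `trials` seconds happens to be a click."""
    return all(random() < 0.5 for _ in range(trials))

# From the outside view, 100,000 runs almost never contain a survivor.
runs = 100_000
print(f"Surviving runs out of {runs:,}: {sum(survives() for _ in range(runs))}")
```

The expected number of survivors in 100,000 runs is 100,000 ÷ 2³⁰, or about one in ten thousand – which is precisely why only the subjective, inside view makes the experimenter’s survival seem guaranteed.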

Real immortality

Things become more disconcerting still if we move outside the laboratory into the world at large. At the moment of any given person’s death, obviously things could have been different in such a way that they might have survived that moment. In other words, there is a world in which the person continues to live – and as we have seen, that’s the one they will experience. But if this applies to every death event, then – subjectively – we must continue to live into an indefinitely extended old age. Each of us, on this account, will find herself or himself becoming the oldest person on earth.

A natural reaction to this argument is that, intuitively, it can’t be right. What if someone finds themselves on a railway track with a train bearing down on them and no time to jump out of the way? Or, for that matter, terminally ill? And indeed Tegmark points out that, typically, death is the ultimate upshot of a series of non-fatal events (cars swerving, changes in body cells), rather than a single, once-and-for-all, dead-or-alive event. So perhaps we arrive at this unsettling conclusion only by considerably oversimplifying the real situation.

But it seems to me that what is compelling about considerations of this sort is that they do lead us to take a bracing, if slightly unnerving, walk on the unstable, crumbling cliff-edge which forms the limits of our knowledge. Which always leads me to the suspicion, as it did for JBS Haldane, that the world is ‘not only queerer than we suppose, but queerer than we can suppose’. And that’s a suitable thought on which to end this blogging year.


*Tegmark, Max, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Allen Lane/Penguin, 2014

Truth and Deception

Commuting days until retirement: 94

My very first post on this blog, nearly two years ago, was about Alan Turing, and in the intervening time his public profile has continued to grow. That post still appears to be one of my most frequently read ones (which isn’t saying much) – probably because there are so many searches for his name now.  You may be aware that a dramatised film of his life was recently released – The Imitation Game – so of course I went to see it.

Warning: some spoilers follow. If you don’t want to read on, then I recommend seeing the film, but also taking a monster pinch of salt with you.

Cumberbatch as Turing

Benedict Cumberbatch as Alan Turing (still from film)

My expectations weren’t too high after seeing the trailer, and the words splashed across the screen: ‘It took a man with secrets to break the biggest one.’ This suggests elements of the Hollywood-style schlocky mindset that we are all familiar with. However it looked as if Benedict Cumberbatch’s portrayal of Turing had some studied integrity, as indeed I found it did when I saw the film. But I suspect the celluloid Turing had far more exaggerated autistic traits than the real one: there’s no doubt that he was shy and socially awkward, but surely not as indifferent to the emotions of others as he was made to appear.

As I wrote in that earlier post, his impulses were more often generous and considerate; perhaps an example of this was his breaking off of his engagement with Joan Clarke, his fellow cryptographer at Bletchley, feeling that as a gay man he wouldn’t be able to maintain an adequate marriage. What private conversation took place we will of course never know,  but the film version made it more abrupt and brutal than I would have imagined it to be.

Knightley as Clarke

Keira Knightley as Joan Clarke (still from film)

The real Joan Clarke

Joan Clarke from a contemporary photo

Turing’s biographer, Andrew Hodges, has criticised the film on the portrayal of Joan – played sweetly and demurely by the willowy Keira Knightley. I thought it was a good performance, given her brief (but should someone have told her that a double-first Cambridge maths graduate is not likely to pronounce the name of the great Swiss mathematician Euler as “Yooler” rather than “Oiler”?). I’m getting pedantic – but one of Hodges’ points was that Joan Clarke was stocky and bespectacled – not the ideal of femininity, but as such someone whom Turing would have found a congenial and comfortable companion, as he undoubtedly did. There’s some insight into their relationship in an interview she gave to BBC Horizon in 1992, four years before her death (see video clip at the bottom of this post).

I did enjoy the film, although the beautifully realised period settings were marred – as so often – by a cloth ear for the language and idioms of the time. Would a fairly patrician Englishman in the 1940s (Commander Denniston/Charles Dance) have said “You’re fired” rather than something like “I’m sacking you”? And would Joan Clarke have talked about “fixing” Turing’s lamb in an imagined future marriage, rather than “cooking” it?

I seem to be turning into one of those people who get their biggest kicks from looking for mistakes in movies; but what the hell – it’s fun, and these are the sorts of things that irritate me. And indeed I became increasingly impatient with the more serious departures from reality. For a start, there’s no evidence at all that anyone attempted to fire Turing – this was just a ladle of dramatic tension clumsily poured in to spice things up. And so it went on.

“Based on a true story”, we were told at the start. So there’s an expectation of some invented scenes and dialogue, and minor deviations from fact to help the story work as a film. If I had known nothing about Turing or his Bletchley work before seeing the film, I would now believe that:

  • Apart from some clerical help by the Wrens, the entire wartime decryption operation was down to four men, one woman and a single machine which Turing designed and built almost single-handedly.
  • He named the machine ‘Christopher’ after his close school friend who had died, and the machine was a sort of emotional substitute after this loss (as was a computer he later designed in Manchester).
  • Turing was effectively blackmailed by the Soviet spy John Cairncross into keeping quiet about Cairncross’s activities.
  • At the time of his arrest for gross indecency in 1952 he threw the Official Secrets Act to the wind and told the interrogating officer all about his wartime work. (This is used in the narrative as a framing device.)

These range from the absurd, through the highly improbable, to the patently false; and I have only picked out a few of the most egregious examples. Just to examine one of them: it’s true that the spy Cairncross was at Bletchley and passed details of decrypts to Stalin’s Russia. But these were not Enigma decrypts, but those of the later, more complex German teleprinter cipher known as ‘Tunny’, broken by Bill Tutte and decrypted by the ‘Colossus’ computer designed by Tommy Flowers – an operation which of course owed much to Turing. It’s unlikely that Cairncross would have met Turing or had any significant contact with him; but how the thought of livening up the dull old truth with a bonus spy gets those film writers’ pulses racing! According to the film, the good old Brits of course know what Cairncross is up to, and only let him release the material that suits their purposes. (How on earth they manage this without his knowledge, given that the character in the film has full access to the Enigma decrypts, is never explained.) In reality, Cairncross’s activities were not discovered until the 1950s.

Rebuilt Turing Bombe

The replica bombe in the Bletchley museum (Ted Coles / Wikimedia)

And of course there wasn’t just one machine (they were actually known as ‘bombes’): eventually some 200 of them were installed, at Bletchley and elsewhere, to get through all the work. I don’t think any of them was called ‘Christopher’.

As is customary with fictionalised true stories, we get the ‘what happened afterwards’ follow-up facts on the screen before the credits – but even these are sloppily inaccurate: the Bletchley code breaking activities, we are told, were kept secret for more than 50 years. The makers have apparently failed to notice that the Hodges biography on which they claim to base the film, and which describes Enigma in great detail, was published in 1983, less than 40 years after the war. And the existence of the Bletchley code breaking operation was in fact made public in the 1970s, nearly a decade before that.

There I go again; but the point at issue is whether all the distortions of the truth can be justified. The film’s producer Teddy Schwarzman has been quoted as saying that, while the makers did not want to fabricate events, there are some ‘creative liberties’. Well, that’s one way of putting it. More, I would say, that many important truths were distorted in the hope of pushing up box office receipts. I haven’t attempted to count the fabricated events, but I doubt whether the fingers of both hands would be sufficient.

I don’t imagine that Turing himself, with his mathematician’s love of detail, would have been very happy with this portrayal. The truth of Turing’s life contained drama enough – it was unnecessary to daub the picture with splotches of gaudy dramatic invention and make such clumsy attempts to drag in a spurious emotional subtext. In a different way,  the film was as disobliging to his memory as the account by his prejudiced brother John which I described in my first blog post.

The Bletchley operation was one of the greatest examples in history of overcoming barriers to discovering the truth, as well as in helping to deceive an enemy. Unfortunately what we were given here was too little truth, and a generous helping of deception.

Another Antiversary

Commuting days until retirement: 100

I thought I had invented the word, but on googling, I find there’s nothing new under the cybersun. Antiversary, according to my search, has been adopted to mean the anniversary of a split-up, or a divorce, rather than a wedding. Well, I’m going to stick with the meaning I coined back in March 2013 at the time of my 500 day antiversary: it’s the celebration of an event a given time before it happens, rather than after*. And here I am at my 100th. Days, of course, not years; the prospect of another hundred years of commuting would find me under the train rather than in it.

Napoleon

Napoleon at the start of his ‘100 days’

“100 days” always reminds me of a prime minister who, before the start of my working career, gave himself that time for ‘dynamic changes to get Britain moving again’. This was Harold Wilson, in 1964, whom I am old enough to remember quite well. I expect he was consciously taking a leaf out of the book of previous politicians – but presumably not the one who was originally associated with the phrase. This, it seems, was Napoleon, who returned from exile in Elba to rule France again on 20 March 1815. His cent jours as Emperor ended after defeat at the Battle of Waterloo.

It was in the USA that the yardstick of 100 days was taken up as a way of measuring the effectiveness of presidents on taking up office, and the first, and most successful, was Franklin D. Roosevelt, whose ‘New Deal’ began America’s recovery from the depression. Inevitably, later presidents, and indeed Harold Wilson in Britain, did not bask in such widespread approval after their own inaugural 100 days. Fortunately I don’t have to fight a battle or put a country back on its feet, but just coast gently towards the day when the work really begins: house and garden to be taken in hand, years of accumulated debris to be cleared, vegetables to be grown, blog posts – and perhaps other things – to be written. How much time I will have for all that is, you might say, yet to be determined. And at almost the same time as my 100 days ends, that of another government in the UK will start. What sort of government that will be, and whether indeed it will last that long, is anyone’s guess at the moment.

So now the much-anticipated moment has moved closer, and become certain; we’ve taken the financial advice and started to make the arrangements, and I have made my intentions public at work. I mentioned in a recent post how broaching the subject of retirement might be a bit of a conversation-stopper in terms of company culture. When I made my announcement at an informal meeting, I felt a sort of collective frisson shiver through the room, as younger colleagues suddenly sensed the proximity of something thought of as remote and unreal. “Congratulations!”, said one, with a sort of well-meant awkwardness.

I’m not sure whether I deserve to be congratulated on anything, but it’s obvious to any reader of this blog how much I savour the prospect of retiring. However, after nearly fifty years of working life, with few breaks, there’s a rather uncomfortable feeling of cutting myself adrift from my source of sustenance. The only time I have known the branch I was sitting on to have been sawn off was when the company I was working for folded; but now here I am, busy doing the sawing myself. Can that be sensible?

Safety net

My pension, seen as I hurtle towards it
(Ed Berg/Wikimedia)

It’s as if I’m together with fellow workers in a building (as of course I literally am) but in the metaphorical building I’m thinking of, some, leaving for other jobs, simply take the lift down to the ground floor and walk sensibly out of the main door into another building. I’m a bit concerned about the way these suicide-related images keep occurring to me, but here’s the only retiree I’m aware of at my workplace – me – gaily jumping out of the office window on the assumption that his pension safety net is held, stretched out, by strong hands far below.

Maybe it’s the prospect of doing nothing while being paid – it doesn’t seem real to me. Yes, I know it’s more that I’ve actually built up a fund over time by making contributions, etc etc – but there’s still an eerie unreality about the thought of spending the day doing as you please and still getting money, even if rather less than before.

So a hundred more commuting days left, which I celebrate here with strangely mixed feelings. No snatch of doggerel to celebrate this antiversary, as there was for the 500th – for one thing, I exhausted all the possible rhyme words for ‘antiversary’. For a while now I haven’t been inspired to express myself more seriously in poetry, either; but perhaps retirement will give me a chance to look out that muse from wherever she’s hiding. But I am planning to revive one poem for the New Year.

And I’ll only add that, while engrossed in tapping this into my tablet on the train, I failed to notice the station I should have changed at, and had to get out at the next stop to go back and catch a later connection. I’m not even a competent commuter any more. Definitely time to think about stopping.


*My son suggests that the word should be ‘anteversary’ (ante = before). So there we are. I can have my word and the divorcees can keep theirs.

The Mathematician and the Surgeon

Commuting days until retirement: 108

After my last post, which, among other things, compared differing attitudes to death and its aftermath (or absence of one) on the part of Arthur Koestler and George Orwell, here’s another fruitful comparison. It seemed to arise by chance from my next two commuting books, and each of the two people I’m comparing, as before, has his own characteristic perspective on that matter. Unlike my previous pair both could loosely be called scientists, and in each case the attitude expressed has a specific and revealing relationship with the writer’s work and interests.

The Mathematician

The first writer, whose book I came across by chance, has been known chiefly for mathematical puzzles and games. Martin Gardner was born in Oklahoma USA in 1914; his father was an oil geologist, and it was a conventionally Christian household. Although not trained as a mathematician, and going into a career as a journalist and writer, Gardner developed a fascination with mathematical problems and puzzles which informed his career – hence the justification for his half of my title.

Martin Gardner

Gardner as a young man (Wikimedia)

This interest continued to feed the constant books and articles he wrote, and he was eventually asked to write the Scientific American column Mathematical Games, which ran from 1956 until the mid 1980s, and for which he became best known; his enthusiasm and sense of fun shine through the writing of these columns. At the same time he was increasingly concerned with the many types of fringe beliefs that had no scientific foundation, and was a founder member of CSICOP, the organisation dedicated to the exposing and debunking of pseudoscience. Back in February last year I mentioned one of its other well-known members, the flamboyant and self-publicising James Randi. By contrast, Gardner was mild-mannered and shy, averse from public speaking and never courting publicity. He died in 2010, leaving behind him many admirers and a two-yearly convention – the ‘Gathering for Gardner‘.

Before learning more about him recently, and reading one of his books, I had known his name from the Mathematical Games column, and heard of his rigid rejection of things unscientific. I imagined some sort of skinflint atheist, probably with a hard-nosed contempt for any fanciful or imaginative leanings – however sane and unexceptionable they might be – towards what might be thought of as things of the soul.

How wrong I was. His book that I’ve recently read, The Whys of a Philosophical Scrivener, consists of a series of chapters with titles of the form ‘Why I am not a…’ and he starts by dismissing solipsism (who wouldn’t?) and various forms of relativism; it’s a little more unexpected that determinism also gets short shrift. But in fact by this stage he has already declared that

I myself am a theist (as some readers may be surprised to learn).

I was surprised, and also intrigued. Things were going in an interesting direction. But before getting to the meat of his theism he spends a good deal of time dealing with various political and economic creeds. The book was written in the mid 80s, not long before the collapse of communism, which he seems to be anticipating (Why I am not a Marxist). But equally he has little time for Reagan or Thatcher, laying bare the vacuity of their over-simplistic political nostrums (Why I am not a Smithian).

Soon after this, however, he is striding into the longer grass of religious belief: Why I am not a Polytheist; Why I am not a Pantheist – so what is he? The next chapter heading is a significant one: Why I do not Believe the Existence of God can be Demonstrated. This is the key, it seems to me, to Gardner’s attitude – one to which I find myself sympathetic. Near the beginning of the book we find:

My own view is that emotions are the only grounds for metaphysical leaps.

I was intrigued by the appearance of the emotions in this context: here is a man whose day job is bound up with his fascination for the powers of reason, but who is nevertheless acutely conscious of the limits of reason. He refers to himself as a ‘fideist’ – one who believes in a god purely on the basis of faith, rather than any form of demonstration, either empirical or through abstract logic. And if those won’t provide a basis for faith, what else is there but our feelings? This puts Gardner nicely at odds with the modish atheists of today, like Dawkins, who never tires of telling us that he too could believe if only the evidence were there.

But at the same time he is squarely in a religious tradition which holds that ultimate things are beyond the instruments of observation and logic that are so vital to the secular, scientific world of today. I can remember my own mother – unlike Gardner a conventional Christian believer – being very definite on that point. And it reminds me of some of the writings of Wittgenstein; Gardner does in fact refer to him, in the context of the free-will question. I’ll let him explain:

A famous section at the close of Ludwig Wittgenstein’s Tractatus Logico-Philosophicus asserts that when an answer cannot be put into words, neither can the question; that if a question can be framed at all, it is possible to answer it; and that what we cannot speak about we should consign to silence. The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

This mirrors some of my own thoughts about that particular philosophical problem – a far more slippery one than those on either side of it often claim, in my opinion (I think that may be a topic for a future post). I can add that Gardner was also on the unfashionable side of the question which came up in my previous post – that of an afterlife; and again he holds this out as a matter of faith rather than reason. He explores the philosophy of personal identity and continuity in some detail, always concluding with the sentiment ‘I do not know. Do not ask me.’ His underlying instinct seems to be that there has to be something more than our bodily existence, given that our inner lives are so inexplicable from the objective point of view – so much more than our physical existence. ‘By faith, I hope and believe that you and I will not disappear for ever when we die.’ By contrast, Arthur Koestler, you may remember, wrote in his suicide note of ‘tentative hopes for a depersonalised afterlife’ – but, as it turned out, these hopes were based partly on the sort of parapsychological evidence which was anathema to Gardner.

And of course Gardner was acutely aware of another related mystery – that of consciousness, which he finds inseparable from the issue of free will:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will… Nor can I imagine myself having free will without being conscious.

He expresses utter dissatisfaction with the approach of arch-physicalists such as Daniel Dennett, who, as he says, ‘explains consciousness by denying that it exists’. (I attempted to puncture this particular balloon in an earlier post.)

Martin Gardner

Gardner in later life (Konrad Jacobs / Wikimedia)

Gardner places himself squarely within the ranks of the ‘mysterians’ – a deliberately derisive label applied by their opponents to those thinkers who conclude that these matters are mysteries which are probably beyond our capacity to solve. Among their ranks is Noam Chomsky: Gardner cites a 1983 interview with the grand old man of linguistics, in which he expresses his attitude to the free will problem (scroll down to see the relevant passage).

The Surgeon

And so to the surgeon of my title, and if you’ve read one of my other blog posts you will already have met him – he’s a neurosurgeon named Henry Marsh, and I wrote a post based on a review of his book Do No Harm. Well, now I’ve read the book, and found it as impressive and moving as the review suggested. Unlike many in his profession, Marsh is a deeply humble man who is disarmingly honest in his account about the emotional impact of the work he does. He is simultaneously compelled towards, and fearful of, the enormous power of the neurosurgeon both to save and to destroy. His narrative swings between tragedy and elation, by way of high farce when he describes some of the more ill-conceived management ‘initiatives’ at his hospital.

A neurosurgical operation

A neurosurgical operation (Mainz University Medical Centre)

The interesting point of comparison with Gardner is that Marsh – a man who daily manipulates what we might call physical mind-stuff – the brain itself – is also awed and mystified by its powers:

There are one hundred billion nerve cells in our brains. Does each one have a fragment of consciousness within it? How many nerve cells do we require to be conscious or to feel pain? Or does consciousness and thought reside in the electrochemical impulses that join these billions of cells together? Is a snail aware? Does it feel pain when you crush it underfoot? Nobody knows.

The same sense of mystery and wonder as Gardner’s; but approached from a different perspective:

Neuroscience tells us that it is highly improbable that we have souls, as everything we think and feel is no more or no less than the electrochemical chatter of our nerve cells… Many people deeply resent this view of things, which not only deprives us of life after death but also seems to downgrade thought to mere electrochemistry and reduces us to mere automata, to machines. Such people are profoundly mistaken, since what it really does is upgrade matter into something infinitely mysterious that we do not understand.

Henry Marsh

Henry Marsh

This of course is the perspective of a practical man – one who is emphatically working at the coal face of neurology, and far more familiar with the actual material of brain tissue than armchair speculators like me. While I was reading his book, although deeply impressed by this man’s humanity and integrity, what disrespectfully came to mind was a piece of irreverent humour once told to me by a director of a small company I used to work for which was closely connected to the medical industry. It was a sort of a handy cut-out-and-keep guide to the different types of medical practitioner:

Surgeons do everything and know nothing. Physicians know everything and do nothing. Psychiatrists know nothing and do nothing.  Pathologists know everything and do everything – but the patient’s dead, so it’s too late.

Grossly unfair to all of them, of course, but nonetheless funny, and perhaps containing a certain grain of truth. Marsh, belonging to the first category, perhaps embodies some of the aversion from dry theory that this caricature hints at: what matters to him ultimately, as a surgeon, is the sheer down-to-earth physicality of his work, guided by the gut instincts of his humanity. We hear from him about some members of his profession who seem aloof from the enormity of the dangers it embodies, and seem able to proceed calmly and objectively with what he sees almost as the detachment of the psychopath.

Common ground

What Marsh and Gardner seem to have in common is the instinct that dry, objective reasoning only takes you so far. Both trust the power of their own emotions, and their sense of awe. Both, I feel, are attempting to articulate the same insight, but from widely differing standpoints.

Two passages, one from each book, seem to crystallize both the similarities and differences between the respective approaches of the two men, both of whom seem to me admirably sane and perceptive, if radically divergent in many respects. First Gardner, emphasising in a Wittgensteinian way how describing how things appear to be is perhaps a more useful activity than attempting to pursue any ultimate reasons:

There is a road that joins the empirical knowledge of science with the formal knowledge of logic and mathematics. No road connects rational knowledge with the affirmations of the heart. On this point fideists are in complete agreement. It is one of the reasons why a fideist, Christian or otherwise, can admire the writings of logical empiricists more than the writings of philosophers who struggle to defend spurious metaphysical arguments.

And now Marsh – mystified, as we have seen, as to how the brain-stuff he manipulates daily can be the seat of all experience – having a go at reading a little philosophy in the spare time between sessions in the operating theatre:

As a practical brain surgeon I have always found the philosophy of the so-called ‘Mind-Brain Problem’ confusing and ultimately a waste of time. It has never seemed a problem to me, only a source of awe, amazement and profound surprise that my consciousness, my very sense of self, the self which feels as free as air, which was trying to read the book but instead was watching the clouds through the high windows, the self which is now writing these words, is in fact the electrochemical chatter of one hundred billion nerve cells. The author of the book appeared equally amazed by the ‘Mind-Brain Problem’, but as I started to read his list of theories – functionalism, epiphenomenalism, emergent materialism, dualistic interactionism or was it interactionistic dualism? – I quickly drifted off to sleep, waiting for the nurse to come and wake me, telling me it was time to return to the theatre and start operating on the old man’s brain.

I couldn’t help noticing that these two men – one unconventionally religious and the other not religious at all – seem between them to embody those twin traditional pillars of the religious life: faith and works.

On Being Set Free

Commuting days until retirement: 133

The underlying theme of this blog is retirement, and it will be fairly obvious to most of my readers by now – perhaps indeed to all three of you – that I’m looking forward to it. It draws closer; I can almost hear the ‘Happy retirement’ wishes from colleagues – some expressed perhaps through ever-so-slightly gritted teeth as they look forward to many more years in harness, while I am put out to graze. But of course there’s another side to that: they will also be keeping silent about the thought that being put out to graze also carries with it the not too distant prospect of the knacker’s yard – something they rarely think about in relation to themselves.

Because in fact the people I work with are generally a lot younger than I am – in a few cases younger than my children. No one in my part of the business has ever actually retired, as opposed to leaving for another job. My feeling is that to stand up and announce that I am going to retire will be to introduce something alien and faintly distasteful into the prevailing culture, like telling everyone about your arthritis at a 21st birthday party.

The revolving telescope

For most of my colleagues, retirement, like death, is something that happens to other people. In my experience, it’s around the mid to late 20s that such matters first impinge on the consciousness – indistinct and out of focus at first, something on the edge of the visual field. It’s no coincidence, I think, that it’s around that same time that one’s perspective on life reverses, and the general sense that you’d like to be older and more in command of things starts to give way to an awareness of vanishing youth. The natural desire for what is out of reach reorientates its outlook, swinging through 180 degrees like a telescope on a revolving stand.

But I find that, having reached the sort of age I am now, it doesn’t do to turn your back on what approaches. It’s now sufficiently close that it is the principal factor defining the shape of the space you have available in which to organise your life, and you do much better not to pretend it isn’t there, but to be realistically aware. We have all known those who nevertheless keep their backs resolutely turned, and they often cut somewhat pathetic figures: a particular example I remember was a man (who would almost certainly be dead by now) who didn’t seem to accept his failing prowess at tennis as an inevitable corollary of age, but rather as a series of inexplicable failures that he should blame himself for. And there are all those celebrities you see with skin stretched ever tighter over their facial bones as they bring in the friendly figure of the plastic surgeon to obscure the view of where they are headed.

Perhaps Ray Kurzweil, who featured in my previous post, is another example, with his 250 supplement tablets each day and his faith in the abilities of technology to provide him with some sort of synthetic afterlife.  Given that he has achieved a generous measure of success in his natural life, he perhaps has less need than most of us to seek a further one; but maybe it works the other way, and a well-upholstered ego is more likely to feel a continued existence as its right.

Enjoying the view

Old and Happy

Happiness is not the preserve of the young (Wikimedia Commons)

But the fact is that for most of us the impending curtailment of our time on earth brings a surprising sense of freedom. With nothing left to strive for – no anxiety about whether this or that ambition will be realised – some sort of summit is achieved. The effort is over, and we can relax and enjoy the view. More than one survey has found that people in their seventies are nowadays collectively happier than any other age group: here are reports of three separate studies between 2011 and 2014, in Psychology Today, The Connexion, and the Daily Mail. Those adverts for pension providers and so on, showing apparently radiant wrinkly couples feeding the ducks with their grandchildren, aren’t quite as wide of the mark as you might think.

Speaking for myself, I’ve never been excessively troubled by feelings of ambition, and have probably enjoyed a relatively stress-free, if perhaps less prosperous, life as a result. And the prospect of an existence where I am no longer even expected to show such aspirations is part of the attraction of retirement. But of course there remain those for whom the fact of extinction gives rise to wholly negative feelings, but who are at the same time brave enough to face it fair and square, without any psychological or cosmetic props. A prime example in recent literature is Philip Larkin, who seems to make frequent appearances in this blog. While famously afraid of death, he wrote luminously about it. Here, in his poem The Old Fools, he evokes images of the extreme old age which he never, in fact, reached himself:

Philip Larkin

Philip Larkin (Fay Godwin)

Perhaps being old is having lighted rooms
Inside your head, and people in them, acting.
People you know, yet can’t quite name; each looms
Like a deep loss restored, from known doors turning,
Setting down a lamp, smiling from a stair, extracting
A known book from the shelves; or sometimes only
The rooms themselves, chairs and a fire burning,
The blown bush at the window, or the sun’s
Faint friendliness on the wall some lonely
Rain-ceased midsummer evening.

Dream and reality seem to fuse at this ultimate extremity of conscious experience as Larkin portrays it; and it’s the snuffing out of consciousness that a certain instinct in us finds difficult to take – indeed, to believe in. Larkin, by nature a pessimist, certainly believed in it, and dreaded it. But cultural traditions of many kinds have not accepted extinction as inevitable: we are not obliviously functioning machines but the subjects of experiences like the ones Larkin writes about. As such we have immortal souls which transcend the gross physical world, it has been held – so why should we not survive death? (Indeed, according to some creeds, why should we not have existed before birth?)

Timid hopes

Well, whatever immortal souls might be, I find it difficult to make out a case for individual survival, and this is perhaps the majority view in the secular culture I inhabit. It seems pretty clear to me that my own distinguishing characteristics are indissolubly linked to my physical body: damage to the brain, we know, can change the personality, and perhaps rob us of our memories and past experience – the things which most quintessentially define us as individuals. And yet, even though our consciousness can be temporarily wiped out by sleep or anaesthetics, there remains the sense (for me, anyway) that, since we have no notion whatever of how to account for experience in physical terms, some aspect of it could be independent of our bodily existence.

You may or may not accept both of these beliefs – the temporality of the individual and the transcendence of consciousness. But if you do, then the possibility seems to arise of some kind of disembodied, collective sentience, beyond our normal existence. And this train of thought always reminds me of the writer Arthur Koestler, who died by suicide in 1983 at the age of 77. An outspoken advocate of voluntary euthanasia, he’d been suffering in later life from Parkinson’s disease, and had then contracted a progressive, incurable form of leukaemia. His suicide note (which turned out to have been written several months before his death) included the following passage:

I wish my friends to know that I am leaving their company in a peaceful frame of mind, with some timid hopes for a de-personalised after-life beyond due confines of space, time and matter and beyond the limits of our comprehension. This ‘oceanic feeling’ has often sustained me at difficult moments, and does so now, while I am writing this.

Death sentence

In fact Koestler had, since he was quite young, been more closely acquainted with death than most of us. Born in Hungary, during his earlier career as a journalist and political writer he twice visited Spain during its civil war in the 1930s. He made his first visit as an undercover investigator of the Fascist movement, being himself at that time an enthusiastic supporter of communism. A little later he returned to report from the Republican side, but was in Malaga when it was captured by Fascist troops. By now Franco had come to know of his anti-fascist writing, and he was imprisoned in Seville under sentence of death.

Koestler portrayed on the cover of the book

In his account of this experience, Dialogue with Death, he describes how prisoners would try to block their ears to avoid the nightly sound of a telephone call to the prison, when a list of prisoner names would be dictated and the men later led out and shot. His book is illuminating on the psychology of these conditions, and the violent emotional ups and downs he experienced:

One of my magic remedies was a certain quotation from a certain work of Thomas Mann’s; its efficacy never failed. Sometimes, during an attack of fear, I repeated the same verse thirty or forty times, for almost an hour, until a mild state of trance came on and the attack passed. I knew it was the method of the prayer-mill, of the African tom-tom, of the age-old magic of sounds. Yet in spite of my knowing it, it worked…
I had found out that the human spirit is able to call upon certain aids of which, in normal circumstances, it has no knowledge, and the existence of which it only discovers in itself in abnormal circumstances. They act, according to the particular case, either as merciful narcotics or ecstatic stimulants. The technique which I developed under the pressure of the death-sentence consisted in the skilful exploitation of these aids. I knew, by the way, that at the decisive moment when I should have to face the wall, these mental devices would act automatically, without any conscious effort on my part. Thus I had actually no fear of the moment of execution; I only feared the fear which would precede that moment.

That there are emotional ‘ups’ at all seems surprising, but later he expands on one of them:

Often when I wake at night I am homesick for my cell in the death-house in Seville and, strangely enough, I feel that I have never been so free as I was then. This is a very strange feeling indeed. We lived an unusual life on that patio; the constant nearness of death weighed down and at the same time lightened our existence. Most of us were not afraid of death, only of the act of dying; and there were times when we overcame even this fear. At such moments we were free – men without shadows, dismissed from the ranks of the mortal; it was the most complete experience of freedom that can be granted a man.

Perhaps, in a diluted, much less intense form, the happiness of the over 70s revealed by the surveys I mentioned has something in common with this.

Koestler was possibly the only writer of the front rank ever to be held under sentence of death, and the experience informed his novel Darkness at Noon. It is the second in a trilogy of politically themed novels, and its protagonist, Rubashov, has been imprisoned by the authorities of an unnamed totalitarian state which appears to be a very thinly disguised portrayal of Stalinist Russia. Rubashov has been one of the first generation of revolutionaries in a movement which has hardened into an authoritarian despotism, and its leader, referred to only as ‘Number One’, is apparently eliminating rivals. Worn down by the interrogation conducted by a younger, hard-line apparatchik, Rubashov comes to accept that he has somehow criminally acted against ‘the revolution’, and eventually goes meekly to his execution.

Shades of Orwell

By the time of writing the novel, Koestler, like so many intellectuals of that era, had made the journey from an initial enthusiasm for Soviet communism to disillusion with, and opposition to, it. And reading Darkness at Noon, I was of course constantly reminded of Orwell’s Nineteen Eighty-Four, and the capitulation of Winston Smith as he comes to love Big Brother. Darkness at Noon predates 1984 by nine years, and nowadays has been somewhat eclipsed by Orwell’s much better-known novel. The two authors had met briefly during the Spanish civil war, where Orwell was actively involved in fighting against fascism, and met again and discussed politics around the end of the Second World War. It seems clear that Orwell, having written his own satire on the Russian revolution in Animal Farm, eventually wrote 1984 under the conscious influence of Koestler’s novel. But they were of course very different characters: you get the feeling that to Orwell, with his both-feet-on-the-ground Englishness, Koestler might have seemed a rather flighty and exotic creature.

Orwell (aka Eric Blair) from the photo on his press pass (NUJ/Wikimedia Commons)

In fact, during the period between the publications of Darkness at Noon and 1984, Orwell wrote an essay on Arthur Koestler – probably while he was still at work on Animal Farm. His view of Koestler’s output is mixed: on one hand he admires Koestler as a prime example of the continental writers on politics whose views have been forged by hard experience in this era of political oppression – as opposed to English commentators who merely strike attitudes towards the turmoil in Europe and the East, while viewing it from a relatively safe distance. Darkness at Noon he regards as a ‘masterpiece’ – its common ground with 1984 is not, it seems, a coincidence. (Orwell’s review of Darkness at Noon in the New Statesman is also available.)

On the other hand he finds much of Koestler’s work unsatisfactory, a mere vehicle for his aspirations towards a better society. Orwell quotes Koestler’s description of himself as a ‘short-term pessimist’, but also detects a utopian undercurrent which he feels is unrealistic. His own views are expressed as something more like long-term pessimism, doubting whether man can ever replace the chaos of the mid-twentieth century with a society that is both stable and benign:

Nothing is in sight except a welter of lies, hatred, cruelty and ignorance, and beyond our present troubles loom vaster ones which are only now entering into the European consciousness. It is quite possible that man’s major problems will NEVER be solved. But it is also unthinkable! Who is there who dares to look at the world of today and say to himself, “It will always be like this: even in a million years it cannot get appreciably better?” So you get the quasi-mystical belief that for the present there is no remedy, all political action is useless, but that somewhere in space and time human life will cease to be the miserable brutish thing it now is. The only easy way out is that of the religious believer, who regards this life merely as a preparation for the next. But few thinking people now believe in life after death, and the number of those who do is probably diminishing.

In death as in life

Orwell’s remarks neatly return me to the topic I have diverged from. If we compare the deaths of the two men, they seem to align with their differing attitudes in life. Both died in the grip of a disease – Orwell succumbing to tuberculosis after his final, gloomy novel was completed, and Koestler escaping his leukaemia by suicide but still expressing ‘timid hopes’.

After the war Koestler had adopted England as his country and henceforth wrote only in English – most of his previous work had been in German. In being allowed a longer life than Orwell to pursue his writing, he had moved on from politics to write widely in philosophy and the history of ideas, although never really becoming a member of the intellectual establishment. These are areas which you feel would always have been outside the range of the more down-to-earth Orwell, who was strongly moral, but severely practical. Orwell goes on to say, in the essay I quoted: ‘The real problem is how to restore the religious attitude while accepting death as final.’ This so much reflects his attitudes – he habitually enjoyed attending Anglican church services, but without being a believer. He continues, epigrammatically:

Men can only be happy when they do not assume that the object of life is happiness. It is most unlikely, however, that Koestler would accept this. There is a well-marked hedonistic strain in his writings, and his failure to find a political position after breaking with Stalinism is a result of this.

Again, we strongly feel the tension between their respective characters: Orwell, with his English caution, and Koestler with his continental adventurism. In fact, Koestler had a reputation as something of an egotist and aggressive womaniser. Even his suicide reflected this: it was a double suicide with his third wife, who was over 20 years younger than he was and in good health. Her accompanying note explained that she couldn’t continue her life without him. Friends confirmed that she had entirely subjected her life to his: but to what extent this was a case of bullying, as some claimed, will never be known.

Of course there was much common ground between the two men: both were always on the political left, and both, as you might expect, were firmly opposed to capital punishment: anyone who needs convincing should read Orwell’s autobiographical essay A Hanging. And Koestler wrote a more prosaic piece – a considered refutation of the arguments for judicial killing – in his book Reflections on Hanging; it was written in the 1950s, when, on Koestler’s own account, some dozen hangings were occurring in Britain each year.

But while Orwell faced his death stoically, Koestler continued his dalliance with the notion of some form of hereafter; you feel that, as with Kurzweil, a well-developed ego did not easily accept the thought of extinction. In writing this post, I discovered that he had been one of a number of intellectual luminaries who contributed to a collection of essays under the title Life after Death, published in the 1970s. Keen to find a more detailed statement of his views, I actually found his piece rather disappointing. First I’ll sketch in a bit of background to clarify where I think he is coming from.

Back in Victorian times there was much interest in evidence of ‘survival’ – seances and table-rapping sessions were popular, and fraudulent mediums were prospering. Reasons for this are not hard to find: traditional religion, while strong, faced challenges. Steam-powered technology was burgeoning, the world increasingly seemed to be a wholly mechanical affair, and Darwinism had arrived to encourage the trend towards materialism. In 1882 the Society for Psychical Research was formed, becoming a focus both for those who were anxious to subvert the materialist world view, and those who wanted to investigate the phenomena objectively and seek intellectual clarity.

But it wasn’t long before the revolution in physics, with relativity and quantum theory, exploded the mechanical certainties of the Victorians. At the same time millions suffered premature deaths in two world wars, giving ample motivation to believe that those lost somehow still existed and could maybe even be contacted.

Koestler in later life (Eric Koch/Wikimedia Commons)

This seems to be the background against which Koestler’s ideas about the possibility of an afterlife had developed. He leans a lot on the philosophical writings of the quantum physicist Erwin Schrödinger, and seeks to base a duality of mind and matter on the wave/particle duality of quantum theory. There’s a lot of talk about psi fields and suchlike – the sort of terminology which was already sounding dated at the time he was writing. The essay seemed to me to be rather backward-looking, sitting more comfortably with the inchoate fringe beliefs of the mid 20th century than the confident secularism of Western Europe today.

A rebel to the end

I think Koestler was well aware of the way things were going, but with characteristic truculence reacted against them. He wrote a good deal on topics that clash with mainstream science, such as the significance of coincidence, and in his will used his legacy to establish a department of parapsychology, which was set up at Edinburgh University, and still exists.

This was clearly a deliberate attempt to cock a snook at the establishment, and while he was not an attractive character in many ways I do find this defiant stance makes me warm to him a little. While I am sure I would have found Orwell the more decent and congenial to know personally, Koestler is the more intellectually exciting of the two. I think Orwell might have found Koestler’s notion of the sense of freedom when facing death difficult to understand – but this might have changed had he survived into his seventies. And in a general sense I share Koestler’s instinct that in human consciousness there is far more to understand than we have yet been able to, as it were, get our minds around.

Retirement, for me, will certainly bring freedom – not only freedom from the strained atmosphere of worldly ambition and corporate business-speak (itself an Orwellian development) but more of my own time to reflect further on the matters I’ve spoken of here.

A Singular Notion

Commuting days until retirement: 168

I’ve been reading about the future. Well, one man’s idea of the future, anyway – and of course when it comes to the future, people’s ideas about it are really all we can have. This particular writer obviously considers his own ideas to be highly upbeat and optimistic, but others may view them with apprehension, if not downright disbelief – and I share some of their reservations.

Ray Kurzweil
(Photo: Roland Dobbins / Wikimedia Commons)

The man in question is Ray Kurzweil, and it has to be said that he is massively well informed – about the past and the present, anyway; his claims to knowledge of the future are what I want to examine. He is a Director of Engineering at Google, but has also founded any number of high-tech companies and is credited with a big part in inventing flatbed scanners, optical character recognition, speech synthesis and speech recognition. On top of all this, he is quite a philosopher, and has carried on debates with other philosophers about the basis of his ideas; we hear about some of these debates in the book I’ve been reading.

The book is The Singularity is Near, and its length (500 dense pages, excluding notes) is partly responsible for the elapsed time since my last substantial post. Kurzweil is engagingly enthusiastic about his enormous stock of knowledge, so much so that he is unable to resist laying the exhaustive details of every topic before you. Repeatedly you find yourself a little punch-drunk under the remorseless onslaught of facts – at which point he has an engaging way of saying ‘I’ll be dealing with that in more detail in the next chapter.’ You feel that perhaps quite a bit of the content would be better accommodated in endnotes – were it not for the fact that nearly half the book consists of endnotes as it is.

Density

To my mind, the argument of the book has two principal premises, the first of which I’d readily agree to, but the second of which seems to me highly dubious. The first idea is closely related to the ‘Singularity’ of the title. A singularity is a concept imported from mathematics, but is perhaps more familiar in the context of black holes and the big bang. In a black hole, enormous amounts of matter become so concentrated under their own gravitational force that they shrink to a point of, well, as far as we can tell, infinite density. (At this point I can’t help thinking of Kurzweil’s infinitely dense prose style – perhaps it is suited to his topic.) But what’s important about this for our present purposes is the fact that some sort of boundary has been crossed: things are radically different, and all the rules and guidelines that we have previously found useful in investigating how the world works no longer apply.

To understand how this applies, by analogy, to our future, we have to introduce the notion of exponential growth – that is, growth not by regular increments but by multiples. A well known illustration of the surprising power of this is the old fable of the King who has a debt of gratitude to one of his subjects, and asks what he would like as a reward. The man asks for one grain of wheat corresponding to the first square of the chess board, two for the second, four for the third, and so on up to the sixty-fourth, doubling each time. At first the King is incredulous that the man has demanded so little, but of course soon finds that the entire output of his country would fall woefully short of what is asked. (The number of grains works out at 18,446,744,073,709,551,615 – of a similar order to, say, the estimated number of grains of sand in the world.)
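If you have Python to hand, the King’s arithmetic takes only a couple of lines to check:

```python
# Total grains: 1 on the first square, doubling on each of the 64 squares.
total = sum(2 ** square for square in range(64))

print(total)  # 18446744073709551615, i.e. 2**64 - 1
```

The doubling also means that the sixty-fourth square alone carries more grain than the other sixty-three put together – the hallmark of exponential growth.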

Such unexpected expansion is the hallmark of exponential growth – however gradually it rises at first, eventually the curve will always accelerate explosively upward. Kurzweil devotes many pages to arguing that the advance of human technical capability follows just such a trajectory. One frequently quoted example is what has become known as Moore’s law: in 1965 Gordon Moore, later a co-founder of the chip company Intel, extrapolated from what had then been achieved and asserted that the number of processing elements that could be fitted on to a chip of a given size would double every year. This was later modified to two years, but the growth has nevertheless remained exponential, and there is no reason, short of global calamity, why it should stop in the foreseeable future. The evidence is all around us: thirty years ago, equipment with the power of a modern smartphone would have been a roomful of immobile cabinets costing thousands of pounds.
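The power of that modest-sounding doubling is easy to underestimate. Here is a back-of-the-envelope sketch (the two-year doubling period is the revised Moore figure; the thirty-year span is just my illustration, not Kurzweil’s):

```python
# Compound growth from a fixed doubling period, as in Moore's law.
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2.0 ** (years / doubling_period)

# Thirty years of doubling every two years: fifteen doublings.
print(growth_factor(30))  # 32768.0
```

A 32,768-fold improvement in thirty years is roughly the gulf between that roomful of cabinets and the smartphone.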

Accelerating returns

That’s of course just one example; taking a broader view we could look, as Kurzweil does, at the various revolutions that have transformed human life over time. The stages of the agricultural revolution – the transition from the hunter-gatherer way of life, via subsistence farming, to systematic growth and distribution of food – took many centuries, or even millennia. The industrial revolution could be said to have been even greater in its effects over a mere century or two, while the digital revolution we are currently experiencing has made radical changes in just the last thirty years or so. Kurzweil argues that each of these steps forward provides us with the wherewithal to effect further changes even more rapidly and efficiently – hence the exponential nature of our progress. He refers to it as ‘The Law of Accelerating Returns’.

So if we are proceeding by ever-increasing steps forward, what is our destiny – what will be the nature of the exponential explosion that we must expect? This is the burden of Kurzweil’s book, and the ‘singularity’ after which nothing will be the same. His projection of our progress towards this point is based on a trio of endeavours which he refers to confidently with the acronym GNR: Genetics, Nanotechnology and Robotics. Genetics will continue its progress – exponentially – in finding cures for the disorders which limit our life span, as well as disentangling many of the mysteries of how we – and our brains – develop. For Nanotechnology, Kurzweil has extensive expectations. Tiny, ultimately self-reproducing machines could be sent out into the world to restructure matter and turn innocent lumps of rock into computers with so far undreamt-of processing power. And they could journey inwards, into our bodies, ferreting out cancer cells and performing all sorts of repairs that would be difficult or impossible now. Kurzweil’s enthusiasm reaches its peak when he describes these microscopic helpers travelling round the blood vessels of our brains, scanning their surroundings and reporting back over wi-fi on what they find. This would be part of the grand project of ‘reverse engineering the brain’.

And with the knowledge gained thus, the third endeavour, Robotics, already enlisted in the development of the nanobots now navigating our brains, would come into its own. Built on many decades of computing experience, and enhanced by an understanding of how the human brain works, a race of impossibly intelligent robots, which nevertheless boast human qualities, would be born. Processing power is still of course expanding exponentially, adopting any handy lumps of rock as its substrate, and Kurzweil sees it expanding across the universe as the possibilities of our own planet are exhausted.

Cyborgs

And so what of us poor, limited humans? We don’t need to be left behind, or disposed of somehow by our vastly more capable creations, according to Kurzweil. Since the functionality of our brains, both in general and on an individual basis, can be replicated within the computing power which is all around us, he envisages us enhancing ourselves by technology. Either we develop the ability to ‘upload the patterns of an actual human into a suitable non-biological, thinking substrate’, or we simply continue the development of devices like neural implants until nanotechnology is actively extending and even replacing our biological faculties. ‘We will then be cyborgs,’ he explains, and ‘the nonbiological portion of our intelligence will expand its powers exponentially.’

If some of the above makes you feel distinctly queasy, then you’re not alone. A number of potential problems, even disasters, will have occurred to you. But Kurzweil is unfailingly upbeat; while listing a number of ways that things could go wrong, he reasons that all of them can be avoided. And in a long section at the end he lists many objections by critics and provides answers to all of them.

Meanwhile, back in the future, the singularity is under way; and perhaps the most surprising aspect of it is how soon Kurzweil sees it happening. Basing his prediction on an exhaustive analysis, he sets it at 2045. Not a typo on my part, but a date well within the lifetime of many of us. I’ll be 97 by then, if I’m alive at all – which I don’t expect to be, exponential advances in medicine notwithstanding. It so happens that Kurzweil himself was born in the same year as me; and as you might expect, this energetic man fully expects to see the day – indeed to be able to upload himself and continue into the future. He tells us how, once relatively unhealthy and suffering from type II diabetes, he took himself in hand ‘from my perspective as an inventor’. He immersed himself in the medical literature, and with the collaboration of a medical expert, aggressively applied a range of therapies to himself. At the time of writing the book, he proudly relates, he was taking 250 supplement pills each day and half a dozen intravenous nutritional therapies per week. As a result he was judged to have attained a biological age of 40, although he was then 56 in calendar years.

This also brings us to the second – to my mind rather more dubious – plank upon which his vision of the future rests. As we have seen, the best prospects for humanity, he claims, lie not in the messy and unreliable biological packages which have taken us thus far, but as entities somehow (dis)embodied in the substrate of the computing power which is expanding to fill ever more of the known universe.

Dialogue

Before examining this proposition further, I’d like to mention that, while Kurzweil’s book is hard going at times, it does have some refreshing touches. One of these is the frequent dialogues introduced at the end of chapters, where Kurzweil himself (‘Ray’) discusses the foregoing material with a variety of characters. These include, among others, a woman from the present day and her uploaded self from a hundred years hence, as well as various luminaries from the past and present: Ned Ludd (the original Luddite from the 18th century), Charles Darwin, Sigmund Freud and Bill Gates. One rather nicely conceived dialogue involves a couple of primordial bacteria discussing the pros and cons of clumping together and giving up some of their individuality in order to form larger organisms; we are implicitly invited to compare the reluctance of one of them to enter a world full of greater possibilities with our own apprehension about the singularity.

So in the same spirit, I have taken the opportunity here to discuss the matter with Kurzweil directly, and I suppose I am going to be the present day equivalent of the reluctant bacterium. (Most of the claims he makes below are not put into his mouth by me, but come from the book.)

DUNCOMMUTIN: Ray, thank you for taking the trouble to visit my blog.

RAY: That’s my pleasure.

DUNCOMMUTIN: In the book you provide answers to a number of objections – many of them technically based ones which address whether the developments you outline are possible at all. I’ll assume that they are, but raise questions about whether we should really want them to happen.

RAY: OK. You won’t be the first to do that – but fire away.

DUNCOMMUTIN: Well, “fire away” is an apt phrase to introduce my first point: you have some experience of working on defence projects, and this is reflected in some of the points you make in the book. At one point you remark that ‘Warfare will move toward nanobot-based weapons, as well as cyber-weapons’. With all this hyper-intelligence at the service of our brains, won’t some of it reach the conclusion that war is a pretty stupid way of conducting things?

RAY: Yes – in one respect you have a point. But look at the state of the world today. Many people think that the various terrorist organisations that are gaining ever higher profiles pose the greatest threat to our future. Their agendas are mostly based on fanaticism and religious fundamentalism. I may be an optimist, but I don’t see that threat going away any time soon. Now there are reasoned objections to the future that I’m projecting, like your own – I welcome these, and view such debate as important. But inevitably there will be those whose opposition will be unreasonable and destructive. Most people today would agree that we need armed forces to protect our democracy and, indeed, our freedom to debate the shape of our future. So it follows that, as we evolve enhanced capabilities, we should exploit them to counter those threats. But going back to your original point – yes, I have every hope that the exponentially increasing intelligence we will have access to will put aside the possibility of war between technologically advanced nations. And indeed, perhaps the very concept of a nation state might eventually disappear.

DUNCOMMUTIN: OK, that seems reasonable. But I want to look further at the notion of each of us being part of some pan-intelligent entity. There are so many potential worries here. I’ll leave aside the question of computer viruses and cyber-warfare, which you deal with in the book. But can you really see this future being adopted wholesale? Before going into some of my own reservations, I’d say that many people will share them.

RAY: Imagine that we have reached that time – not so far in the future. I and like-minded people will already be taking advantage of the opportunities to expand our intelligence, while, if I may say so, you and your more conservative-minded friends will not. But expanded intelligence makes you a better debater. Who do you think will win the argument?

DUNCOMMUTIN: Now you’re really worrying me. Being a better debater isn’t the same as being right. Isn’t this just another way of saying ‘might is right’ – the philosophy of the dictator down the ages?

RAY: That’s a bit unfair – we’re not talking about coercion here, but persuasion – a democratic concept.

DUNCOMMUTIN: Maybe, but it sounds very much as if, with all this overwhelming computer power, persuasion will very easily become coercion.

RAY: Remember that it is from the most technologically advanced nations that these developments will be initiated – and they are democracies. I see democracy and the right of choice being kept as fundamental principles.

DUNCOMMUTIN: You might, Ray – but what safeguards will we have to retain freedom of choice and restrain any over-zealous technocrats? However, I won’t pursue this line further. Here’s another thing that bothers me. There’s an old saying: ‘To err is human, but it takes a computer to really foul things up.’ If you look at the history of recent large-scale IT projects, particularly in the public sector, you will come across any number of expensive flops that had to be abandoned. Now what you are proposing could be described, it seems to me, as the most ambitious IT project yet. What could happen if I commit the functioning of my own brain to a system which turns out to have serious flaws?

RAY: The problems you are referring to are associated with what we will come to see as the embryonic stage – the dark ages, if you will – of computing. It’s important to recognise that the science of computing is advancing by leaps and bounds, and that software exists which assists in the design of further software. Ultimately program design will be the preserve, not of sweaty pony-tailed characters slaving away in front of screens, but of proven self-organising software entities whose reliability is beyond doubt. Once again, as software principles are developed, proven and applied to the design of further software, we will see exponential progression in this area.

DUNCOMMUTIN: That reassures me in one way, but gives me more cause for concern in another. I am thinking of what I call the coffee machine scenario.

RAY: Coffee machine?

DUNCOMMUTIN: Yes. In the office where I work there are state-of-the-art coffee machines, fully automated. You only have to touch a few icons on a screen to order a cup of coffee, tea, or other drink just as you like it, with the right proportions of milk, sugar, and so on. The drink you specify is then delivered within seconds. The trouble is, it tastes pretty ghastly, rendering the whole enterprise effectively pointless. What I am suggesting is that, given all the supreme and unimaginably complex technical wizardry that goes into our new existence, it’s going to be impossible for us humans to keep track of where it’s all going; and the danger is that the point will be missed: the real essence of ourselves will be lost or destroyed.

RAY: OK, I think I see where you’re going. First of all, let me reassure you that nanoengineered coffee will be better than anything you’ve tasted before! But, to get to the substantial point, you seem a bit vague about what this ‘essence’ is. Remember that what I am envisaging is a full reverse engineering of the human brain, and indeed body. The computation which results would mirror everything we think and feel. How could this fail to include what you see as the ‘essence’? Our brains and bodies are – in essence – computing processes; computing underlies the foundations of everything we care about, and that won’t be changing.

DUNCOMMUTIN: Well, I could find quite a few people who would say that computing underlies everything they hate – but I accept that’s a slightly frivolous comment. To zero in on this question of essence, let’s look at one aspect of human life – sense of humour. Humour comes at least partly under the heading of ‘emotion’, and like other emotions, it involves bodily functions, most importantly in this case laughing. Everyone would agree that it’s a pleasant and therapeutic experience.

RAY: Let me jump in here to point out that while many bodily functions may no longer be essential in a virtual computation-driven world, that doesn’t mean they have to go. Physical breathing, for example, won’t be necessary, but if we find breathing itself pleasurable, we can develop virtual ways of having this sensual experience. The same goes for laughing.

DUNCOMMUTIN: But it’s not so much the laughing itself, but what gives rise to it, which interests me. Humour often involves the apprehension of things being wrong, or other than they should be – a gap between an aspiration and what is actually achieved. In this perfect virtual world, it seems as if such things will be eliminated. Maybe we will find ourselves still able to laugh virtually – but have nothing to virtually laugh at.

RAY: You’ll remember how I’ve said in my book that in such a world there will be limitless possibilities when it comes to entertainment and the arts. Virtual or imagined worlds in which anything can happen, and in which things can go wrong, could be summoned at will. Such worlds could be immersive, and seem utterly real. These could provide all the entertainment and humour you could ever want.

DUNCOMMUTIN: There’s still something missing, to my mind. Irony, humour, artistic portrayals, whatever – all these have the power that they do because they are rooted in gritty reality, not in something we know to have been erected as some form of electronic simulation. In the world you are portraying it seems to me that everything promises to have a thinned-out, ersatz quality – much like the coffee I mentioned a little while back.

RAY: Well, if you really feel that way, you may have to consider whether it’s worth this small sacrifice for the sake of eliminating hunger, disease, and maybe death itself.

DUNCOMMUTIN: Eliminating death – that raises a whole lot more questions, and if we go into them this blog entry will never finish. I have just one more point I would like to put to you: the question of consciousness, and how that can be preserved in a new substrate or mode of existence. I have to say I was impressed to see that, unlike many commentators, you don’t dodge the difficulty of this question, but face it head-on.

RAY: Thank you. Yes, the difficulty is that, since it concerns subjective experience, this is the one matter that can’t be resolved by objective observation. It’s not a scientific question but a philosophical one – indeed, the fundamental philosophical question.

DUNCOMMUTIN: Yes – but you still evidently believe that consciousness would transfer to our virtual, disembodied life. You cross swords with John Searle, whose Chinese Room argument readers of this blog will have come across. His view – that consciousness is a fundamentally biological function which could not exist in any artificial substrate – is not compatible with your envisaged future.

RAY: Indeed. I think the Chinese Room argument is tautologous – a circular argument – and I don’t see any basis for his belief that consciousness is necessarily biological.

DUNCOMMUTIN: I agree with you about the supposed biological nature of consciousness – perhaps for different reasons – but not about the Chinese Room. However, there isn’t space to go into that here. What I want to know is: what makes you confident that your virtualised existence will be a conscious one – in other words, that you will actually have future experiences to look forward to?

RAY: I’m a patternist. That is, it seems to me that conscious experience is an inevitable emergent property of a certain pattern of functionality, in terms of relationships between entities and how they develop over time. Our future technology will be able to map the pattern of these relationships to any degree of detail, and, by virtue of that, consciousness will be preserved.

DUNCOMMUTIN: This seems to me to be a huge leap of faith. Is it not possible that you are mistaken, and that your transfer to the new modality will effectively bring about your death? Or worse, some form of altered, and not necessarily pleasant, experience?

RAY: On whether there will be any subjective experience at all: if the ‘pattern’ theory is not correct then I know of no other coherent one – and yes, I’m prepared to stake my future existence on that. On whether the experience will be altered in some way: as I mentioned, we will be able to model brain and body patterns to any degree of detail, so I see no reason why that future experience should not be of the same quality.

DUNCOMMUTIN: Then the big difference is that I don’t see the grounds for having the confidence that you do, and would prefer to remain as my own imperfect, mortal self. Nevertheless, I wish you the best for your virtual future – and thanks again for answering my questions.

RAY: No problem – and if you change your mind, let me know.


The book: Kurzweil, Ray: The Singularity is Near, Viking Penguin, 2005

For more recent material see: www.singularity.com and www.kurzweilai.net

Sign of Madness

Commuting days until retirement: 187

Seen today beside the motorway: a sign like the one in the illustration.

It wasn’t clear at all what it referred to, so we can only assume it refers to itself. But if it’s not in use…

How does the Department of Transport expect us to interpret these signs? I think we should be told. I have a mental picture of the people in government departments, faced with such difficult questions from the public, speaking as they used to in old films (and still do in some new films): “Let’s feed it into the computer and see what it comes up with.”

Result: nationwide systems failure, malfunctioning traffic lights, gridlock everywhere.

I think the Department of Transport should take on an adviser in philosophical logic to avoid these dire consequences.