The Mathematician and the Surgeon

Commuting days until retirement: 108

After my last post, which, among other things, compared differing attitudes to death and its aftermath (or absence of one) on the part of Arthur Koestler and George Orwell, here’s another fruitful comparison. It seemed to arise by chance from my next two commuting books, and each of the two people I’m comparing, as before, has his own characteristic perspective on that matter. Unlike my previous pair, both could loosely be called scientists, and in each case the attitude expressed has a specific and revealing relationship with the writer’s work and interests.

The Mathematician

The first writer, whose book I came across by chance, has been known chiefly for mathematical puzzles and games. Martin Gardner was born in Oklahoma, USA, in 1914; his father was an oil geologist, and it was a conventionally Christian household. Although not trained as a mathematician – he made his career as a journalist and writer – Gardner developed a fascination with mathematical problems and puzzles which informed that career; hence the justification for his half of my title.

Gardner as a young man (Wikimedia)

This interest continued to feed the constant books and articles he wrote, and he was eventually asked to write the Scientific American column Mathematical Games, which ran from 1956 until the mid 1980s, and for which he became best known; his enthusiasm and sense of fun shine through the writing of these columns. At the same time he was increasingly concerned with the many types of fringe belief that had no scientific foundation, and was a founder member of CSICOP, the organisation dedicated to exposing and debunking pseudoscience. Back in February last year I mentioned one of its other well-known members, the flamboyant and self-publicising James Randi. By contrast, Gardner was mild-mannered and shy, averse to public speaking and never courting publicity. He died in 2010, leaving behind him many admirers and a two-yearly convention – the ‘Gathering for Gardner’.

Before learning more about him recently, and reading one of his books, I had known his name from the Mathematical Games column, and had heard of his rigid rejection of things unscientific. I imagined some sort of flinty atheist, probably with a hard-nosed contempt for any fanciful or imaginative leanings – however sane and unexceptionable they might be – towards what might be thought of as things of the soul.

How wrong I was. His book that I’ve recently read, The Whys of a Philosophical Scrivener, consists of a series of chapters with titles of the form ‘Why I am not a…’ and he starts by dismissing solipsism (who wouldn’t?) and various forms of relativism; it’s a little more unexpected that determinism also gets short shrift. But in fact by this stage he has already declared that

I myself am a theist (as some readers may be surprised to learn).

I was surprised, and also intrigued. Things were going in an interesting direction. But before getting to the meat of his theism he spends a good deal of time dealing with various political and economic creeds. The book was written in the mid-80s, not long before the collapse of communism, which he seems to be anticipating (Why I am not a Marxist). But equally he has little time for Reagan or Thatcher, laying bare the vacuity of their over-simplistic political nostrums (Why I am not a Smithian).

Soon after this, however, he is striding into the longer grass of religious belief: Why I am not a Polytheist; Why I am not a Pantheist – so what is he? The next chapter heading is a significant one: Why I do not Believe the Existence of God can be Demonstrated. This is the key, it seems to me, to Gardner’s attitude – one to which I find myself sympathetic. Near the beginning of the book we find:

My own view is that emotions are the only grounds for metaphysical leaps.

I was intrigued by the appearance of the emotions in this context: here is a man whose day job is bound up with his fascination for the powers of reason, but who is nevertheless acutely conscious of the limits of reason. He refers to himself as a ‘fideist’ – one who believes in a god purely on the basis of faith, rather than any form of demonstration, either empirical or through abstract logic. And if those won’t provide a basis for faith, what else is there but our feelings? This puts Gardner nicely at odds with the modish atheists of today, like Dawkins, who never tires of telling us that he too could believe if only the evidence were there.

But at the same time he is squarely in a religious tradition which holds that ultimate things are beyond the instruments of observation and logic that are so vital to the secular, scientific world of today. I can remember my own mother – a conventional Christian believer, unlike Gardner – being very definite on that point. And it reminds me of some of the writings of Wittgenstein; Gardner does in fact refer to him, in the context of the free-will question. I’ll let him explain:

A famous section at the close of Ludwig Wittgenstein’s Tractatus Logico-Philosophicus asserts that when an answer cannot be put into words, neither can the question; that if a question can be framed at all, it is possible to answer it; and that what we cannot speak about we should consign to silence. The thesis of this chapter, although extremely simple and therefore annoying to most contemporary thinkers, is that the free-will problem cannot be solved because we do not know exactly how to put the question.

This mirrors some of my own thoughts about that particular philosophical problem – a far more slippery one than those on either side of it often claim, in my opinion (I think that may be a topic for a future post). I can add that Gardner was also on the unfashionable side of the question which came up in my previous post – that of an afterlife; and again he holds this out as a matter of faith rather than reason. He explores the philosophy of personal identity and continuity in some detail, always concluding with the sentiment ‘I do not know. Do not ask me.’ His underlying instinct seems to be that there has to be something more than our bodily existence, given that our inner lives are so inexplicable from the objective point of view – so much more than our physical existence. ‘By faith, I hope and believe that you and I will not disappear for ever when we die.’ By contrast, Arthur Koestler, you may remember, wrote in his suicide note of ‘tentative hopes for a depersonalised afterlife’ – but, as it turned out, these hopes were based partly on the sort of parapsychological evidence which was anathema to Gardner.

And of course Gardner was acutely aware of another related mystery – that of consciousness, which he finds inseparable from the issue of free will:

For me, free will and consciousness are two names for the same thing. I cannot conceive of myself being self-aware without having some degree of free will… Nor can I imagine myself having free will without being conscious.

He expresses utter dissatisfaction with the approach of arch-physicalists such as Daniel Dennett, who, as he says, ‘explains consciousness by denying that it exists’. (I attempted to puncture this particular balloon in an earlier post.)

Gardner in later life (Konrad Jacobs / Wikimedia)

Gardner places himself squarely within the ranks of the ‘mysterians’ – a deliberately derisive label applied by their opponents to those thinkers who conclude that these matters are mysteries which are probably beyond our capacity to solve. Among their ranks is Noam Chomsky: Gardner cites a 1983 interview with the grand old man of linguistics, in which he expresses his attitude to the free-will problem (scroll down to see the relevant passage).

The Surgeon

And so to the surgeon of my title – if you’ve read one of my other blog posts you will already have met him. He’s a neurosurgeon named Henry Marsh, and I wrote a post based on a review of his book Do No Harm. Well, now I’ve read the book, and found it as impressive and moving as the review suggested. Unlike many in his profession, Marsh is a deeply humble man, disarmingly honest in his account of the emotional impact of the work he does. He is simultaneously compelled towards, and fearful of, the enormous power of the neurosurgeon both to save and to destroy. His narrative swings between tragedy and elation, by way of high farce when he describes some of the more ill-conceived management ‘initiatives’ at his hospital.

A neurosurgical operation (Mainz University Medical Centre)

The interesting point of comparison with Gardner is that Marsh – a man who daily manipulates what we might call physical mind-stuff, the brain itself – is also awed and mystified by its powers:

There are one hundred billion nerve cells in our brains. Does each one have a fragment of consciousness within it? How many nerve cells do we require to be conscious or to feel pain? Or does consciousness and thought reside in the electrochemical impulses that join these billions of cells together? Is a snail aware? Does it feel pain when you crush it underfoot? Nobody knows.

The same sense of mystery and wonder as Gardner’s; but approached from a different perspective:

Neuroscience tells us that it is highly improbable that we have souls, as everything we think and feel is no more or no less than the electrochemical chatter of our nerve cells… Many people deeply resent this view of things, which not only deprives us of life after death but also seems to downgrade thought to mere electrochemistry and reduces us to mere automata, to machines. Such people are profoundly mistaken, since what it really does is upgrade matter into something infinitely mysterious that we do not understand.

Henry Marsh

This of course is the perspective of a practical man – one who is emphatically working at the coal face of neurology, and far more familiar with the actual material of brain tissue than armchair speculators like me. While I was reading his book, although deeply impressed by this man’s humanity and integrity, what disrespectfully came to mind was a piece of irreverent humour once told to me by a director of a small company, closely connected to the medical industry, where I used to work. It was a sort of handy cut-out-and-keep guide to the different types of medical practitioner:

Surgeons do everything and know nothing. Physicians know everything and do nothing. Psychiatrists know nothing and do nothing. Pathologists know everything and do everything – but the patient’s dead, so it’s too late.

Grossly unfair to all of them, of course, but nonetheless funny, and perhaps containing a certain grain of truth. Marsh, belonging to the first category, perhaps embodies some of the aversion to dry theory that this caricature hints at: what matters to him ultimately, as a surgeon, is the sheer down-to-earth physicality of his work, guided by the gut instincts of his humanity. We hear from him about some members of his profession who seem aloof from the enormity of the dangers it embodies, and seem able to proceed calmly and objectively with what he sees almost as the detachment of the psychopath.

Common ground

What Marsh and Gardner seem to have in common is the instinct that dry, objective reasoning only takes you so far. Both trust the power of their own emotions, and their sense of awe. Both, I feel, are attempting to articulate the same insight, but from widely differing standpoints.

Two passages, one from each book, seem to crystallize both the similarities and differences between the respective approaches of the two men, both of whom seem to me admirably sane and perceptive, if radically divergent in many respects. First Gardner, emphasising, in a Wittgensteinian way, that describing how things appear to be is perhaps a more useful activity than attempting to pursue ultimate reasons:

There is a road that joins the empirical knowledge of science with the formal knowledge of logic and mathematics. No road connects rational knowledge with the affirmations of the heart. On this point fideists are in complete agreement. It is one of the reasons why a fideist, Christian or otherwise, can admire the writings of logical empiricists more than the writings of philosophers who struggle to defend spurious metaphysical arguments.

And now Marsh – mystified, as we have seen, as to how the brain-stuff he manipulates daily can be the seat of all experience – having a go at reading a little philosophy in the spare time between sessions in the operating theatre:

As a practical brain surgeon I have always found the philosophy of the so-called ‘Mind-Brain Problem’ confusing and ultimately a waste of time. It has never seemed a problem to me, only a source of awe, amazement and profound surprise that my consciousness, my very sense of self, the self which feels as free as air, which was trying to read the book but instead was watching the clouds through the high windows, the self which is now writing these words, is in fact the electrochemical chatter of one hundred billion nerve cells. The author of the book appeared equally amazed by the ‘Mind-Brain Problem’, but as I started to read his list of theories – functionalism, epiphenomenalism, emergent materialism, dualistic interactionism or was it interactionistic dualism? – I quickly drifted off to sleep, waiting for the nurse to come and wake me, telling me it was time to return to the theatre and start operating on the old man’s brain.

I couldn’t help noticing that these two men – one unconventionally religious and the other not religious at all – seem between them to embody those twin traditional pillars of the religious life: faith and works.

A Singular Notion

Commuting days until retirement: 168

I’ve been reading about the future. Well, one man’s idea of the future, anyway – and of course when it comes to the future, people’s ideas about it are really all we can have. This particular writer obviously considers his own ideas to be highly upbeat and optimistic, but others may view them with apprehension, if not downright disbelief – and I share some of their reservations.

Ray Kurzweil
(Photo: Roland Dobbins / Wikimedia Commons)

The man in question is Ray Kurzweil, and it has to be said that he is massively well informed – about the past and the present, anyway; his claims to knowledge of the future are what I want to examine. He is a Director of Engineering at Google, but has also founded any number of high-tech companies, and is credited with a big part in inventing flatbed scanners, optical character recognition, speech synthesis and speech recognition. On top of all this, he is quite a philosopher, and has carried on debates with other philosophers about the basis of his ideas; we hear about some of these debates in the book I’ve been reading.

The book is The Singularity is Near, and its length (500 dense pages, excluding notes) is partly responsible for the elapsed time since my last substantial post. Kurzweil is engagingly enthusiastic about his enormous stock of knowledge, so much so that he is unable to resist laying the exhaustive details of every topic before you. Repeatedly you find yourself a little punch-drunk under the remorseless onslaught of facts – at which point he has an engaging way of saying ‘I’ll be dealing with that in more detail in the next chapter.’ You feel that perhaps quite a bit of the content would be better accommodated in endnotes – were it not for the fact that nearly half the book consists of endnotes as it is.

Density

To my mind, the argument of the book has two principal premises, the first of which I’d readily agree to, but the second of which seems to me highly dubious. The first idea is closely related to the ‘Singularity’ of the title. A singularity is a concept imported from mathematics, but is perhaps more familiar in the context of black holes and the big bang. In a black hole, enormous amounts of matter become so concentrated under their own gravitational force that they shrink to a point of, well, as far as we can tell, infinite density. (At this point I can’t help thinking of Kurzweil’s infinitely dense prose style – perhaps it is suited to his topic.) But what’s important about this for our present purposes is the fact that some sort of boundary has been crossed: things are radically different, and all the rules and guidelines that we have previously found useful in investigating how the world works no longer apply.

To understand how this applies, by analogy, to our future, we have to introduce the notion of exponential growth – that is, growth not by regular increments but by multiples. A well-known illustration of its surprising power is the old fable of the King who has a debt of gratitude to one of his subjects, and asks what he would like as a reward. The man asks for one grain of wheat for the first square of the chess board, two for the second, four for the third, and so on up to the sixty-fourth, doubling each time. At first the King is incredulous that the man has demanded so little, but of course soon finds that the entire output of his country would fall woefully short of what is asked. (The number of grains works out at 18,446,744,073,709,551,615 – of a similar order to, say, the estimated number of grains of sand in the world.)
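
If you want to check the arithmetic, it takes only a few lines. Here’s a quick Python sketch – my own, not Kurzweil’s; and the sand-grain comparison rests on rough published estimates:

```python
# One grain for the first square, doubling on each of the 64 squares:
# 1 + 2 + 4 + ... + 2**63, a geometric series.
total = sum(2 ** square for square in range(64))

assert total == 2 ** 64 - 1  # closed form: one less than the next doubling
print(f"{total:,}")          # 18,446,744,073,709,551,615
```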

Such unexpected expansion is the hallmark of exponential growth – however gradually the curve rises at first, eventually it will accelerate explosively upward. Kurzweil devotes many pages to arguing that the advance of human technical capability follows just such a trajectory. One frequently quoted example is what has become known as Moore’s law: in 1965 Gordon Moore, later a co-founder of the chip company Intel, extrapolated from what had then been achieved and asserted that the number of processing elements that could be fitted on to a chip of a given size would double every year. This was later modified to two years, but the growth has nevertheless continued exponentially, and there is no reason, short of global calamity, why it should stop in the foreseeable future. The evidence is all around us: thirty years ago, equipment with the power of a modern smartphone would have been a roomful of immobile cabinets costing thousands of pounds.
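
To see how quickly a two-year doubling runs away with itself, here’s a small illustrative sketch. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) is my own assumed starting point for the example, not a figure taken from Kurzweil:

```python
# Illustrative projection of a strict two-year doubling, Moore-style.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Transistor count implied by doubling every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(year):,.0f}")  # 2,300 ... ~2.4 billion by 2011
```

Forty years of doubling turns thousands into billions – which is roughly what the real chip industry has delivered.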

Accelerating returns

That’s of course just one example; taking a broader view we could look, as Kurzweil does, at the various revolutions that have transformed human life over time. The agricultural revolution – the transition from the hunter-gatherer way of life, via subsistence farming, to the systematic growth and distribution of food – took many centuries, or even millennia. The industrial revolution could be said to have been even greater in its effects over a mere century or two, while the digital revolution we are currently experiencing has made radical changes in just the last thirty years or so. Kurzweil argues that each of these steps forward provides us with the wherewithal to effect further changes even more rapidly and efficiently – hence the exponential nature of our progress. He refers to this as ‘The Law of Accelerating Returns’.

So if we are proceeding by ever-increasing steps forward, what is our destiny – what will be the nature of the exponential explosion that we must expect? This is the burden of Kurzweil’s book, and the ‘singularity’ after which nothing will be the same. His projection of our progress towards this point is based on a triumvirate of endeavours which he refers to confidently with the acronym GNR: Genetics, Nanotechnology and Robotics. Genetics will continue its progress – exponentially – in finding cures for the disorders which limit our life span, as well as disentangling many of the mysteries of how we – and our brains – develop. For Nanotechnology, Kurzweil has extensive expectations. Tiny, ultimately self-reproducing machines could be sent out into the world to restructure matter and turn innocent lumps of rock into computers with so far undreamt-of processing power. And they could journey inwards, into our bodies, ferreting out cancer cells and performing all sorts of repairs that would be difficult or impossible now. Kurzweil’s enthusiasm reaches its peak when he describes these microscopic helpers travelling round the blood vessels of our brains, scanning their surroundings and reporting back over wi-fi on what they find. This would be part of the grand project of ‘reverse engineering the brain’.

And with the knowledge gained thus, the third endeavour, Robotics, already enlisted in the development of the nanobots now navigating our brains, would come into its own. Built on many decades of computing experience, and enhanced by an understanding of how the human brain works, a race of impossibly intelligent robots, which nevertheless boast human qualities, would be born. Processing power is still of course expanding exponentially, adopting any handy lumps of rock as its substrate, and Kurzweil sees it expanding across the universe as the possibilities of our own planet are exhausted.

Cyborgs

And so what of us poor, limited humans? We don’t need to be left behind, or disposed of somehow by our vastly more capable creations, according to Kurzweil. Since the functionality of our brains, both in general and on an individual basis, can be replicated within the computing power which is all around us, he envisages us enhancing ourselves by technology. Either we develop the ability to ‘upload the patterns of an actual human into a suitable non-biological, thinking substrate’, or we simply continue the development of devices like neural implants until nanotechnology is actively extending and even replacing our biological faculties. ‘We will then be cyborgs,’ he explains, and ‘the nonbiological portion of our intelligence will expand its powers exponentially.’

If some of the above makes you feel distinctly queasy, then you’re not alone. A number of potential problems, even disasters, will have occurred to you. But Kurzweil is unfailingly upbeat; while listing a number of ways that things could go wrong, he reasons that all of them can be avoided. And in a long section at the end he lists many objections by critics and provides answers to all of them.

Meanwhile, back in the future, the singularity is under way; and perhaps the most surprising aspect of it is how soon Kurzweil sees it happening. Basing his prediction on an exhaustive analysis, he sets it at 2045. Not a typo on my part, but a date well within the lifetime of many of us. I’ll be 97 by then, if I’m alive at all – which I don’t expect to be, exponential advances in medicine notwithstanding. It so happens that Kurzweil himself was born in the same year as me; and as you might expect, this energetic man fully expects to see the day – indeed, to be able to upload himself and continue into the future. He tells us how, once relatively unhealthy and suffering from type 2 diabetes, he took himself in hand ‘from my perspective as an inventor’. He immersed himself in the medical literature and, with the collaboration of a medical expert, aggressively applied a range of therapies to himself. At the time of writing the book, he proudly relates, he was taking 250 supplement pills each day and a half-dozen intravenous nutritional therapies per week. As a result he was judged to have attained a biological age of 40, although he was then 56 in calendar years.

This also brings us to the second – to my mind rather more dubious – plank upon which his vision of the future rests. As we have seen, the best prospects for humanity, he claims, lie not in the messy and unreliable biological packages which have taken us thus far, but as entities somehow (dis)embodied in the substrate of the computing power which is expanding to fill ever more of the known universe.

Dialogue

Before examining this proposition further, I’d like to mention that, while Kurzweil’s book is hard going at times, it does have some refreshing touches. One of these is the frequent dialogues introduced at the end of chapters, where Kurzweil himself (‘Ray’) discusses the foregoing material with a variety of characters. These include, among others, a woman from the present day and her uploaded self from a hundred years hence, as well as various luminaries from the past and present: Ned Ludd (the original Luddite from the 18th century), Charles Darwin, Sigmund Freud and Bill Gates. One nicely conceived dialogue involves a couple of primordial bacteria discussing the pros and cons of clumping together and giving up some of their individuality in order to form larger organisms; we are implicitly invited to compare the reluctance of one of them to enter a world full of greater possibilities with our own apprehension about the singularity.

So in the same spirit, I have taken the opportunity here to discuss the matter with Kurzweil directly, and I suppose I am going to be the present day equivalent of the reluctant bacterium. (Most of the claims he makes below are not put into his mouth by me, but come from the book.)

DUNCOMMUTIN: Ray, thank you for taking the trouble to visit my blog.

RAY: That’s my pleasure.

DUNCOMMUTIN: In the book you provide answers to a number of objections – many of them technically based ones which address whether the developments you outline are possible at all. I’ll assume that they are, but raise questions about whether we should really want them to happen.

RAY: OK. You won’t be the first to do that – but fire away.

DUNCOMMUTIN: Well, “fire away” is an apt phrase to introduce my first point: you have some experience of working on defence projects, and this is reflected in some of the points you make in the book. At one point you remark that ‘Warfare will move toward nanobot-based weapons, as well as cyber-weapons’. With all this hyper-intelligence at the service of our brains, won’t some of it reach the conclusion that war is a pretty stupid way of conducting things?

RAY: Yes – in one respect you have a point. But look at the state of the world today. Many people think that the various terrorist organisations that are gaining ever higher profiles pose the greatest threat to our future. Their agendas are mostly based on fanaticism and religious fundamentalism. I may be an optimist, but I don’t see that threat going away any time soon. Now there are reasoned objections to the future that I’m projecting, like your own – I welcome these, and view such debate as important. But inevitably there will be those whose opposition will be unreasonable and destructive. Most people today would agree that we need armed forces to protect our democracy and, indeed, our freedom to debate the shape of our future. So it follows that, as we evolve enhanced capabilities, we should exploit them to counter those threats. But going back to your original point – yes, I have every hope that the exponentially increasing intelligence we will have access to will put aside the possibility of war between technologically advanced nations. And indeed, perhaps the very concept of a nation state might eventually disappear.

DUNCOMMUTIN: OK, that seems reasonable. But I want to look further at the notion of each of us being part of some pan-intelligent entity. There are so many potential worries here. I’ll leave aside the question of computer viruses and cyber-warfare, which you deal with in the book. But can you really see this future being adopted wholesale? Before going into some of the reservations I have, I should say that many will share them.

RAY: Imagine that we have reached that time – not so far in the future. I and like-minded people will already be taking advantage of the opportunities to expand our intelligence, while, if I may say so, you and your more conservative-minded friends will not. But expanded intelligence makes you a better debater. Who do you think will win the argument?

DUNCOMMUTIN: Now you’re really worrying me. Being a better debater isn’t the same as being right. Isn’t this just another way of saying ‘might is right’ – the philosophy of the dictator down the ages?

RAY: That’s a bit unfair – we’re not talking about coercion here, but persuasion – a democratic concept.

DUNCOMMUTIN: Maybe, but it sounds very much as if, with all this overwhelming computer power, persuasion will very easily become coercion.

RAY: Remember that it is from the most technologically advanced nations that these developments will be initiated – and they are democracies. I see democracy and the right of choice being kept as fundamental principles.

DUNCOMMUTIN: You might, Ray – but what safeguards will we have to retain freedom of choice and restrain any over-zealous technocrats? However, I won’t pursue this line further. Here’s another thing that bothers me. There’s an old saying: ‘To err is human, but it takes a computer to really foul things up.’ If you look at the history of recent large-scale IT projects, particularly in the public sector, you will come across any number of expensive flops that had to be abandoned. Now what you are proposing could be described, it seems to me, as the most ambitious IT project yet. What could happen if I commit the functioning of my own brain to a system which turns out to have serious flaws?

RAY: The problems you are referring to are associated with what we will come to see as the embryonic stage – the dark ages, if you will – of computing. It’s important to recognize that the science of computing is advancing by leaps and bounds, and that software exists which assists in the design of further software. Ultimately program design will be the preserve, not of sweaty pony-tailed characters slaving away in front of screens, but of proven self-organising software entities whose reliability is beyond doubt. Once again, as software principles are developed, proven and applied to the design of further software, we will see exponential progression in this area.

DUNCOMMUTIN: That reassures me in one way, but gives me more cause for concern in another. I am thinking of what I call the coffee machine scenario.

RAY: Coffee machine?

DUNCOMMUTIN: Yes. In the office where I work there are state-of-the-art coffee machines, fully automated. You only have to touch a few icons on a screen to order a cup of coffee, tea, or other drink just as you like it, with the right proportions of milk, sugar, and so on. The drink you specify is then delivered within seconds. The trouble is, it tastes pretty ghastly, rendering the whole enterprise effectively pointless. What I am suggesting is that, given all the supreme and unimaginably complex technical wizardry that goes into our new existence, it’s going to be impossible for us humans to keep track of where it’s all going; and the danger is that the point will be missed: the real essence of ourselves will be lost or destroyed.

RAY: OK, I think I see where you’re going. First of all, let me reassure you that nanoengineered coffee will be better than anything you’ve tasted before! But, to get to the substantial point, you seem a bit vague about what this ‘essence’ is. Remember that what I am envisaging is a full reverse engineering of the human brain, and indeed body. The computation which results would mirror everything we think and feel. How could this fail to include what you see as the ‘essence’? Our brains and bodies are – in essence – computing processes; computing underlies the foundations of everything we care about, and that won’t be changing.

DUNCOMMUTIN: Well, I could find quite a few people who would say that computing underlies everything they hate – but I accept that’s a slightly frivolous comment. To zero in on this question of essence, let’s look at one aspect of human life – sense of humour. Humour comes at least partly under the heading of ‘emotion’, and like other emotions, it involves bodily functions, most importantly in this case laughing. Everyone would agree that it’s a pleasant and therapeutic experience.

RAY: Let me jump in here to point out that while many bodily functions may no longer be essential in a virtual computation-driven world, that doesn’t mean they have to go. Physical breathing, for example, won’t be necessary, but if we find breathing itself pleasurable, we can develop virtual ways of having this sensual experience. The same goes for laughing.

DUNCOMMUTIN: But it’s not so much the laughing itself as what gives rise to it which interests me. Humour often involves the apprehension of things being wrong, or other than they should be – a gap between an aspiration and what is actually achieved. In this perfect virtual world, it seems as if such things will be eliminated. Maybe we will find ourselves still able to laugh virtually – but have nothing to virtually laugh at.

RAY: You’ll remember how I’ve said in my book that in such a world there will be limitless possibilities when it comes to entertainment and the arts. Virtual or imagined worlds in which anything can happen, and in which things can go wrong, could be summoned at will. Such worlds could be immersive, and seem utterly real. These could provide all the entertainment and humour you could ever want.

DUNCOMMUTIN: There’s still something missing, to my mind. Irony, humour, artistic portrayals, whatever – all these have the power that they do because they are rooted in gritty reality, not in something we know to have been erected as some form of electronic simulation. In the world you are portraying, it seems to me that everything promises to have a thinned-out, ersatz quality – much like the coffee I mentioned a little while back.

RAY: Well, if you really feel that way, you may have to consider whether it’s worth this small sacrifice for the sake of eliminating hunger, disease, and maybe death itself.

DUNCOMMUTIN: Eliminating death – that raises a whole lot more questions, and if we go into them this blog entry will never finish. I have just one more point I would like to put to you: the question of consciousness, and how that can be preserved in a new substrate or mode of existence. I have to say I was impressed to see that, unlike many commentators, you don’t dodge the difficulty of this question, but face it head-on.

RAY: Thank you. Yes, the difficulty is that, since it concerns subjective experience, this is the one matter that can’t be resolved by objective observation. It’s not a scientific question but a philosophical one – indeed, the fundamental philosophical question.

DUNCOMMUTIN: Yes – but you still evidently believe that consciousness would transfer to our virtual, disembodied life. You cross swords with John Searle, whose Chinese Room argument readers of this blog will have come across. His view that consciousness is a fundamentally biological function that could not exist in any artificial substrate is not compatible with your envisaged future.

RAY: Indeed. The Chinese Room argument I think is tautologous – a circular argument – and I don’t see any basis for his belief that consciousness is necessarily biological.

DUNCOMMUTIN: I agree with you about the supposed biological nature of consciousness – perhaps for different reasons – but not about the Chinese Room. However, there isn’t space to go into that here. What I want to know is, what makes you confident that your virtualised existence will be a conscious one – in other words, that you will actually have future experiences to look forward to?

RAY: I’m a patternist. That is, it seems to me that conscious experience is an inevitable emergent property of a certain pattern of functionality, in terms of relationships between entities and how they develop over time. Our future technology will be able to map the pattern of these relationships to any degree of detail, and, by virtue of that, consciousness will be preserved.

DUNCOMMUTIN: This seems to me to be a huge leap of faith. Is it not possible that you are mistaken, and that your transfer to the new modality will effectively bring about your death? Or worse, some form of altered, and not necessarily pleasant, experience?

RAY: On whether there will be any subjective experience at all: if the ‘pattern’ theory is not correct then I know of no other coherent one – and yes, I’m prepared to stake my future existence on that. On whether the experience will be altered in some way: as I mentioned, we will be able to model brain and body patterns to any degree of detail, so I see no reason why that future experience should not be of the same quality.

DUNCOMMUTIN: Then the big difference is that I don’t see the grounds for having the confidence that you do, and would prefer to remain as my own imperfect, mortal self. Nevertheless, I wish you the best for your virtual future – and thanks again for answering my questions.

RAY: No problem – and if you change your mind, let me know.


The book: Kurzweil, Ray: The Singularity is Near, Viking Penguin, 2005

For more recent material see: www.singularity.com and www.kurzweilai.net

Words and Music

Commuting days until retirement: 360

This piece was kicked off by some comments I heard from the poet Sean O’Brien on a recent radio programme. Speaking on the BBC’s Private Passions, where guests choose favourite music, and talking about Debussy, he said:

Poetry is always envious of music because that’s what poetry wants to be, whereas music has no need to be poetry. So [poets] are always following in the wake of music – but the two come quite close in sensibility here, it seems to me.

Discuss. Well, for me this immediately brought to mind a comment I once heard attributed to the composer Mendelssohn, to the effect that ‘music is not too vague for words, but too precise.’ So I went and did some Googling, and here’s the original quote:

People often complain that music is too ambiguous, that what they should think when they hear it is so unclear, whereas everyone understands words. With me, it is exactly the opposite, and not only with regard to an entire speech but also with individual words. These, too, seem to me so ambiguous, so vague, so easily misunderstood in comparison to genuine music, which fills the soul with a thousand things better than words. The thoughts which are expressed to me by music that I love are not too indefinite to be put into words, but on the contrary, too definite.

(Source: Wikiquote, where you can also see the original German.)

These two statements seem superficially similar – are they saying the same thing, from different standpoints? One difference, of course, is that O’Brien is specifically talking of poetry rather than words in general. In comparison with prose, poetry shares more of the characteristics of music: it employs rhythm, cadence, phrasing and repetition, and is frequently performed rather than read from the page; and music of course is virtually exclusively a performance art. So that’s a first thought about how poetry departs from the prosaic and finds itself approaching the territory inhabited by music.

But what is the precision which Mendelssohn insists is music’s preserve? It’s true that music is entirely bound up with the mathematics of sound: the frequency ratios in the intervals between notes. It was this that gave Pythagoras and others, in antiquity, their obsession with the mystique of numbers. A musician need not be deeply grounded in mathematical theory, but he or she will always be intensely aware of the differing characters of musical intervals – as is anyone who enjoys music at all, if perhaps more indirectly.
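
For the curious, those differing characters can be put into numbers. Here’s a minimal Python sketch – my own illustration, nothing more – comparing the whole-number ratios the Pythagoreans prized with their modern equal-tempered equivalents (twelve equal semitones to the octave):

```python
# Just (whole-number-ratio) intervals against equal temperament, where
# each of the twelve semitones multiplies frequency by 2 ** (1 / 12).
intervals = {              # name: (just ratio, semitone count)
    "octave":         (2 / 1, 12),
    "perfect fifth":  (3 / 2, 7),
    "perfect fourth": (4 / 3, 5),
    "major third":    (5 / 4, 4),
}

for name, (just, semitones) in intervals.items():
    tempered = 2 ** (semitones / 12)
    print(f"{name:>14}: just {just:.4f}  tempered {tempered:.4f}")
```

The two systems agree almost, but not quite, everywhere – the equal-tempered fifth, for instance, works out at 1.4983 rather than a pure 1.5 – and in such small discrepancies lie centuries of tuning controversy.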

I haven’t read a lot of theorising on this topic, but it seems to me that there is a strong link here with everyday speech. In my own language, as, I should imagine, in most others, pitch and phrasing indicate the emotional register of what you are saying, and hence an important element of the meaning. One can imagine this evolving from the less articulate cries of our ancestors, with an awareness of intervals of pitch developing as a method of communication, fostering group cohesion and hence conferring Darwinian survival value. Indeed, in some languages, such as Chinese, intonation can be integral to the meaning of a word. And there’s scientific evidence of the closeness of music and language: here’s an example.

So, going back to Mendelssohn, it’s as if music has developed by abstracting certain elements from speech, leaving direct, referential meaning behind and evolving a vocabulary from pitch, timbre and the free-floating emotional states associated with them. Think for a moment of film music. It can be an interesting exercise, in the middle of a film or TV drama, to make yourself directly aware of the background music, and then imagine how your perception of what is happening would differ if it were absent. You come to realise that the music is often instructing you what to feel about a scene or a character, and it often connects with your emotions so directly that it doesn’t consciously occur to you that the feelings you experience are not your own spontaneous ones. And if you add to this the complex structures formed from key relationships and temporal development, which a professional musician would be particularly aware of, you can start to see what Mendelssohn was talking about.

The musical piece O’Brien was introducing was the ‘Dialogue between the wind and the sea’ from Debussy’s La Mer – in other words, a passage which seeks to evoke a visual and auditory scene, rather than simply exploring musical ideas and the emotions that arise directly from them. By contrast, we could imagine a description of such a scene in prose: to be effective, the writer needs to choose words whose meanings and associations come together in such a way that readers can recreate the sensory impressions, and the subjective impact of the scene, in their own minds. The music, on the other hand, can combine its direct emotional access with an auditory picture, in a highly effective way.

Rain, Steam and Speed – J.M.W. Turner (National Gallery, London)

I was trying to think of an equivalent in visual art, and the painting that came to mind was this one, with its emphasis on the raw sensory feelings evoked by the scene, rather than a faithful (prosaic) portrayal. In his use of this technique, Turner was of course controversial in his time, and is now seen as a forerunner of the impressionist movement. Interestingly, what I didn’t know before Googling the painting was that he is also known to have influenced Debussy, who mentions him in his letters. Debussy was also sometimes spoken of as an impressionist, but he hated this term being applied to his work, and here is a quote from one of those letters:

I am trying to do something different – an effect of reality… what the imbeciles call impressionism, a term which is as poorly used as possible, particularly by the critics, since they do not hesitate to apply it to Turner, the finest creator of mysterious effects in all the world of art.

I like to think that perhaps Debussy is also to some extent lining up with Mendelssohn here, and, besides his reference to Turner’s painting, maybe has in mind the unique form of access to our consciousness which music has, as opposed to other art forms. A portrayal in poetry would perhaps come somewhere between prose and music, given that poetry, as we’ve seen, borrows some of music’s tricks. I looked around for an example of a storm at sea in poetry: here’s a snippet from Swinburne, writing around the turn of the 20th century, describing a storm in the English Channel.

As a wild steed ramps in rebellion, and rears till it swerves from a backward fall,
The strong ship struggled and reared, and her deck was upright as a sheer cliff’s wall.

In two lines here we have – besides two similes with one of them extended into a metaphor – alliteration, repetition and rhyme, all couched in an irregular, bucking rhythm which suggests the movement of the ship with the sea and wind. Much in common here, then, with a musical evocation of a storm. This I take to be part of what O’Brien means by poetry ‘wanting’ to be music, and being ‘close in sensibility’ in the example he was talking about.

But I don’t see how all this implies that we should somehow demote poetry to an inferior role. Yes, it’s true that words don’t so often trigger emotions as directly, by their sound alone, as does music – except perhaps in individual cases where someone has become sensitised to a word through experience. But the Swinburne passage is an example of poetry flexing the muscles which it alone possesses, in pursuit of its goal. And even when its direct purpose is not the evocation of a specific scene, the addition of imagery to the auditory effects it commands can create a very compelling kind of ‘music’. A couple of instances occur to me: first T. S. Eliot in Burnt Norton, from Four Quartets.

Garlic and sapphires in the mud
Clot the bedded axle-tree.
The trilling wire in the blood
Sings below inveterate scars
Appeasing long-forgotten wars.

I’m vaguely aware that there are all sorts of allusions in the poet’s mind which are beyond my awareness, but just at the level of the sound of the words, combined with the immediate images they conjure, there is for me a magic about this which fastens itself in my mind, making it intensely enjoyable to repeat the words to myself, just as a snatch of music can stick in the consciousness. Another example, from Dylan Thomas, known for his wayward and idiosyncratic use of language. This is the second stanza of Especially When the October Wind:

Shut, too, in a tower of words, I mark
On the horizon walking like the trees
The wordy shapes of women, and the rows
Of the star-gestured children in the park.
Some let me make you of the vowelled beeches,
Some of the oaken voices, from the roots
Of many a thorny shire tell you notes,
Some let me make you of the water’s speeches.

Again, I find pure pleasure in the play of sounds and images here, before ever considering what further meaning there may be. But what I also love about the whole poem (here is the poet reading it) is the way it is self-referential: while exploiting the power of words, it explicitly links language-words to images – ‘vowelled beeches’ rhymed with ‘water’s speeches’. The poet himself is ‘shut in a tower of words’.

But, returning to the comparison with music, there’s one obvious superficial difference between it and language: language is generally ‘about’ something, while music is not. But of course there are plenty of exceptions: music can be, at least in part, deliberately descriptive, as we saw with Debussy, while poetry often does away with literal meaning to transform itself into a sort of word-music, as I’ve tried to show above. And another obvious point I haven’t made is that words and music are often – perhaps more often than not – yoked together in song. The voice itself can simultaneously be a musical instrument and a purveyor of meaning. And it may be that the music is recruited to add emotional resonance to the language – think opera or musical drama – or that the words serve to give the music a further dimension and personality, as often in popular music. In the light of his statement above, it’s interesting that Mendelssohn is famous particularly for his piano pieces entitled Songs Without Words. The oxymoron is a suggestive one, and he strongly resisted an attempt by a friend to put words to them.

But the question of what music itself is ‘about’ is a perplexing, and perhaps profound, one. I am intending this post to be one approach to the philosophical question of how it’s possible for one thing to be ‘about’ another: the topic known as intentionality. In an interesting paper, Music and meaning, ambiguity and evolution, Ian Cross of the Cambridge Faculty of Music explores this question (1), referring to the fact that the associations and effects of a piece of music may differ between the performer and a listener, or between different listeners. Music has, as he puts it, ‘floating intentionality’. This reminds me of a debate that has taken place about the performance of early music. For authenticity in, say, 16th-century music, some claim, it’s essential that it is played on the instruments of the time. Their opponents retort that you won’t achieve that authenticity unless you have a 16th-century audience as well.

Some might claim that most music is not ‘about’ anything but itself, or perhaps about the emotions it generates. I am not intending to come to any conclusion on that particular topic, but just to raise some questions in a fascinating area. In the next post I intend to approach this topic of intentionality from a completely different direction.


1. Cross, Ian: ‘Music and meaning, ambiguity and evolution’, in D. Miell, R. MacDonald & D. Hargreaves (eds), Musical Communication, OUP, 2004.
You can read part of it here.

Science and Emotion

Commuting days until retirement: 507

Following my comments 3 posts ago, my reading on the train over the last week has been The Heretics: Adventures with the Enemies of Science, by Will Storr. Despite booming bishops and other distractions, I found it intensely readable, and I think it pretty much fulfilled my expectations.

The subtitle about ‘the Enemies of Science’ is perhaps a little misleading: this is not primarily an exercise in bashing such people – you’ll find plenty of other books that do that, and any number of internet forums (and of course I had a go at it myself, in the post I mentioned). It’s an attempt to understand them, and to investigate why they believe what they do. Storr does not treat the enemies of science as being necessarily his personal enemies, and it emerges at the same time that the friends of science are not always particularly – well, friendly.

I was going to say that it’s a strength of the book that he maintains this approach without compromising his own adherence to rationality, but that’s not strictly true. Because another strength is that he doesn’t attempt to adopt a wholly godlike, objective view. Rather, he presents himself, warts and all, as a creature like the rest of us, who has had to fight his own emotional demons in order to arrive at some sort of objectivity. And he does admit to experiencing a kind of bewitchment when talking to people with far-out beliefs. ‘They are magic-makers. And, beneath all that a private undercurrent: I feel a kind of kinship with them. I am drawn to the wrong.’

It’s a subtle approach, and a difficult path to tread, which invites misunderstanding. And one critic who I believe misunderstands is Mark Henderson in the Guardian, who, while admiring aspects of Storr’s work, finds the book ‘disappointing and infuriating… He is like the child who still wants to believe in Father Christmas, but who is just old enough to know better. Life would be more magical, more fun, if the story were true.’ Well, here I think Henderson is unwittingly stating the central thesis of Storr’s book: that as humans we are above all myth-makers – we have a need to see ourselves as a hero of our own preferred narrative.

This idea appeals to me in particular because it chimes in with ideas that I have come across in an evening course I am currently following in the philosophy of emotion. Writers on emotion quite naturally classify emotions as positive (happiness, love, hope, excitement) or negative (sadness, hatred, anger, fear, etc.). The naive assumption is that we welcome the former and avoid the latter if we can. But of course the reality is far more nuanced, and more interesting, than that. In the first place there is the urge many feel to take up dangerous and frightening pursuits, and then of course the undoubted delights of sado-masochism. But much closer to home is the fact that we flock to horror films and emotionally harrowing dramas – we love to be vicariously frightened or distressed. Narrative is our stock in trade, and (as the increasingly popular creative writing courses preach) unless there’s plenty of conflict and resolution, nobody will be interested.

Mythology

We all have our own unspoken narratives about our own place in the world, and in most cases these probably cast us in a more heroic light than the accounts others might give of us. They help to maintain our emotional equilibrium, and in cases where they are nullified by external circumstances, depression, despair and even suicide may result. And of course with the internet, bloggers like me can start to inflict our own narratives on a bigger potential audience than ever (I wish). And earlier theories of the world are, of course, entirely myth- and narrative-laden, from ancient Greek culture to the Bible and the Koran. Our cultural heritage fits around our instinctive nature. (As I tap this passage into my phone on the train, the man immediately next to me is engrossed in a Bible.) How difficult, then, for us to depart from our myths and embrace a new, evidence-based, and no longer human-centred, story of creation.

Storr encounters one of those for whom this difficulty is too great. John Mackay is a genial, avuncular Australian (there’s plenty of footage on YouTube) who has been proselytising worldwide on behalf of the creationist story for some years. In the gospel according to Mackay, everything that seems evil about nature stems from the fall of Adam and Eve in the Garden of Eden. Before this there were no thorns on plants, men lived happily with dinosaurs and nobody ever got eaten – all animals were vegetarians. A favourite of mine among his pronouncements comes when Storr asks him why, if Tyrannosaurus rex was a vegetarian, it had such big, sharp teeth. The answer (of course) is – water melons.

On YouTube I have just watched Mackay demonstrating from fossil evidence that there are plants and animals which haven’t evolved at all. This is a fundamental misunderstanding of Darwin: if organisms aren’t required by external forces to adapt, they won’t. But on Mackay’s time scale – the Earth being, of course, six thousand years old – there wouldn’t have been enough time for fossils to form, let alone for anything to evolve very much. The confusion here is manifold. For his part, Storr admits to having started out knowing little about evolutionary theory or the descent of man, and to having taken the scientific account on trust, as indeed most of us do. But his later discussions with a mainstream scientist demonstrate to him how incomparably more elegant and cogent the accepted scientific narrative is.

How objective can we be?

Henderson charges Storr with not giving sufficient recognition to the importance of the scientific method, and how it has developed as a defence of objective knowledge against humanity’s partial and capricious tendencies. But Storr seems to me to be well aware of this, and alert to those investigators whose partiality throws doubt on their conclusions. ‘Confirmation bias’ is a phrase that runs through the book: people tend to notice evidence which supports a belief they are loyal to, and neglect anything that throws doubt on it. A great example comes from a passage in the book where he joins a tour of a former Nazi concentration camp with David Irving, the extreme right-wing historian who has devoted his life to a curious crusade to minimise the significance of the Holocaust and exculpate Hitler. Storr is good on the man’s bizarre and contradictory character, as well as the motley group of admirers touring with him. At one point Irving is seeking to deny the authenticity of a gas chamber they are viewing, and triumphantly points out a handle on the inside of the open door. He doesn’t bother to look at the other side of the door, but Storr does afterwards, and discovers a very sturdy bolt. You are led to imagine the effect of a modus operandi like this on the countless years of painstaking research that Irving has pursued.

But should we assume that the model of disinterested, peer-reviewed academic research we have developed has broken free of our myth-making tendencies? Those who are the most determined to support it are of course themselves human beings, with their own hero narratives. Storr attends a convention of ‘Skeptics’ (he uses the American spelling, as they themselves do) where beliefs in such things as creationism or psychic phenomena are held up to ridicule. He brings out well the slightly obsessional, anorak-ish atmosphere of the event. It does, after all, seem a little perverse to devote very much time to debunking the beliefs of others, rather than developing one’s own. It’s as if the hero narrative ‘I don’t believe in mumbo-jumbo’ is being exploited for purposes of mutual and self-congratulation. The man who is effectively Skeptic-in-Chief, the American former magician James Randi, is later interviewed by Storr, and comes across as arrogant and overbearing, admitting to sometimes departing from truthfulness in pursuit of his aims.

If scientists, being human, are not free of personal mythology, could this work against the objectivity of the enterprise? I think it can, and has. Some examples come to mind. The first is Ignaz Semmelweis, a Hungarian physician working in the mid-19th century. In the days before Pasteur and the germ theory of infection, he was concerned by the number of maternal deaths in his hospital from what was called ‘puerperal fever’. This seemed to be worse in births attended by doctors rather than midwives. In a series of well-executed investigations, he linked this to doctors who had come to the maternity ward after performing post-mortems, and further established that hand-washing reduced the incidence of the disease. But the notion that doctors themselves were the cause did not meet with approval: an obvious clash with the hero narrative. Semmelweis’s findings were contemptuously rejected, and he later suffered a breakdown and died in an asylum. A similar example is the English Victorian physician John Snow, who in a famous investigation into cholera in Soho conclusively showed it to be spread via a water-pump, in contradiction of the accepted ‘miasma’ theory of airborne infection. He further linked it to pollution of the water supply by sewage – but something so distasteful was a conclusion too far for the Victorian narratives of human pride and decency, and the miasma theory continued to hold sway.

Both these examples of course come from medicine, where conclusive and repeatable results are harder to come by, and easier to contest. So let’s go to the other extreme – mathematics. You would think that a mathematical theorem would be incontrovertible – at the least, incapable of offending anyone’s personal sensibilities. But around the turn of the 20th century Georg Cantor’s work on set theory led him to results concerning the nature of infinity. The consequent attacks on him by some other mathematicians, often of the most ad hominem kind, calling him a charlatan and worse, showed that someone’s personal myths were threatened. Was it their religious beliefs, or the foundations of mathematics on which their reputations depended? I don’t know – but Cantor’s discoveries are nowadays part of mainstream mathematics.
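
For readers wondering what those results actually were, the most famous can be stated in a line of modern notation (a sketch, not Cantor’s own formulation):

```latex
% Cantor's theorem: every set is strictly smaller than its power set,
% so the infinite cardinalities form an endless ascending hierarchy.
\[
  |X| < |\mathcal{P}(X)| \ \text{for every set } X,
  \qquad\text{hence}\qquad
  \aleph_0 = |\mathbb{N}| \;<\; 2^{\aleph_0} = |\mathbb{R}| \;<\; 2^{2^{\aleph_0}} \;<\; \cdots
\]
```

An infinity of ever-larger infinities: it is not hard to see why this unsettled anyone whose mathematics, or theology, depended on infinity being a single, absolute thing.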

Modern heresy

My examples are from the past, of course: I wanted to look at investigators who were derided in their time, but whose discoveries have since been vindicated. If there are any such people around today, their vindication lies in the future. And there is no shortage of heterodox doctrines, as Storr shows. Are any of them remotely plausible? One interesting case is that of Rupert Sheldrake, to whom Storr gives some space. He has an unimpeachable background of education in biology and is a former Cambridge fellow. But his theory of morphic fields – mysterious intangible influences on biological processes – puts him beyond the pale as far as most mainstream scientists are concerned. Sheldrake, however, is adamant that his theory makes testable predictions, and he claims to have verified some of these using approved, objective methods. Some of them concern phenomena known to popular folklore: the ability to sense when you are being stared at, and animals who correctly anticipate their absent owners returning home. I can remember playing games with the former at school – and it seemed to work. And I have read Sheldrake’s book on the latter, in which he is quite convincing.

But I have no idea whether these ideas are truly valid. Storr tells of a few cases where regular scientists have been prepared to try to repeat Sheldrake’s results with these phenomena, but most degenerate into arcane wrangling over the details of experimental method, and no clear conclusions emerge. What is clear to me is that most orthodox scientists will not even consider such matters publicly, since doing so is seen as career suicide. Is this lack of open-mindedness also therefore a lack of objectivity? Lewis Wolpert is quoted in Storr’s book: ‘An open mind is a very bad thing – everything falls out’, a jibe repeated by Henderson. You could retort that the trouble with a closed mind is that nothing gets in. There is a difficult balance to find here: of course a successful scientific establishment must be on its guard against destructive incursions by gullibility and nonsense. On the other hand, as we have seen, this vigilance becomes part of the hero narrative of its practitioners, and may be guarded so jealously that it becomes in some cases an obstacle to advances.

Sheldrake tells Storr that his theories in no way destroy or undermine established knowledge, but add to it. I think this is a little disingenuous of him: if we have missed something so fundamental, it would imply that there is something fundamentally wrong about our world-view. Well, of course it would be arrogant to deny that there is anything at all wrong with our world-view (and I think there is plenty – something to return to in a later post). But Storr’s critic Henderson is surely right in holding that, in the context of a systematically developed body of knowledge, there is a greater burden of proof on the proponent of the unorthodox belief than there is on the opponent. Nevertheless, I agree with Storr that the freedom to promote heterodox positions is essential, even if most of them are inevitably barmy. It’s not just that, as Storr asserts near the end of the book, ‘wrongness is a human right’. Occasionally – just very occasionally – it is right.