A Singular Notion

Commuting days until retirement: 168

I’ve been reading about the future. Well, one man’s idea of the future, anyway – and of course when it comes to the future, people’s ideas about it are really all we can have. This particular writer obviously considers his own ideas to be highly upbeat and optimistic, but others may view them with apprehension, if not downright disbelief – and I share some of their reservations.

Ray Kurzweil
(Photo: Roland Dobbins / Wikimedia Commons)

The man in question is Ray Kurzweil, and it has to be said that he is massively well informed – about the past and the present, anyway; his claims to knowledge of the future are what I want to examine. He holds the position of Director of Engineering at Google, but has also founded any number of high-tech companies and is credited with a big part in inventing flatbed scanners, optical character recognition, speech synthesis and speech recognition. On top of all this, he is quite a philosopher, and has carried on debates with other philosophers about the basis of his ideas – some of which we hear about in the book I’ve been reading.

The book is The Singularity is Near, and its length (500 dense pages, excluding notes) is partly responsible for the elapsed time since my last substantial post. Kurzweil is engagingly enthusiastic about his enormous stock of knowledge, so much so that he is unable to resist laying the exhaustive details of every topic before you. Repeatedly you find yourself a little punch-drunk under the remorseless onslaught of facts – at which point he has a disarming way of saying ‘I’ll be dealing with that in more detail in the next chapter.’ You feel that perhaps quite a bit of the content would be better accommodated in endnotes – were it not for the fact that nearly half the book consists of endnotes as it is.

Density

To my mind, the argument of the book has two principal premises, the first of which I’d readily agree to, but the second of which seems to me highly dubious. The first idea is closely related to the ‘Singularity’ of the title. A singularity is a concept imported from mathematics, but is perhaps more familiar in the context of black holes and the big bang. In a black hole, enormous amounts of matter become so concentrated under their own gravitational force that they shrink to a point of, well, as far as we can tell, infinite density. (At this point I can’t help thinking of Kurzweil’s infinitely dense prose style – perhaps it is suited to his topic.) But what’s important about this for our present purposes is the fact that some sort of boundary has been crossed: things are radically different, and all the rules and guidelines that we have previously found useful in investigating how the world works no longer apply.

To understand how this applies, by analogy, to our future, we have to introduce the notion of exponential growth – that is, growth not by regular increments but by multiples. A well-known illustration of the surprising power of this is the old fable of the King who owes a debt of gratitude to one of his subjects, and asks what he would like as a reward. The man asks for one grain of wheat corresponding to the first square of the chessboard, two for the second, four for the third, and so on up to the sixty-fourth, doubling each time. At first the King is incredulous that the man has demanded so little, but of course soon finds that the entire output of his country would fall woefully short of what is asked. (The number of grains works out at 18,446,744,073,709,551,615 – of a similar order to, say, the estimated number of grains of sand in the world.)
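The King’s arithmetic is easy to check: the grains form a geometric series whose total is 2⁶⁴ − 1, and a couple of lines of Python confirm the figure in the fable:

```python
# One grain on the first square, doubling on each of the 64 squares:
# the total is 1 + 2 + 4 + ... + 2**63 = 2**64 - 1.
total = sum(2**square for square in range(64))
print(f"{total:,}")  # 18,446,744,073,709,551,615
```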

Such unexpected expansion is the hallmark of exponential growth – however gradually it rises at first, eventually the curve will always accelerate explosively upward. Kurzweil devotes many pages to arguing that the advance of human technical capability follows just such a trajectory. One frequently quoted example is what has become known as Moore’s law: in 1965 a co-founder of the chip company Intel, Gordon Moore, made an extrapolation from what had then been achieved and asserted that the number of processing elements that could be fitted on to a chip of a given size would double every year. This was later modified to two years, but growth has nevertheless continued exponentially, and there is no reason, short of global calamity, why it should stop in the foreseeable future. The evidence is all around us: thirty years ago, equipment with the power of a modern smartphone would have been a roomful of immobile cabinets costing thousands of pounds.
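The two-year doubling is simple to project. Here is an illustrative sketch – the 1971 starting count of 2,000 transistors is a round number of my own choosing for the example, not an actual Intel figure:

```python
# Illustrative Moore's-law projection: transistor counts doubling every
# two years from a nominal 2,000 transistors in 1971 (an assumed,
# round-number starting point, not a real chip's specification).
def transistors(year, base_year=1971, base_count=2_000, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011):
    print(year, f"{transistors(year):,.0f}")
```

Twenty years of doubling every two years multiplies the count by 2¹⁰, roughly a thousand; forty years, by roughly a million – which is the essence of Kurzweil’s point about exponential curves.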

Accelerating returns

That’s of course just one example; taking a broader view we could look, as Kurzweil does, at the various revolutions that have transformed human life over time. The stages of the agricultural revolution – the transition from the hunter-gatherer way of life, via subsistence farming, to the systematic growth and distribution of food – took many centuries, or even millennia. The industrial revolution could be said to have been even greater in its effects over a mere century or two, while the digital revolution we are currently experiencing has made radical changes in just the last thirty years or so. Kurzweil argues that each of these steps forward provides us with the wherewithal to effect further changes even more rapidly and efficiently – and hence the exponential nature of our progress. Kurzweil refers to it as ‘The Law of Accelerating Returns’.

So if we are proceeding by ever-increasing steps forward, what is our destiny – what will be the nature of the exponential explosion that we must expect? This is the burden of Kurzweil’s book, and the ‘singularity’ after which nothing will be the same. His projection of our progress towards this point is based on a triumvirate of endeavours which he refers to confidently with the acronym GNR: Genetics, Nanotechnology and Robotics. Genetics will continue its progress – exponentially – in finding cures for the disorders which limit our life span, as well as disentangling many of the mysteries of how we – and our brains – develop. For Nanotechnology,  Kurzweil has extensive expectations. Tiny, ultimately self-reproducing machines could be sent out into the world to restructure matter and turn innocent lumps of rock into computers with so far undreamt of processing power. And they could journey inwards,  into our bodies, ferreting out cancer cells and performing all sorts of repairs that would be difficult or impossible now. Kurzweil’s enthusiasm reaches its peak when he describes these microscopic helpers travelling round the blood vessels of our brains, scanning their surroundings and reporting back over wi-fi on what they find. This would be part of the grand project of ‘reverse engineering the brain’.

And with the knowledge gained thus, the third endeavour,  Robotics, already enlisted in the development of the nanobots now navigating our brains, would come into its own. Built on many decades of computing experience,  and enhanced by an understanding of how the human brain works, a race of impossibly intelligent robots, which nevertheless boast human qualities, would be born. Processing power is still of course expanding exponentially, adopting any handy lumps of rock as its substrate, and Kurzweil sees it expanding across the universe as the possibilities of our own planet are exhausted.

Cyborgs

And so what of us poor, limited humans? We don’t need to be left behind, or disposed of somehow by our vastly more capable creations, according to Kurzweil. Since the functionality of our brains, both in general and on an individual basis, can be replicated within the computing power which is all around us, he envisages us enhancing ourselves by technology. Either we develop the ability to ‘upload the patterns of an actual human into a suitable non-biological, thinking substrate’, or we could simply continue the development of devices like neural implants until nanotechnology is actively extending and even replacing our biological faculties. ‘We will then be cyborgs,’ he explains, and ‘the nonbiological portion of our intelligence will expand its powers exponentially.’

If some of the above makes you feel distinctly queasy, then you’re not alone. A number of potential problems, even disasters, will have occurred to you. But Kurzweil is unfailingly upbeat; while listing a number of ways that things could go wrong, he reasons that all of them can be avoided. And in a long section at the end he lists many objections by critics and provides answers to all of them.

Meanwhile, back in the future, the singularity is under way; and perhaps the most surprising aspect of it is how soon Kurzweil sees it happening. Basing his prediction on an exhaustive analysis, he sets the date at 2045. Not a typo on my part, but a date well within the lifetime of many of us. I’ll be 97 by then if I’m alive at all, which I don’t expect to be, exponential advances in medicine notwithstanding. It so happens that Kurzweil himself was born in the same year as me; and as you might expect, this energetic man fully expects to see the day – indeed to be able to upload himself and continue into the future. He tells us how, once relatively unhealthy and suffering from type II diabetes, he took himself in hand ‘from my perspective as an inventor’. He immersed himself in the medical literature, and with the collaboration of a medical expert, aggressively applied a range of therapies to himself. At the time of writing the book, he proudly relates, he was taking 250 supplement pills each day and a half-dozen intravenous nutritional therapies per week. As a result he was judged to have attained a biological age of 40, although he was then 56 in calendar years.

This also brings us to the second – to my mind rather more dubious – plank upon which his vision of the future rests. As we have seen, the best prospects for humanity, he claims, lie not in the messy and unreliable biological packages which have taken us thus far, but in existence as entities somehow (dis)embodied in the substrate of the computing power which is expanding to fill ever more of the known universe.

Dialogue

Before examining this proposition further, I’d like to mention that, while Kurzweil’s book is hard going at times, it does have some refreshing touches. One of these is the frequent dialogues introduced at the end of chapters, where Kurzweil himself (‘Ray’) discusses the foregoing material with a variety of characters. These include, among others, a woman from the present day and her uploaded self from a hundred years hence, as well as various luminaries from the past and present: Ned Ludd (the original Luddite from the 18th century), Charles Darwin, Sigmund Freud and Bill Gates. A rather nicely conceived dialogue involves a couple of primordial bacteria discussing the pros and cons of clumping together and giving up some of their individuality in order to form larger organisms; we are implicitly invited to compare the reluctance of one of them to enter a world full of greater possibilities with our own apprehension about the singularity.

So in the same spirit, I have taken the opportunity here to discuss the matter with Kurzweil directly, and I suppose I am going to be the present day equivalent of the reluctant bacterium. (Most of the claims he makes below are not put into his mouth by me, but come from the book.)

DUNCOMMUTIN: Ray,  thank you for taking the trouble to visit my blog.

RAY: That’s my pleasure.

DUNCOMMUTIN: In the book you provide answers to a number of objections – many of them technically based ones which address whether the developments you outline are possible at all. I’ll assume that they are, but raise questions about whether we should really want them to happen.

RAY: OK. You won’t be the first to do that – but fire away.

DUNCOMMUTIN: Well, “fire away” is an apt phrase to introduce my first point: you have some experience of working on defence projects, and this is reflected in some of the points you make in the book. At one point you remark that ‘Warfare will move toward nanobot-based weapons, as well as cyber-weapons’. With all this hyper-intelligence at the service of our brains, won’t some of it reach the conclusion that war is a pretty stupid way of conducting things?

RAY: Yes – in one respect you have a point. But look at the state of the world today. Many people think that the various terrorist organisations that are gaining ever higher profiles pose the greatest threat to our future. Their agendas are mostly based on fanaticism and religious fundamentalism. I may be an optimist, but I don’t see that threat going away any time soon. Now there are reasoned objections to the future that I’m projecting, like your own – I welcome these, and view such debate as important.  But inevitably there will be those whose opposition will be unreasonable and destructive. Most people today would agree that we need armed forces to protect our democracy and,  indeed, our freedom to debate the shape of our future. So it follows that,  as we evolve enhanced capabilities, we should exploit them to counter those threats. But going back to your original point – yes,  I have every hope that the exponentially increasing intelligence we will have access to will put aside the possibility of war between technologically advanced nations. And indeed,  perhaps the very concept of a nation state might eventually disappear.

DUNCOMMUTIN: OK,  that seems reasonable. But I want to look further at the notion of each of us being part of some pan-intelligent entity. There are so many potential worries here. I’ll leave aside the question of computer viruses and cyber-warfare, which you deal with in the book. But can you really see this future being adopted wholesale? Before going into some of the reservations I have, I’d want to say that many will share them.

RAY: Imagine that we have reached that time – not so far in the future. I and like-minded people will be already taking advantage of the opportunities to expand our intelligence, while,  if I may say so, you and your more conservative-minded friends will have not. But expanded intelligence makes you a better debater.  Who do you think will win the argument?

DUNCOMMUTIN: Now you’re really worrying me. Being a better debater isn’t the same as being right. Isn’t this just another way of saying ‘might is right’ – the philosophy of the dictator down the ages?

RAY: That’s a bit unfair – we’re not talking about coercion here, but persuasion – a democratic concept.

DUNCOMMUTIN: Maybe, but it sounds very much as if, with all this overwhelming computer power, persuasion will very easily become coercion.

RAY: Remember that it is from the most technologically advanced nations that these developments will be initiated – and they are democracies. I see democracy and the right of choice being kept as fundamental principles.

DUNCOMMUTIN: You might, Ray – but what safeguards will we have to retain freedom of choice and restrain any over-zealous technocrats? However, I won’t pursue this line further. Here’s another thing that bothers me. There’s an old saying: ‘To err is human, but it takes a computer to really foul things up.’ If you look at the history of recent large-scale IT projects, particularly in the public sector, you will come across any number of expensive flops that had to be abandoned. Now what you are proposing could be described, it seems to me, as the most ambitious IT project yet. What could happen if I commit the functioning of my own brain to a system which turns out to have serious flaws?

RAY: The problems you are referring to are associated with what we will come to see as the embryonic stage – the dark ages, if you will – of computing. It’s important to recognize that the science of computing is advancing by leaps and bounds, and that software exists which assists in the design of further software. Ultimately program design will be the preserve, not of sweaty pony-tailed characters slaving away in front of screens, but of proven self-organising software entities whose reliability is beyond doubt. Once again, as software principles are developed, proven and applied to the design of further software, we will see exponential progression in this area.

DUNCOMMUTIN: That reassures me in one way, but gives me more cause for concern in another. I am thinking of what I call the coffee machine scenario.

RAY: Coffee machine?

DUNCOMMUTIN: Yes.  In the office where I work there are state-of-the-art coffee machines,  fully automated. You only have to touch a few icons on a screen to order a cup of coffee, tea, or other drink just as you like it, with the right proportions of milk,  sugar,  and so on. The drink you specify is then delivered within seconds. The trouble is,  it tastes pretty ghastly, rendering the whole enterprise effectively pointless. What I am suggesting is that, given all the supreme and unimaginably complex technical wizardry that goes into our new existence, it’s going to be impossible for us humans to keep track of where it’s all going; and the danger is that the point will be missed: the real essence of ourselves will be lost or destroyed.

RAY: OK,  I think I see where you’re going. First of all,  let me reassure you that nanoengineered coffee will be better than anything you’ve tasted before! But, to get to the substantial point, you seem a bit vague about what this ‘essence’ is. Remember that what I am envisaging is a full reverse engineering of the human brain, and indeed body. The computation which results would mirror everything we think and feel. How could this fail to include what you see as the ‘essence’? Our brains and bodies are – in essence – computing processes; computing underlies the foundations of everything we care about, and that won’t be changing.

DUNCOMMUTIN: Well, I could find quite a few people who would say that computing underlies everything they hate – but I accept that’s a slightly frivolous comment. To zero in on this question of essence, let’s look at one aspect of human life – sense of humour. Humour comes at least partly under the heading of ‘emotion’, and like other emotions, it involves bodily functions, most importantly in this case laughing. Everyone would agree that it’s a pleasant and therapeutic experience.

RAY: Let me jump in here to point out that while many bodily functions may no longer be essential in a virtual computation-driven world, that doesn’t mean they have to go. Physical breathing, for example, won’t be necessary, but if we find breathing itself pleasurable, we can develop virtual ways of having this sensual experience. The same goes for laughing.

DUNCOMMUTIN: But it’s not so much the laughing itself, but what gives rise to it, which interests me. Humour often involves the apprehension of things being wrong, or other than they should be – a gap between an aspiration and what is actually achieved. In this perfect virtual world, it seems as if such things will be eliminated. Maybe we will find ourselves still able to laugh virtually – but have nothing to virtually laugh at.

RAY: You’ll remember how I’ve said in my book that in such a world there will be limitless possibilities when it comes to entertainment and the arts. Virtual or imagined worlds in which anything can happen,  and in which things can go wrong, could be summoned at will. Such worlds could be immersive, and seem utterly real. These could provide all the entertainment and humour you could ever want.

DUNCOMMUTIN: There’s still something missing,  to my mind. Irony,  humour, artistic portrayals, whatever – all these have the power that they do because they are rooted in gritty reality, not in something we know to have been erected as some form of electronic simulation. In the world you are portraying it seems to me that everything promises to have a thinned-out,  ersatz quality – much like the coffee I mentioned a little while back.

RAY: Well if you really feel that way,  you may have to consider whether it’s worth this small sacrifice for the sake of eliminating hunger, disease, and maybe death itself.

DUNCOMMUTIN: Eliminating death – that raises a whole lot more questions, and if we go into them this blog entry will never finish. I have just one more point I would like to put to you: the question of consciousness, and how that can be preserved in a new substrate or mode of existence. I have to say I was impressed to see that,  unlike many commentators, you don’t dodge the difficulty of this question, but face it head-on.

RAY: Thank you. Yes, the difficulty is that, since it concerns subjective experience, this is the one matter that can’t be resolved by objective observation. It’s not a scientific question but a philosophical one – indeed,  the fundamental philosophical question.

DUNCOMMUTIN: Yes – but you still evidently believe that consciousness would transfer to our virtual, disembodied life. You cross swords with John Searle, whose Chinese Room argument readers of this blog will have come across. His view that consciousness is a fundamentally biological function that could not exist in any artificial substrate is not compatible with your envisaged future.

RAY: Indeed. The Chinese Room argument I think is tautologous – a circular argument – and I don’t see any basis for his belief that consciousness is necessarily biological.

DUNCOMMUTIN: I agree with you about the supposed biological nature of consciousness – perhaps for different reasons – but not about the Chinese Room. However there isn’t space to go into that here. What I want to know is, what makes you confident that your virtualised existence will be a conscious one – in other words,  that you will actually have future experiences to look forward to?

RAY: I’m a patternist. That is, it seems to me that conscious experience is an inevitable emergent property of a certain pattern of functionality, in terms of relationships between entities and how they develop over time. Our future technology will be able to map the pattern of these relationships to any degree of detail, and, by virtue of that,  consciousness will be preserved.

DUNCOMMUTIN: This seems to me to be a huge leap of faith. Is it not possible that you are mistaken, and that your transfer to the new modality will effectively bring about your death? Or worse, some form of altered, and not necessarily pleasant, experience?

RAY: On whether there will be any subjective experience at all, if the ‘pattern’ theory is not correct then I know of no other coherent one – and yes,  I’m prepared to stake my future existence on that. On whether the experience will be altered in some way: as I mentioned, we will be able to model brain and body patterns to any degree of detail, so I see no reason why that future experience should not be of the same quality.

DUNCOMMUTIN: Then the big difference is that I don’t see the grounds for having the confidence that you do, and would prefer to remain as my own imperfect,  mortal self. Nevertheless, I wish you the best for your virtual future – and thanks again for answering my questions.

RAY: No problem – and if you change your mind,  let me know.


The book: Kurzweil, Ray, The Singularity is Near, Viking Penguin, 2005

For more recent material see: www.singularity.com and www.kurzweilai.net

Watching the World Go By

Commuting days until retirement: 381

Not another description of me, looking out of the window of my commuter train, but a few thoughts prompted by looking at some early film footage. A recent programme on Channel 4 looked at the rise of Hitler, using contemporary film from the 1920s and 1930s, which had been digitally enhanced and colourised to a startling level of realism. The thoughts I wanted to share concern not the subject of the films, but the medium itself.

Eadweard Muybridge (Wikimedia Commons)

Most people have some awareness of the early history of moving pictures, the notion having been conceived almost as early as photography itself. Probably the first pioneer of the medium was the somewhat eccentric, but evidently brilliant, Eadweard Muybridge. (He had changed his name – as he did several times – from the original Edward Muggeridge.) Born in England, he lived for most of his life in the USA, where on his first visit he suffered a near-fatal blow on the head in a stagecoach accident. He recovered, but perhaps this accounted for some of the eccentricity. Some years later, in 1875, he was tried, again in America, for murder, having shot dead his young wife’s lover. The defence entered a plea of insanity, but he rather gave the lie to that with a speech on his own behalf which was both cogent and impassioned enough to sway the jury to acquit him with a verdict of ‘justifiable homicide’.

Muybridge’s horse (Wikimedia Commons)

Having started his career as a bookseller he later became a professional photographer, and in 1872 he was commissioned to settle a debate over whether all four hooves of a cantering or galloping horse were ever out of contact with the ground simultaneously. Having established by means of still photographs that they indeed were, he developed a fascination with the possibilities of capturing human and animal movement photographically. His earliest efforts, in the late 1870s, involved placing a number of cameras along the side of a track, and using various mechanical methods to trigger them sequentially as a moving horse passed by them. Showing the result involved laboriously copying the photos as silhouettes on to a disc, from which they were projected using a device which Muybridge invented and called a Zoopraxiscope. The animation above shows a modern rendering of his original images.

By the turn of the century integrated, hand-cranked film cameras had been developed, and so, like insects from their pupae, we see the people of over a hundred years ago emerge from their frozen monochrome images into a jerky, half-real life. And in retrospect it seems as if the lack of realism was accentuated as the medium began to be put to use for entertainment. There was already the Victorian tradition of high melodrama, and on top of this actors had to find ways of expressing themselves which did not use sound. The results now appear to us impossibly stilted and artificial.

Alongside this, however, entrepreneurs of the time had, luckily for us, spotted another opportunity to exploit the new medium. They realised that if they were to film ordinary people going about their business, those people might well pay a good price to be able to see themselves in an entirely novel way. And indeed they did. So we have a wonderful resource of animated scenes from streets and other public places of the era. Until recently these early examples of cinéma vérité haven’t been seen very often, and I’m guessing that the most important reason for this is that other limitation on realism – the speed of the original cameras. They were hand-cranked at the highest rate that the early mechanisms would allow, but this couldn’t match the frame rate of later twentieth-century equipment. The choice has been either to slow the footage down and put up with jarringly jerky motion, or – the easier way – simply to show it at the conventional frame rate, so that all movement appears much faster. The latter option has been resorted to so often that it has given rise to a trope: accelerated motion equals the past. Even more recent footage showing mocked-up scenes of an earlier era has sometimes been artificially speeded up, in order to borrow a little authenticity.

But with today’s digital techniques that is now changing. Not only can individual frames be cleaned up and clarified, but new frames can be interpolated into the instants between the original ones, slowing bodily movements and restoring a natural appearance. This new realism was what struck me about the scenes I saw of 1920s Germany – but we now have an increasing number of such enhanced early films, going back to around 1900, thanks to those original entrepreneurs. There are a number of examples on YouTube, so I have chosen one to insert here. It shows a selection of scenes in England around 1900. I like to pull the image up to full screen and immerse myself in it, imagining that I am walking the streets of late Victorian or early Edwardian England, and I try unsuccessfully to think the thoughts I might have been thinking if I had really been present then. Although these are humans like us, how do they differ?
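Out of curiosity about how that interpolation works, here is a minimal sketch in Python with NumPy. Real restoration tools estimate motion between frames rather than simply cross-fading, so this is only a toy illustration of the idea – synthesising the instants the hand-cranked camera never captured – and the tiny frames here are invented for the example:

```python
import numpy as np

def interpolate_frames(frames, factor=2):
    """Naively raise the frame rate by a given factor, inserting
    cross-faded frames between each consecutive pair of originals.
    (Real restoration software uses motion estimation instead of
    simple blending, but the structure is the same.)"""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for k in range(1, factor):
            t = k / factor
            # Weighted blend of the two neighbouring frames.
            out.append(((1 - t) * a + t * b).astype(a.dtype))
    out.append(frames[-1])
    return out

# Sixteen tiny greyscale "frames" of steadily increasing brightness;
# at double rate they become 31 frames (15 blends slotted in between).
frames = [np.full((2, 2), v, dtype=np.uint8) for v in range(0, 160, 10)]
print(len(interpolate_frames(frames)))  # 31
```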

Well, most obviously in their dress. What always takes my attention is the ubiquity of hats. I searched through this clip for anyone without one. There is one smartly dressed man standing at the back of a very well-heeled looking family group, who has perhaps just stepped out of the door behind him. Otherwise all I could find was one small child (who had probably lost his) and the rowers on the river (who are stripped down to their sporting gear, with their hats probably safely awaiting them on pegs in the changing room). Evidently if I’d been alive then I would have considered it almost unthinkable to have left the house on even the shortest journey without something on my head – whether I was rich or poor. And even the rowers are followed by another group of men out for an afternoon boat trip, and they are fully hatted and suited as they brandish the oars. I was also taken by the man who appears about 40 seconds into the sequence, approaching the camera while, in an apparently habitual gesture, he strokes back each side of his carefully manicured handlebar moustache. His bearing suggests that he considers himself the epitome of 1900 cool. He unceremoniously sweeps two children out of his way before moving off to the left. That action in itself suggests that a rather less indulgent attitude to children was commonplace then.

But looking at urban streets at that time, and allowing for all the obvious differences, there still seems something unfamiliar about the movement of the crowd. I realised what it was when watching the 1920s German footage. At that time, in the inflation-hit Weimar Republic, the streets were full of half-starved unemployed, with little to do but – yes – watch the world go by. The film clip above shows people in a more prosperous time and place, but in most of the street shots you can nevertheless see a number who are just passively standing. Some of course are staring at the novelty of the film camera, but you can see plenty of others just watching in general.

Consider what entertainment was available: if you were to stay at home, and were not a reader (many of course never got the chance to be) you either had to make your own entertainment, or go out and find it. And so the street provided the most immediate – and cheapest – way to occupy the mind. In a typical street scene today, virtually everyone would be rushing somewhere unless forced into stasis by a wait for a bus, or by a queue of some sort. And even then they will often be busily talking on the phone or texting. While most of our 1900 public are also on the move, they have to make their way around that now-vanished residue of watchers who are happy to stand and stare at the rest of the world getting to where it wants to get to. And a visual medium in its very earliest form has given us a sense of what life was like without the visual media we are now so used to.