Another title for this piece might have been ‘What I Like Most About AI’ - with the answer being that I like the way that it is accurately named. AI is artificial. It is not real intelligence. I may be preaching to the converted here, because almost everybody I speak to, of every age and background, is of the view that a machine can’t really be intelligent.1 Nonetheless, a deeper look is in order - for certainly, ‘AI’ is a big deal, and it’s important that we understand what we are dealing with and what we are not dealing with.
Those with ‘a dog in the fight’ - those with something to gain from hyperbole or fear-mongering, or who are simply disposed to denigrate the human being - continue to spin the message that ‘computers are going to be massively more intelligent than human beings’, or that ‘the brain is just a meat computer’ (as suggested by the late MIT computer scientist Marvin Minsky), or that AI is going to render most of humanity ‘useless’ (as suggested by historian and author Yuval Noah Harari).
If one listens to the hype fed to us - on this subject like almost every other - one would think that ‘the experts’ are united in their certainty that machines are superior to people, and are on their way to unimagined levels of intelligence and even to consciousness.
Is it the case then, that those of us who doubt the authenticity of AI’s intelligence, or even its potential for intelligence, are deluding ourselves? Apparently not. The truth, it turns out, is that a fair bit of ‘expert opinion’ is with us on that.
ELIZA - The First ‘AI’
Back in the 1970s, MIT computer scientist Joseph Weizenbaum laid a solid and lasting foundation for understanding the artificial intelligence question, with his seminal book ‘Computer Power and Human Reason’.
In the mid-1960s, Weizenbaum had himself developed a computer program called ‘ELIZA’, and set it to play the role of psychotherapist with those who engaged it. He did this as an experiment, and did not at all believe that a machine should or could effectively fulfil the role of ‘psychological healer’.
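It is worth pausing to notice just how little machinery lay behind the illusion. Below is a minimal sketch, in modern Python, of the ELIZA technique as it is generally described - the handful of patterns and canned responses here are my own illustrative inventions, not Weizenbaum’s original ‘DOCTOR’ script - but the principle is the same: match a keyword, reflect the user’s own words back as a question, and understand nothing.

```python
import re

# A toy ELIZA-style responder. Each rule pairs a regular-expression
# pattern with a response template; the first matching rule wins.
# There is no model of meaning anywhere - only keyword matching and
# text substitution.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r"(.*)", "Please go on."),  # catch-all when nothing else matches
]

# Pronoun swaps so reflected fragments read naturally
# ("my job" becomes "your job", "I am" becomes "you are", etc.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Apply the first rule whose pattern matches the user's statement."""
    for pattern, template in RULES:
        match = re.fullmatch(pattern, statement.rstrip(".!?"), re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I am depressed about my job."))
    # -> "How long have you been depressed about your job?"
```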
Even with 1960s levels of software sophistication, however, Weizenbaum found himself alarmed by the extent to which people invested what he considered an ‘unwarranted degree of authority’ in his creation. So profoundly did the experience affect him that he took a two-year sabbatical from his professorship at MIT to research and write his book, in which he laid out step by step his argument that human consciousness and thinking involve dimensions which cannot, now or ever, be ‘computable’, and why it is that ‘algorithms plus processing power’ are not the same thing as ‘intelligence’. In the opening pages, he expressed the nature of his concern like this:
The reaction to ELIZA showed me more vividly than anything I had seen hitherto the enormously exaggerated attributions an even well-educated audience is capable of making, even strives to make, to a technology it doesn’t understand. Surely, I thought, decisions made by the general public about emergent technologies depend much more on what the public attributes to such technologies than on what they actually are or can or cannot do. If, as appeared to be the case, the public’s attributions are wildly mis-conceived, then public decisions are bound to be misguided and often wrong.
This quote brings us to the heart of the matter. The issue is not whether ‘AIs’ are useful or not. No doubt they will be very useful indeed for tasks such as producing program code, preparing draft contracts, quickly summarising long, complex texts, and many other things (not forgetting what is probably their most impressive achievement to date, creating ‘deep fake’ videos). And they are here to stay. Nobody is questioning that. The question is not even whether the capacities of ‘AIs’ are going to increase. We all know that they are. No, the issue, above all, is to what extent they are, or can be, genuinely intelligent. And, related to that question, what we should expect from them, and how we should manage our relationship with them.
We might for instance, in seeking to manage the relationship, acknowledge that human capacities are fundamentally different from machine capacities, and consequently recognise that, in Weizenbaum’s words, “not everything that can be done by a computer should be done by a computer”. (Psychotherapy being a great, but far from isolated, example).
Fifty years on from ‘Computer Power and Human Reason’, however, society does not seem to have recognised the problem, and has continued to make enormously ‘exaggerated attributions’ of authority to each new generation of ELIZA’s successors. Such a situation would not have surprised Weizenbaum. Something had dawned on him as he researched and wrote - something that led him to grasp the enormity of the task of understanding and of public education on which he had embarked, and which he had begun to realise would occupy much of the rest of his life. He put it like this:
… gradually I began to see that certain fundamental questions had infected me more chronically than I had first perceived. I shall probably never be rid of them.
The Emperor’s New Mind
Thirteen years after Weizenbaum first spoke to the world, Sir Roger Penrose, philosopher of science and one of the most eminent mathematical physicists in the world, came forth with a treatise entitled ‘The Emperor’s New Mind’ - a wonderful title, which reflects the nature of the content within as well as that of any book ever written.
Penrose argued that since the physics of brain processes is still incompletely understood, and the science of how a brain comes to be a mind is yet further behind that, it is surely not possible to engineer a comprehensive simulation of a brain or mind, or of the things that those can do. Travelling through many complex fields of science and mathematics, he marshalled a great body of evidence in support of a variety of arguments against the possibility of a genuinely intelligent machine, of which we might mention here just two:
Firstly, that a great deal of real thinking involves neither words nor numbers nor any ‘encodable’ symbols at all, and therefore cannot be performed by any computer. (Much less consciousness, regarding which he demonstrated the absurdities involved in suggesting that it could ever emerge from the execution of an algorithm, no matter how complex.)
Secondly, that many leaps of understanding made by human minds take place in circumstances, and in ways, that cannot possibly be consistent with a simple decoding of symbols. They seem explicable only via some mental access to the higher, pre-existing, and fluid truths of Plato’s world of eternal ideals, or archetypes - something clearly beyond the capacities of any machine (probably forever), and the reason that computers are incapable of any genuine creativity. Remarkable arguments indeed to come from a physicist, but there they are!
Penrose won the 1990 ‘Science Book Prize’ for his work, yet it is striking how little ‘The Emperor’s New Mind’, like ‘Computer Power and Human Reason’ before it, and other relevant titles since, came to public attention. It may be instructive to think about why that is so.
Society in the Image of the Machine
Another milestone analysis of the issue, six years after Penrose, came from Stephen Talbott, at the time a senior editor with the prestigious publisher of computing books O’Reilly & Associates. It was entitled ‘The Future Does Not Compute’. Being the mid-1990s, it may be no surprise that the book’s primary focus was the social impact of the Internet. But it was still computer technology, and the interplay of human intelligence with ‘synthetic intelligence’, that he was analysing.
Accordingly, while he did devote an entire chapter to what he saw as some very definite limits to artificial intelligence, Talbott addressed himself primarily to the effects that ‘digital immersion’ via the internet was having, and would have, upon us as individuals and as a society. The tone was set by the title of his opening chapter: ‘Can Human Ideals Survive the Internet?’
Computer technology affects society, Talbott said, primarily through the conscious relationship that we have, or don’t have, with the machines. At that point it gets interesting. What kind of ‘intelligence’ is having what kind of ‘relationship’ with what kind of ‘intelligence’? Let us delve a little deeper into that:
Even before the advent of computers, industrialisation was leading humanity into what we could call a mechanised mode of thinking. Since the industrial revolution, everything has increasingly been proceduralised. The groundwork has been laid for basing everything on algorithms instead of thinking. An increasingly mechanised environment and lifestyle has conditioned us to see the world, and even ourselves, in mechanised terms. Humanity has been lulled into a lazy machine-like way of looking at everything - which was later much deepened by the hypnotic effect of television, increasingly mechanical and procedural government regulation of everything, and so on. As a result, we have in many areas of life, been drifting into an ‘autopilot’ mode, in which we just ‘follow the procedures’. We have allowed ourselves to become somewhat machine-like in our own mentality - and it should perhaps not be surprising, having done so, if we now find that machines are better at being machine-like than we are.
So focused has the world become on those elements of human activity that can be reduced to algorithm, to calculation, to computation, that we have begun to lose sight altogether of those elements which cannot be. And effectively, as we have lost sight of all that is beyond mere logic and algorithm, we have begun to reduce our own lives and our own selves to the same level. Higher faculties of intuition, judgement and discrimination - even morality - have begun to atrophy. We have begun, in a certain way, to sleepwalk, and the more we sleepwalk, the more the default directions for society emerge from the technology itself.
When that happens - when we don’t have awake and engaged human beings running the show - systems come more and more to run themselves, and when they do, they gravitate ever in the direction of standardisation, universalisation and centralisation. The hopeful potentials of democratisation, decentralisation and empowerment with which people greet each new technology are lost. As Talbott warns:
… the power of the computer-based organisation to sustain itself in a semi-somnambulistic manner, free of conscious, present human control - while yet maintaining a certain internal, logical coherence - is increased to a degree we have scarcely begun to fathom.
The chilling end-point of such a drift away from thinking and engagement is also, for me, the most memorable proposition of Talbott’s book, which is that ‘systems that run themselves’ lead to the possibility, perhaps even the probability, of a certain kind of totalitarianism which does not even need a despotic human hierarchy to drive it:
… what the computational society begins to show us is how totalitarianism can be a despotism enforced by nobody at all.
The world becomes inflexible, autocratic and unreasonable and ‘it is nobody’s fault’.
It does seem necessary to acknowledge here that there is today, among certain political and commercial powers, a growing orientation toward deliberate forms of totalitarianism. The point being made however, is that deliberate intention is now optional - totalitarianism can now also emerge without it.
The danger inherent in this prescient warning of ‘a totalitarian regime without a despotic centre’ (if I might paraphrase) hardly needs emphasising. It is a potent thought, and the reality of the trend is already much more visible around us than it was when Talbott wrote of it.
The more obvious ways in which it has become visible are perhaps increasing bureaucratic control, increasing surveillance, reduction of rights, participation, autonomy and privacy, and loss of flexibility. But I’ll suggest another way too: standardised and inflexible systems - and especially multiple such systems interacting with each other - constantly force us into absurdities, and when they do there is no recourse, no way at all, to challenge the absurdity. We have all experienced that, and it is another characteristic common to totalitarian regimes of the past.
Somewhere close to such dangers, however, lies the biggest and truest opportunity that AI has to offer humanity. For as AI intensifies, it will increasingly highlight the fundamental differences between ‘creative thought’ and mere ‘computation’, and show us, in possibly painful ways, the consequences of continuing to ignore those differences. As the sleeve of Talbott’s book tells us:
[computer technology] is the most powerful invitation to remain asleep we have ever faced. Contrary to the usual view, it dwarfs television in its power to induce passivity, to scatter our minds, to destroy our imaginations, and to make us forget our humanity. And yet for those very reasons [it] may also be an opportunity to enter into our fullest humanity with a self-awareness never yet achieved.
Islands in the Cyberstream
In 2008, Weizenbaum passed away at the end of a long life, much of which had been spent grappling with the difficult questions of artificial intelligence. He was 85. Though he was by profession a computer scientist, he had in fact devoted much of his life to questions of society, and of how society would come to terms with the intensifying development of computer technology.
Shortly before he died, he gave an interview, the transcript of which was subsequently published as ‘Islands in the Cyberstream’, tellingly subtitled ‘Seeking Havens of Reason in a Programmed Society’. In it Weizenbaum covered many questions about AI, from whether poetry produced by a computer might or might not be considered poetry,2 to why it might be that nearly all scientists involved in creating AI are male,3 and how computer technology has buttressed establishment social and political institutions against pressure for needed change. But he also echoed Talbott’s message that what matters is the nature of our conscious relationship with the technology, and not the technology itself. Or, to say it another way, that our focus should not be on what is happening in the development of computer technology, but on what is happening with the development of ourselves.
The ‘islands in the cyberstream’ Weizenbaum referred to are not places, but people. They are not, however, people who avoid using computer technology. Nor are they even simply people who dislike computer technology. They are people who decline to surrender to perceived inevitability and go with the flow, who insist on recognising the difference between ‘judgment and calculation’ and between ‘creativity and computation’, and who insist also on the continued application of human perception and discrimination.
American Congresswoman Tulsi Gabbard (as she was then) became an island in the cyberstream in 2018 when, in recognition of the reality that electronic voting can and always will be far more vulnerable to manipulation than simpler methods, she tabled her ‘Paper Voting Bill’. Millions have taken their first step to becoming an island in the cyberstream as they have decided to limit their children’s exposure to digital devices in the early years, or when they persist in using cash wherever they can (a very important resistance against the emergence of the ‘totalitarian regime with / without a despotic centre’). The city council in Madrid, where I live, recently became an island in the cyberstream by eliminating digital devices from the city’s infant and primary classrooms, except for very short periods under close supervision. I am an island in the cyberstream when I decline to carry a smart phone with me everywhere I go. (It’s a near-impossible option for many people, I realise, but it’s not for me.) Nicholas Carr, former executive editor of the Harvard Business Review, was an island in the cyberstream when in 2010 he collated and published research showing that excessive use of digital devices can lead to shortening attention spans, and suggested ways that we might guard against that.4
Inspiringly, Weizenbaum spoke also of islands forming into archipelagos, and eventually landmasses.
Why it Matters
The question of whether machines can really be intelligent, it should now be clear, is of more than academic interest. The answers we come to will shape our expectations not only of the machines, but of ourselves. And those expectations will in turn determine whether we are heading for a world of intelligent human beings wisely applying practical tools, or a world of increasingly stunted human beings living in ‘a totalitarian regime without a despotic centre’.
The answer, however, seems clear. In a very dishonest world, ‘artificial intelligence’ is refreshingly honestly named. Naturally, those who drive the agenda for both trillion-dollar profits and ‘trans-humanist ideology’ don’t give up - the potentials for wealth, power and increased control over the population are just too great. The ‘interests’ need us to be scared of ‘the awesome intelligence of the machines’ and stunned into acquiescence and ‘acceptance of the inevitable’. Thus they keep the message and the hyperbole coming, morphing and adapting it to overcome resistance. In recent times, for example, they seem to have realised that the word ‘artificial’ was too honest, and have begun to promote instead the expression ‘machine intelligence’.
Too late though. The cat, it seems, is already out of the bag.
1. Responses vary a little according to what different people understand by the word ‘intelligence’. (Some for example may consider ‘the ability to play chess’ a sufficient criterion.) With more demanding definitions of ‘intelligence’, and especially with a move to words like ‘thinking’ and ‘consciousness’, public consensus seems to get somewhere close to universal.
2. Weizenbaum speculated that a reasonable answer might be that since poetry is about meaning, and computers have no comprehension of meaning, the initial answer is no, poetry generated by a computer is not really poetry - but that maybe if a human being reads it and finds meaning in it, it becomes at that point poetry.
3. According to Weizenbaum, the compulsive agenda to ‘create life’ in the shape of artificial intelligence is driven almost entirely by men (software engineer James Damore was sacked from Google in 2017 for the political incorrectness of noting something similar), and may be driven by the male ego seeking to ‘be better than women’ at producing children. The stunningly insightful possibility that Weizenbaum was pointing toward is that maybe women don’t need this obsession with ‘creating life’ artificially, because they can do it for real.
4. See ‘The Shallows: How the Internet is Changing the Way We Think, Read and Remember’, Nicholas Carr, 2010.
"...deliberate intention is now optional - totalitarianism can now also emerge without it." Insightful observation. The use of this technology for making decisions puts us on autopilot, and allows the technology itself to become self-reinforcing, which means it will become more and more "optimized" by its very nature.
Brilliant Michael, thoroughly good read, highly informative on this critical issue. And as you indicate perhaps it’s a blessing in disguise to wake us up to the true nature of consciousness. I think so!