Introduction
“A picture held us captive,” Wittgenstein famously wrote, “And we couldn’t get outside it, for it lay in our language, and language seemed only to repeat it to us inexorably.”[1] It seems to me that this is an apt description of the field of Artificial Intelligence. At its heart, AI research seeks to model human intelligence as expressed in language and other productions of intelligent thought by reliably reproducing these results of conscious thought. In what follows, I will argue that AI researchers have produced powerful and valuable models of human intelligence, but that these fall far short of reality. My argument will proceed in two parts. In the first, I will attempt to sketch a brief history of the field of Artificial Intelligence, offering definitions of the field in the process. I will proceed historically because this helps show how basic concepts have developed. My argument in this section will conclude by distinguishing ‘sub-symbolic’ and ‘symbolic’ AI, arguing that these usefully model the two systems of rationality that Daniel Kahneman calls System 1 (sub-symbolic) and System 2 (symbolic). Part two will transition to philosophy and will attempt to show the limitations of these computational models of human intelligence. First, I will show the aporia at the heart of the AI project, in its ‘bracketing’ of consciousness as seen in Alan Turing’s Turing test. If the AI researcher offers a picture of human rationality from the ‘outside’, the account I will give will be from the ‘inside’ of conscious, lived experience. Next, I will turn to the philosophers Mary Midgley, Iris Murdoch, and David Bentley Hart to sketch an alternative picture of humans as ‘metaphysical animals’.[2] From these philosophical accounts, we will arrive at a picture of human rationality as embodied and moving toward reality with a desire to know and understand. In the last section, I will argue that general or sentient AI is unlikely given my arguments in part two. Rather than worrying about sentient AI, we should be more worried about how the development of AI is driving and extending the logic of what Jacques Ellul called technique.
Part 1: A Brief History of Artificial Intelligence: Defining the Field
The dream of Artificial Intelligence is an old one, at least as old as Homer’s Iliad. The ancient epic about the sack of Troy, written some 2,700 years ago, contains an eerie account of intelligent mechanical beings assisting the god Hephaistos in his workshop: “Handmaids ran to attend their master, / all cast in gold but a match for living, breathing girls. / Intelligence fills their hearts, voice, and strength their frames, / from the deathless gods they’ve learned their works of hand. / They rushed to support their lord as he went bustling on.”[3] The conception of artificial intelligence here aligns with the philosopher Selmer Bringsjord’s definition of the field of AI research. He defines it simply as “the field devoted to building artificial animals… and, for many, artificial persons”—adding, in both cases, the qualification “or at least” creatures that “appear” to be animals and persons.[4] Such dreams abound, particularly in our own time, when science fiction and dystopian novels harken to a future that is not yet.
Beyond these imaginings, the foundations of the field of Artificial Intelligence, or more specifically computer science, lie in the formulation of symbolic logic, dating back at least to Aristotle (384-322 BC): “…computer science grew out of logic and probability theory… Computer science, today, is shot through and through with logic; the two fields cannot be separated.”[5] Inasmuch as logic models the consequence relations in human thought and language, and thereby the process of reasoning, it is of interest to those seeking to produce machine models of human intelligence.[6] Over the centuries, attempts were made to conceive or create devices capable of mechanically carrying out the calculations done by human calculators. Leonardo da Vinci (1452-1519) designed a mechanical calculator, and similar attempts were made by Blaise Pascal (1623-1662), as well as by Gottfried Wilhelm Leibniz (1646-1716), who tried to design a device that could work with concepts.[7] By far the most powerful of the early ‘computers’ was one conceived and partially built by Charles Babbage (1792-1871), known as the ‘Analytical Engine’, which had the basic features of modern computers.[8] His assistant Ada Lovelace noted the revolutionary implications of this new technology, describing the Engine as representing the “idea of a thinking or of a reasoning machine.”[9]
The development of computing and Artificial Intelligence comes into its own with Alan Turing (1912-1954). In his landmark 1936 paper “On Computable Numbers,” Turing formulated the ‘Turing machine’. This conceptual machine is equipped with a store (memory), a moving head (reader and modifier), and a table of rules (software), allowing it to take in information, carry out operations, and output results. There is an infinite number of conceivable Turing machines, each of which carries out a different operation. A Turing machine capable of carrying out the operations of all other machines is called the ‘universal machine’, and a computer capable of this is called ‘Turing complete’. These definitions of the limits of computing remain central to the field.
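To make the mechanism concrete, here is a minimal sketch of a Turing machine in Python. The particular rule table (a machine that inverts a binary string) is my own invented illustration, not an example from Turing’s paper, and the dictionary-as-tape is a finite stand-in for Turing’s unbounded tape:

```python
# A minimal Turing machine: a tape (store), a moving head (reader/modifier),
# and a table of rules (software). The example machine is illustrative:
# it inverts a binary string (0 -> 1, 1 -> 0) and then halts.

def run_turing_machine(tape, rules, state="start"):
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")        # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write                  # modify the cell under the head
        head += 1 if move == "R" else -1     # move the head along the tape
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Table of rules: (state, symbol read) -> (symbol to write, move, next state)
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", rules))  # prints "01001"
```

The universal machine is then simply a Turing machine whose table of rules interprets a description of any other machine’s table—software running software.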
Turing believed that Turing machines were capable of mechanically reproducing the mental operations of human minds. As Turing’s biographer Andrew Hodges puts it, “The upshot of this line of thought is that all mental operations are computable and hence realizable on a universal machine: the computer.”[10] While Turing’s machine was conceptual, he was involved in the design and creation of some of the first digital computers. Turing went on to define the project of Artificial Intelligence in his seminal paper “Computing Machinery and Intelligence.” In the paper, Turing brackets the question of whether a machine can think by proposing the ‘Turing test’: a machine can be considered ‘intelligent’ or ‘thinking’ when a human interpreter cannot distinguish its printed responses from the printed responses of a human interlocutor. Turing’s seemingly simple test is helpful because it manages to bracket the biological distinction between humans and computers by “drawing a fairly sharp line between the physical and intellectual capacities of a man.”[11] This places the emphasis on those elements that AI researchers want to model: human cognition as expressed in language. This is a crucial point that helps us further define the AI project. The AI researcher Margaret A. Boden states that while Artificial Intelligence requires physical machines to carry out its operations, “it’s best thought of as using what computer scientists call virtual machines.”[12] In other words, the real problem of Artificial Intelligence is not so much the machinery as the conceptualization of a logic, a software, that will make physical machines “do the kinds of things that minds can do.”[13] It should be said, however, that contemporary approaches to AI are diverse. In their book on AI, Stuart Russell and Peter Norvig define four different approaches to Artificial Intelligence, represented in the table below:[14]
|  | Human-Based | Ideal Rationality |
| --- | --- | --- |
| Reasoning-Based | Systems that think like humans. | Systems that think rationally. |
| Behavior-Based | Systems that act like humans. | Systems that act rationally. |
In a landmark 1943 paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Warren McCulloch (1898-1969) and Walter Pitts (1923-1969) showed the promise of Turing’s artificial intelligence program. Uniting propositional logic, Turing’s ideas on computation, and neuroscience, the pair uncovered the binary system that underlies all three: classical propositional logic admits only two semantic values, True or False; computer code carries out its operations with 1s and 0s; and neurons in the brain have two states, on (firing) or off (not firing).[15] Logical connectives could be modeled by the network arrangement of neurons, allowing complex propositions to be articulated: “The core implication was clear: one and the same theoretical approach—namely, Turing computation—could be applied to human and machine intelligence.”[16] Another milestone in the history of Artificial Intelligence was the 1956 gathering at Dartmouth College, where John McCarthy coined the term ‘Artificial Intelligence’. The gathering brought together some of the figures who would become luminaries in the field, including McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon.[17] A few months earlier, the power of the emerging computers had been demonstrated by a program called the Logic Theorist, which proved 38 of the theorems in Bertrand Russell and Alfred North Whitehead’s Principia Mathematica and found several more elegant proofs along the way.[18] In 1958, Frank Rosenblatt (1928-1971) developed the Perceptron, an early form of neural network, which “learned to recognize letters without being explicitly taught…”[19] The Perceptron drew inspiration from how neurons fire: it receives multiple inputs, each of which is assigned a weight. If a certain threshold is reached, the perceptron ‘fires’, outputting a 1 for ‘true’; if the threshold is not reached, it does not fire and outputs a 0 for ‘false’. By adjusting the weights and checking the machine’s answers against the correct answers, it could be ‘trained’ over multiple iterations to produce the correct result. This idea of training through positive and negative reinforcement was shaped by the behaviorist theories of the psychologist B. F. Skinner.[20] Despite its initially impressive results, the Perceptron was too simple to work with anything but the simplest images. It was largely abandoned as an approach, especially after a scathing 1969 critique from Minsky and Seymour Papert in Perceptrons.[21] The logical AI program pioneered by Turing went on to massive success in the development of ‘expert systems’ from 1969 to 1986, “in which human experts devised rules for computer programs to use in tasks such as medical diagnosis and legal decision-making.”[22] Deep Blue’s defeat of Garry Kasparov in 1997 was another landmark. The approach pioneered by Rosenblatt re-emerged in the 1980s with the development of backpropagation in multi-layered networks by David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Backpropagation feeds the network’s error backward through its layers, adjusting each weight in proportion to its contribution to the mistake, and thereby allowing the network to correct itself.
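To make Rosenblatt’s idea concrete, here is a toy perceptron sketch in Python. The task (learning the logical AND of two inputs), the learning rate, and the number of training passes are my own illustrative assumptions:

```python
# A toy perceptron in the spirit of Rosenblatt's: weighted inputs, a
# threshold, and a training rule that nudges the weights after each
# wrong answer. Here it 'learns' the logical AND of two inputs.

def fire(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0   # 'fires' a 1 if the threshold is crossed

# Training data: the inputs paired with the correct answers for AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                       # multiple iterations over the data
    for inputs, correct in examples:
        error = correct - fire(weights, bias, inputs)
        # Adjust each weight toward the correct answer (the perceptron rule).
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([fire(weights, bias, x) for x, _ in examples])  # prints [0, 0, 0, 1]
```

Minsky and Papert’s critique turned on precisely this architecture’s limits: a single perceptron can learn AND, but not XOR, because no single threshold can separate XOR’s cases.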
The emergence of probabilistic reasoning and Bayesian networks led to a re-integration of AI with other fields: “One consequence of AI’s newfound appreciation for data, statistical modeling, optimization, and machine learning was the gradual reunification of subfields such as computer vision, robotics, speech recognition, multiagent systems, and natural language processing that had become somewhat separate from core AI.”[23] With the growth of the internet, by 2001 the available ‘big data’ made it possible to train algorithms to identify patterns in enormous data sets, leading to developments such as face and voice recognition, text and image generation, and much more.
Symbolic and Sub-symbolic AI
Melanie Mitchell argues that there have been two main approaches to the development of Artificial Intelligence: symbolic AI and sub-symbolic AI. She characterizes symbolic AI as consisting of “words or phrases (“the symbols”), typically understandable to a human, along with rules by which the program can combine and process these symbols to perform its assigned task.”[24] This approach works using logic, as in Alan Turing’s ‘Turing machine’. A particular problem is identified—say, producing a program that can play chess—logical rules are devised in the classical if-then form, and a program is created that can ‘reason’ about the particular domain it has been designed for. Sub-symbolic AI (also known as connectionist AI), on the other hand, works very differently, by ‘learning’ how to do a task, as in Frank Rosenblatt’s Perceptron. Sub-symbolic approaches can take in data from the real world and ‘learn’ from it, giving them an ‘embodied’ character in contrast to the disembodied intelligence of symbolic AI. Boden characterizes the differences as “sequential instructions” (symbolic) versus “massive parallelism” (sub-symbolic); “top-down control” via the logical program (symbolic) versus “bottom-up processing” through the network of weights (sub-symbolic); logic (symbolic) versus probability (sub-symbolic); and the brittle, non-adaptive nature of symbolic approaches versus the “dynamical and continuously changing” sub-symbolic.[25] We could also add: understandable by humans (symbolic) versus black boxes (sub-symbolic); the inability to generalize (symbolic) versus the ability to generalize to cases not in the training set (sub-symbolic). These differences can be advantages or disadvantages depending on the task: symbolic systems tend to be worse at perceptual and motor tasks, while sub-symbolic approaches lack the precision of symbolic approaches, and “no one knows how to directly program complex human knowledge or logic into these [sub-symbolic] systems.”[26] The philosopher Andy Clark put it pithily: sub-symbolic approaches are “bad at logic, good at Frisbee.”[27]
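As an illustration of the symbolic, if-then style, here is a minimal forward-chaining rule sketch of the kind used (in vastly more elaborate form) in the expert systems mentioned above. The facts and rules are invented for illustration:

```python
# A minimal sketch of symbolic AI: hand-written if-then rules that a
# program applies by forward chaining. The 'medical' facts and rules
# here are invented placeholders, not from any real expert system.

facts = {"has_fever", "has_cough"}

# Each rule: if all the conditions hold, conclude the consequent.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

changed = True
while changed:                      # keep applying rules until nothing new follows
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # 'reason' by adding the derived fact
            changed = True

print(facts)  # now includes 'possible_flu' and 'recommend_rest'
```

Everything here is legible to a human, and brittle in just the way Boden describes: the system can conclude nothing beyond what its hand-written rules assert.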
A useful way of contrasting these approaches is through the psychologist and economist Daniel Kahneman’s distinction between ‘System 1’ and ‘System 2’ cognition in his book Thinking, Fast and Slow. Mitchell herself suggests this in her characterization of the difference between symbolic and sub-symbolic AI: “Symbolic AI was originally inspired by mathematical logic as well as by the way people described their conscious thought processes. In contrast, sub-symbolic approaches to AI took inspiration from neuroscience and sought to capture the sometimes-unconscious thought processes underlying what some have called fast perception, such as recognizing faces or identifying spoken words.”[28] Kahneman’s distinction between ‘System 1’ and ‘System 2’ is not intended to denote two different regions of the human brain, but rather two different forms of cognition that humans use: fast, automatic, intuitive thinking (System 1) and slow, deliberate, reasoned thinking (System 2). System 1 is always working in the background and “operates automatically and quickly, with little or no effort and no sense of voluntary control.”[29] System 2, on the other hand, must be consciously engaged: it “allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.”[30] It is useful to think of symbolic AI as analogous to System 2, and sub-symbolic AI as analogous to System 1. The obvious next step, it would seem, is to get symbolic and sub-symbolic approaches to work in sync, as System 1 and System 2 do in human brains.
There have been recent attempts at creating hybrid approaches such as ‘neuro-symbolic’ AI, which “attempts to combine the deep learning models’ capability to learn from data and generalize with the symbolic systems’ capability for clear reasoning and problem-solving.”[31] The success of some of these models on IQ tests is promising.[32] Ian Deary notes that IQ tests such as the WAIS-III require cognitive abilities across 13 different forms of mental activity, including vocabulary, identifying similarities, arithmetic, picture arrangement, and more.[33] Historically, AI approaches have been designed to succeed at individual intellectual tasks (playing chess, for example), but so-called ‘general intelligence’, the ability to succeed at a wide range of intellectual tasks, has remained elusive. Models succeeding on IQ tests show promise for a path to general intelligence. That said, further improvement may depend on producing AIs that are embodied and can therefore draw on sensory experience.
Part 2: Aporia in AI: “The Blind Spot”
We have now given a brief sketch of the field of artificial intelligence and its attempts to model human intelligence. We have seen the success of symbolic AI approaches in modeling something like the human reasoning of System 2. At the same time, sub-symbolic AI approaches model the more intuitive, pattern-recognizing ability of System 1. This, combined with advances in robotics and computer vision, could conceivably lead to an AI that is embodied and can ‘learn’ from its environment. These are impressive inventions. The project inaugurated by Turing of creating mechanical models of human cognition has been very successful.
At the same time, there is something paradoxical about the Artificial Intelligence project: it has seemingly turned the phenomenon it studies and reproduces inside out. In Turing’s ‘Turing test’, the emphasis is on creating a machine that can speak, or reason-in-speaking, in such a way that a human interrogator would be fooled.[34] But this way of studying the mind ends up bracketing what the mind is—conscious awareness.[35] Surely what is distinctive about human minds, our deployment of language and our cognition, is that we are aware of doing so: we employ language to express something and use reason to think about something. What this means is that Artificial Intelligence—even in its most impressive modern form—cannot produce anything more than Turing’s test aims to produce: a model of cognition that shows the products of the cognitive process without actually thinking. John R. Searle demonstrates this with his famous Chinese Room thought experiment. In the thought experiment, a man is given statements in Chinese. Using a table of rules, he can infallibly give an appropriate response in Chinese. The man can do this—and this is key to Searle’s point—without understanding a word of Chinese. Searle contrasts this rule-follower with a native English speaker conversing in English with an interlocutor. One speaker understands the language they are speaking, while the other does not, even though from “the external point of view… the answers to the Chinese questions and the English questions are equally good.”[36] Searle’s point is that Artificial Intelligence produces the correct response, the correct ‘output’, without actually understanding, while the native English speaker produces the correct response while understanding, ‘cognizing’. That both can pass the Turing test suggests that there is an aporia of subjective, conscious awareness at the heart of the AI project.[37]
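Searle’s rule-follower can be rendered concrete in a few lines of code. The following sketch answers Chinese questions by pure symbol-matching; the question-and-answer pairs are invented placeholders, and nothing in the program models meaning:

```python
# A minimal sketch of Searle's Chinese Room: a table of rules that pairs
# symbols with symbols. The entries are invented placeholders; there is
# no representation of meaning anywhere in the program.

rule_book = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thank you."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely."
}

def chinese_room(question: str) -> str:
    # The 'man in the room' matches shapes to shapes; any understanding
    # exists only for the human interlocutors outside the room.
    return rule_book.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # an 'appropriate' reply, with zero understanding
```

From “the external point of view” the replies may be perfectly appropriate; the semantics exists only for the humans outside the room.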
This aporia is the same aporia that underlies the scientific project as a whole. In their book The Blind Spot: Why Science Cannot Ignore Human Experience, Adam Frank, Marcello Gleiser, and Evan Thompson diagnose what they call ‘the blind spot’: “At the heart of science lies something we do not see that makes science possible, just as the blind spot lies at the heart of our visual field and makes seeing possible… in the scientific blind spot sits direct experience—that by which anything appears, shows up, or becomes available to us. It is the precondition of observation, investigation, exploration, measurement, and justification.”[38] Michael Polanyi makes a similar argument drawing on Kurt Gödel’s incompleteness theorems. Gödel’s theorems show that formal systems—such as those underlying computing and mathematics—cannot establish their own ground from within the formal system. Polanyi sees the work of Gödel as “establishing a realm that cannot be formalized and hence is prior to the computation that a machine can do.”[39] Polanyi identifies this outside realm with “the primordial capacity of reflection on rules which is itself not bound to those rules.”[40] The formal system itself functions, then, only “by virtue of unformalized supplements” of conscious reflection. Namely, “symbols must be identifiable and their meaning known, axioms must be understood to assert something, proofs must be acknowledged to demonstrate something, and this identifying, knowing, understanding, acknowledging, are unformalized operations on which the working of the formal system depends.”[41] Polanyi sees these pre-formal mental acts as providing the “semantic function” of the formal system’s syntax.[42] We saw how Searle’s Chinese Room experiment shows that a computer cannot actually ‘think’, ‘reason’, ‘cognize’, ‘understand’, or do anything else that is the purview of intentional consciousness; Polanyi extends this to the roots of computing in Gödel’s and Turing’s work on formal systems by arguing that these tacitly depend on the capacity for conscious reflection. David Bentley Hart makes this point in his typically acerbic prose:
“A physical system can be neither right nor wrong, but only efficient or inefficient. Software no more thinks than [an] abacus thinks, and its coding is only a set of protocols for physical processes. No computer has ever used language, responded to a question, or intended a meaning. No computer has ever played chess or Go. No computer has ever added two numbers together, let alone entertained a thought.”[43]
What is a Human? Metaphysical Animals
If the argument in the section above is convincing, then there is a ‘blind spot’ in the AI project, which cannot help but put forward a distorted picture of humans and their intelligence. It is clear, after all, that how AI reaches its results is very different from how humans think. Hubert Dreyfus seminally criticized symbolic AI (or GOFAI) as reproducing the “old rationalist dream,” which is “based on the old Cartesian idea that all understanding consists in forming and using appropriate symbolic representations.”[44] Furthermore, sub-symbolic AI has suffered from ‘adversarial examples’, where small, specific, imperceptible-to-humans changes to an image’s pixels lead the AI to characterize it as something wildly different (for example, where we would see a ‘Temple’, the AI would see an ‘Ostrich’).[45] It is time, then, to consider an alternative picture, this one sketched not ‘from the outside’, but rather from within the lived experience of the living human subject.
In sketching out an alternative, I can think of no better place to look than the quartet of women—Iris Murdoch, Mary Midgley, Elizabeth Anscombe, and Philippa Foot—who responded to the dominant picture of humans as “efficient, calculating machines” in post-war Britain.[46] These women were responding to the same philosophical milieu of logical positivism and scientific reductionism that produced the likes of G. E. Moore, A. J. Ayer, Gottlob Frege, Bertrand Russell, and Alan Turing. In this intellectual milieu (which is very much still our own),
“Speculative metaphysical enquiry—the pursuit of knowledge of human nature, morality, God, reality, truth and beauty—was to give way to clarification and linguistic analysis in the service of science. The only questions were those that could be answered by empirical methods… [Metaphysical] questions… go beyond the limits of what we can measure and observe, and so they were designated ‘Nonsense’… [This was] a vision of human beings as ‘efficient calculating machines’, individuals whose intellectual powers enable them to move beyond their messy animal nature so as to organize and rationalize an otherwise brute and formless world. It was declared that there were no genuine philosophical problems; questions that were not amenable to scientific investigation were embarrassing muddles or linguistic confusions.”[47]
This is an intellectual culture that is ‘captive to a picture’, caught up in a model of the world that filters out, or distorts, much of what human beings think, feel, desire, imagine, and see. In the place of this picture, these four women put forward an alternative: humans as ‘metaphysical animals’. We are creatures, rooted in and limited by our animal nature, and yet simultaneously oriented by our desire for a ‘Good’ that transcends us. The picture of rationality that emerges here cannot be separated from ethical transformation. Knowing well cannot be separated from loving well.
In her essay “The Concept of Beastliness: Philosophy, Ethics, and Animal Behaviour,” Mary Midgley puts humans back in their place in the animal kingdom: “We are not just rather like animals; we are animals. Our difference from our relatives may be striking, but the comparison has always been and must be crucial to our view of ourselves.”[48] Midgley’s targets in her essay are the existentialists, social constructivists, and behaviorists—we might add transhumanists—who see humans as infinitely malleable, open to modification, reconstruction, or ‘bio-hacking’. Instead, through her study of the biological and ethological literature, Midgley wanted to outline a human nature constituted by its biological continuity with our non-human relatives. This leads Midgley to argue that humans are more animal-like, and animals more human-like, than we have traditionally admitted. For Midgley, human rationality is not something floating above our nature—that would be a “pneumatic conception of thinking.”[49] Instead, Midgley wants to see our reason as “our nature itself, becoming aware of its own underlying pattern.”[50] Rationality is not some abstract entity; rather, it “belongs to the vocabulary of a particular species with particular needs.”[51] Frans de Waal talks about animal intelligence as shaped by its particular umwelt, that is, “an organism’s self-centered, subjective world, which represents only a small tranche of all available worlds.”[52] Intelligence, then, is shaped for a creature’s umwelt, which in turn is shaped by the creature’s biological constitution. In the human case, intelligence helps us to balance and harmonize the conflicting desires and drives that creatures such as ourselves experience. As such, rationality cannot be separated from our biological, emotional, and social existence.[53]
Midgley’s account of the human animal is helpfully extended by Iris Murdoch’s discussion of the place of ‘the Good’ in human life. For Murdoch, the central problem of ethical life is our self-absorption, the propensity of the self to turn away from the reality of the world, away from the suffering of others, and into morbid reflexivity. What is required for moral improvement is a movement of attention, “outward, away from the self…”[54] In making these movements, Murdoch finds significance in the concepts of ‘the beautiful’ and ‘the good’. Something beautiful can draw us out of ourselves to recognize it. Murdoch gives her famous example of standing by the window in “an anxious and resentful state of mind.” Suddenly she catches sight of a “hovering kestrel” and is enraptured by its beauty. She finds herself drawn out of herself: “The brooding self with its hurt vanity had disappeared. There is nothing now but kestrel.”[55] Murdoch thinks that the Platonic ideal of ‘the Good’ as a transcendent standard of perfection does something similar for us. As she puts it, “are we not certain there is a ‘true’ direction towards better conduct, that goodness ‘really matters’…?”[56] Murdoch notes how in any field of human excellence—in making art, writing mathematical proofs, or playing hockey—we cannot help but measure and compare degrees of perfection. As we grow towards this idea of perfection, we “come to perceive scales, distances, standards, and may be inclined to see as less than excellent what previously we were prepared to ‘let by’.”[57] For Murdoch, although the idea of ‘the Good’ cannot be located in any empirical reality, it is required, at least imaginatively or conceptually, to orient us, to make sense of our movement of desire towards it.
In his book All Things Are Full of Gods, David Bentley Hart notes that all our acts of mind or of body are movements of desire. At the most basic level, our “animal impulses—toward food, sex, sensual pleasure of every kind” are movements of a “more fundamental longing for happiness, satiety, repose, and so on.”[58] More profoundly, every act of mind, every movement of rationality, is simultaneously an act of desire: “To know the world, the mind has to venture out into the world in a movement of desire.”[59] We speak because we desire to be understood, we want to speak the truth, and we desire to communicate something good or something we find beautiful (or, perhaps, because we desire to belong). Our acts of reasoning and understanding are acts of desiring to know what is true. Our productions of art are expressions of desire to reveal something of ourselves or some vision we have grasped. Even (or perhaps especially) the most supposedly ‘dispassionate’ scientist or cool-headed mathematician can understand the burning passion at the heart of the intellectual life: the frustrating desire to know something that just slips our capacity; the single-minded pursuit that leads to bodily neglect; the intellectual rapture; the bliss of insight.[60] This, to me, is the most profound difference between human and machine intelligence: “No machine ventures out from itself in desire—in love—towards the whole of reality, engaging with all the particular things of this world under the canopy of transcendental yearnings and acts of judgments.”[61] There is no movement of desire—to speak the truth, to produce something beautiful—in the oft-times garbled productions of ChatGPT. The distinguishing mark of human intelligence is, simply, love. This, then, is the real Turing test of the metaphysical animals: “You shall know them by their love.”
Concluding Reflections
In the preceding section, I have spent many words describing the picture of humans as ‘metaphysical animals’ rather than as ‘efficient calculating machines’. From Midgley, Murdoch, and Hart, a rich picture of human cognition emerges. Our cognition is part of our animal nature, rooted in the evolutionary history we share with other creatures. Rationality does not float free from this embodied history but is rooted in our nature and constrained by it. At the same time, we are creatures who strive towards and are drawn to a ‘Good’ that transcends us. Our acts of cognition are always movements of desire towards a horizon of value that ever escapes us. In this picture of human intelligence, we cannot separate reason from embodiment, understanding from desire, or rationality from love. But if rationality is always a movement of desire for an embodied human subject, then our attempts to understand the world are inseparable from our attempts to understand ourselves. (Are there not often deeply personal reasons why certain intellectual problems grip us?)
It should be clear that the model of human intelligence put forward by AI researchers pales in comparison to the picture sketched above. This is only a ‘knock’ on the project if we have mistaken the model for reality. It seems to me that the most useful way to conceive of Artificial Intelligence research is as the putting forward of a model of intelligence. Such a model is useful for illuminating parts of how our brains and cognition work, and it has demonstrated tremendous power in simulating tasks that humans employ intelligence to achieve. However, just as early modern science achieved its great explanatory power by bracketing values, purpose, teleology, and qualitative experience, so modern AI research has achieved its tremendous results by bracketing the conscious intentionality that makes human intelligence what it is. Just as we should not assume that our scientific models capture the capaciousness of reality—or indeed, given their limited scope, aims, and methodologies, are capable of doing so—we should not assume that our AI models capture everything there is to human intelligence.
John Searle makes a very useful (and often misunderstood) distinction between weak and strong AI research. Searle is often taken to be distinguishing limited from generally intelligent AI—in fact, this misunderstanding illustrates his point. Weak AI researchers pursue AI research as useful models of elements of cognition, or as tools to do certain tasks. Strong AI researchers come to see these models as literally minds, as actually intelligent.[62] Just as in scientism the model is mistaken for reality, so in AI research the powerful programs are taken to be doing what they are merely modeling. The problem with this is twofold. On the one hand, it turns the open-ended scientific quest for understanding into something brittle and rigid by assuming it has arrived. On the other hand, it distorts our self-conception as human beings to assume that all we are can be captured in AI models or scientific descriptions. This is particularly true in our computer age. These models are so powerful that we find it almost impossible to escape the metaphor of ourselves as computers; moreover, the virtual worlds we have created are so enticing and stimulating that we find it hard to extract ourselves from computers. David Bentley Hart brilliantly describes our condition with the ancient fable of Narcissus, who spurned his lover Echo for his reflection in the water. Likewise, we have turned from nature and the real to gaze at our own creations: “Human beings turn for companionship to the thin, pathetic, vapid reflection of their own intelligence in their technology only because they first sealed their ears against the living voice of the natural world, to the point that nothing more than its fading echo is still audible to them.”[63] We have become caught in a metaphor of our own making; a picture has held us captive.
What worries me about the rapid development of AI is not that we are racing towards artificial general intelligence, then the Singularity, and finally our sentient AI overlords—given what I have argued above, I find that unlikely. Many other practical worries concern me: the way that AI research is being driven by a zero-sum arms race between the major players of our multipolar world;[64] the emergence of DeepFakes and the use of AI by nefarious actors; the massive power consumption of AI and its contribution to climate change;[65] the way reliance on generative AI could lead to an atrophying of human abilities to reason, write, and create. We could also mention the paradox that while generative AI promises to release us from the drudgery of bullshit office jobs, it has spawned thousands of dehumanizing bullshit jobs to perpetuate itself: from YouTube content moderators, to Amazon warehouse workers, to third-world ‘image labelers’ for training AI.[66]
All of these are things worth worrying about. What worries me more than anything, however, is how AI’s growth and metabolization of all aspects of life is the culmination of what Jacques Ellul called technique. For Ellul, the characteristic drive of modern society is the pursuit of the most efficient means, “the one best way,”[67] in any given domain. Ellul sees this pursuit of “the one best way” as increasingly dominating all areas of modern life—relationships, agriculture, politics, education, commerce, science, technology—such that all of these form a total, interlocking system. Thus for Ellul, technique names both this interlocking system of efficiency-seeking methods and the general drive for the one best way: “Technique is the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of activity.”[68] There are no moral considerations for technique, only the extension of efficiency under Robert Oppenheimer’s maxim: “When you see something that is technically sweet, you go ahead and do it.” I cannot but see the development of ever more powerful AI models as extending the logic of technique even further into human life. It is useless to provide examples of this tendency; they are ubiquitous and naked for all to see. On all sides we are presented with new forms of generative AI that can do something for us more ‘efficiently’ (I suppose so that we can spend more time escaping into the virtual world). The questions of what all this speed and efficiency is for, what end it serves, what human or natural good it upholds, what humanizing tendency it promotes—these questions are never answered. As Ellul puts it: “…we do not know whither we are going, we have forgotten our collective ends, and we possess great means: we have set huge machines in motion in order to arrive nowhere.”[69]
Is it possible that AI as a technology can be used for what Ivan Illich calls convivial ends? In other words, is it possible for us to use AI as a tool without being used by it? Is it possible to learn from AI research into human cognition without becoming enslaved to an inadequate picture of human intelligence? Is it possible to apply AI as a tool in some limited domains where efficiency is needed (say, for eliminating bullshit jobs) while avoiding a totalizing application of it to all domains? I suspect that in general, in our society as a whole, the answer to these questions will be no. Our development of AI will continue to be driven by technique, and it will not be limited in life-affirming and convivial ways. Those of us who are worried and disturbed by these trends will have to seek smaller-scale, counter-cultural ways of intentionally and convivially using (or not using) these technologies.
[1]. Ludwig Wittgenstein, Philosophical Investigations, 4th ed., trans. G. E. M. Anscombe, P. M. S. Hacker, and Joachim Schulte, ed. P. M. S. Hacker and Joachim Schulte (Oxford: Wiley-Blackwell, 2009), §115.
[2]. I have taken this image from the title of Clare Mac Cumhaill and Rachael Wiseman, Metaphysical Animals: How Four Women Brought Philosophy Back to Life (New York, NY: Anchor Books, 2022).
[3]. Homer, The Iliad, trans. Robert Fagles (Ontario: Penguin Books, 1990), 18:488-492.
[4]. Selmer Bringsjord, “Artificial Intelligence.” Stanford Encyclopedia of Philosophy, (2018) https://plato.stanford.edu/entries/artificial-intelligence/index.html#ref-8.
[5]. Joseph Y. Halpern et al., “On the Unusual Effectiveness of Logic in Computer Science,” The Bulletin of Symbolic Logic 7, no. 2 (June 2001): 214.
[6]. Jc Beall and Shay Allen Logan, Logic: The Basics 2nd Edition (New York: Routledge, 2017), 5.
[7]. Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th ed. (Hoboken: Pearson, 2021), 40.
[8]. Russell and Norvig, A Modern Approach, 58.
[9]. Augusta Ada Lovelace, Scientific Memoirs 3: Sketch of the Analytical Engine Invented by Charles Babbage, (1843, Wikisource, 1930), 697. https://en.wikisource.org/wiki/Scientific_Memoirs/3/Sketch_of_the_Analytical_Engine_invented_by_Charles_Babbage,_Esq./Notes_by_the_Translator.
[10]. Andrew Hodges, “Alan Turing,” Stanford Encyclopedia of Philosophy (2013), https://plato.stanford.edu/entries/turing/#BuiBra.
[11]. Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 434.
[12]. Margaret A. Boden, Artificial Intelligence: A Very Short Introduction. (Oxford: Oxford University Press, 2018), 3.
[13]. Boden, A Very Short Introduction, 1.
[14]. Reproduced from Bringsjord, “Artificial Intelligence.” See also Russell and Norvig, A Modern Approach, 31-38.
[15]. Russell and Norvig, A Modern Approach, 63.
[16]. Boden, A Very Short Introduction, 8.
[17]. Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (New York: Farrar, Straus and Giroux, 2019), 25.
[18]. Mitchell, A Guide for Thinking Humans, 10; Patrick D. Smith, Hands-On Artificial Intelligence for Beginners (Birmingham: Packt, 2018), 8.
[19]. Boden, A Very Short Introduction, 69.
[20]. Mitchell, A Guide for Thinking Humans, 38.
[21]. Smith, Hands-On Artificial Intelligence, 10.
[22]. Mitchell, A Guide for Thinking Humans, 33.
[23]. Russell and Norvig, A Modern Approach, 71.
[24]. Mitchell, A Guide for Thinking Humans, 29.
[25]. Boden, A Very Short Introduction, 69-70.
[26]. Ibid., 70; Mitchell, A Guide for Thinking Humans, 57.
[27]. Mitchell, A Guide for Thinking Humans, 57.
[28]. Ibid., 35.
[29]. Daniel Kahneman, Thinking, Fast and Slow (Anchor Canada, 2011), 20.
[30]. Kahneman, Thinking, Fast and Slow, 21.
[31]. “Neurosymbolic AI: Bridging the Gap Between Neural Networks and Symbolic Reasoning” Alphanome.AI, Finance AI Research Lab, Feb. 7, 2024, https://www.alphanome.ai/post/neurosymbolic-ai-bridging-the-gap-between-neural-networks-and-symbolic-reasoning.
[32]. Abbas Rahimi and Michael Hersche, “This AI could beat you at an IQ test,” IBM, March 9, 2023, https://research.ibm.com/blog/neuro-vector-symbolic-architecture-IQ-test.
[33]. Ian J. Deary, Intelligence: A Very Short Introduction 2nd ed. (Oxford University Press, 2020).
[34]. It seems that we are in the realm of skepticism: Descartes’ automatons in hats, Wittgenstein’s private language fantasy. There may be some Cavellian insights to develop here.
[35]. I am not criticizing Turing for ‘missing’ this. He emphasizes that he is bracketing the question of conscious awareness because it is impossible to study.
[36]. John R. Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3 (1980): 418.
[37]. Searle, “Minds, Brains, and Programs,” 419.
[38]. Adam Frank, Marcello Gleiser, and Evan Thompson, The Blind Spot: Why Science Cannot Ignore Human Experience (Cambridge: MIT Press, 2024), EPUB, Introduction.
[39]. Paul Richard Bloom, “Michael Polanyi: Can the Mind Be Represented by a Machine?” Polanyiana 10, no. 2 (2010): 40.
[40]. Bloom, “Michael Polanyi,” 40.
[41]. Ibid., 41.
[42]. Ibid., 41. Emphasis mine.
[43]. David Bentley Hart, All Things Are Full of Gods: The Mysteries of Life and Mind (New Haven: Yale University Press, 2024), 274.
[44]. Hubert L. Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Reason (Cambridge: MIT Press, 1999), x-xi.
[45]. Mitchell, A Guide for Thinking Humans, 142.
[46]. Mac Cumhaill and Wiseman, Metaphysical Animals, 90.
[47]. Mac Cumhaill and Wiseman, Metaphysical Animals, x.
[48]. Mary Midgley, “The Concept of Beastliness: Philosophy, Ethics, and Animal Behaviour,” Philosophy 48, no. 184 (April 1973): 144.
[49]. Wittgenstein, Philosophical Investigations, §109.
[50]. Mary Midgley, Beast and Man: The Roots of Human Nature (New York: Routledge Classics, 2002), 195.
[51]. Midgley, “The Concept of Beastliness,” 133.
[52]. Frans de Waal, Are We Smart Enough to Know How Smart Animals Are? (New York: W. W. Norton & Company, 2016).
[53]. The last three sentences are reproduced from an essay I wrote called “Talking Animals: Midgely, Wittgenstein and the Grammar of Beastliness.”
[54]. Iris Murdoch, “On God and Good,” in The Sovereignty of Good (New York: Routledge, 2007), 58.
[55]. Murdoch, “The Sovereignty of Good,” in The Sovereignty of Good, 82.
[56]. Murdoch, “On God and Good,” 59.
[57]. Ibid., 60.
[58]. Hart, All Things Are Full of Gods, 425.
[59]. Ibid., 420.
[60]. Consider Bertrand Russell’s description of the passionate intellectual life of Wittgenstein: “His disposition is that of an artist, intuitive and moody. He says every morning he begins his work with hope, and every evening he ends in despair – he has just the sort of rage when he can’t understand things that I have. I have the most perfect intellectual sympathy with him – the same passion and vehemence, the same feeling that one must understand or die, the sudden jokes breaking down the frightful tension of thought… he even has the same similes as I have – a wall parting him from the truth which he must pull down somehow. After our last discussion, he said ‘Well, there’s a bit of wall pulled down.’” (Ray Monk, Ludwig Wittgenstein: The Duty of Genius (New York: Penguin Books, 1991), EPUB, Chapter 3.)
[61]. Hart, All Things Are Full of Gods, 292.
[62]. Searle, “Minds, Brains, and Programs,” 417.
[63]. Hart, All Things Are Full of Gods, 480.
[64]. Robert Wright, “AI and The China Question,” November 27, 2024, in Robert Wright’s Nonzero Podcast, podcast, MP3 audio, https://podcasts.apple.com/ca/podcast/robert-wrights-nonzero/id505824847?i=1000678446478.
[65]. Casey Crownhart, “AI is an energy hog. This is what it means for climate change.” MIT Technology Review, May 23, 2024, https://www.technologyreview.com/2024/05/23/1092777/ai-is-an-energy-hog-this-is-what-it-means-for-climate-change/.
[66]. Mitchell, A Guide for Thinking Humans, 129.
[67]. Jacques Ellul, The Technological Society (New York: Vintage Books, 1964), 79.
[68]. Ellul, The Technological Society, x.
[69]. Jacques Ellul, The Presence of the Kingdom (New York: Seabury Press, 1967), 63.