One might ask: what contributions could philosophy possibly make to an understanding of computer technology, in particular Artificial Intelligence (A.I.)? Is this not the exclusive province of technical people who have no need for a philosopher’s meddling? We shouldn’t prejudge this issue; rather, it’s worth exploring whether philosophy can add anything of value to the discussion. And if so, what value does it add?
A.I.: some philosophical thoughts
Contemporary philosopher Andy Clark has made an important contribution to the study of A.I. by raising questions about its assumptions. Clark, who is also trained in the cognitive sciences, has investigated whether A.I.’s model of an abstract computerized ‘mind’, separate from the concrete physical reality of the body and the external world, might be wrong.
Why, he wondered, are our ‘intelligent’ artifacts still so seemingly dumb? Perhaps it is because we have completely misconstrued the nature of intelligence itself. We have conceived of the mind as simply a logical, reasoning device linked to a set of explicit data—a kind of a cross between a logic machine and a filing cabinet.
Instead, Clark offers an alternative: the philosophical theory of the extended mind, which questions the ‘natural boundary’ between the mind and the world. This is a scientific operationalization of Kantian epistemology—a computational and neuroscientific theory known as “Predictive Processing”—in which the mind is not a passive spectator, but actively engaged with sensation. Instead of accepting the empiricist thesis that the brain merely receives and processes sense data from putative external causes, Predictive Processing, à la Kant, argues for generative schemata—“chains of endogenous procedural rules”—which actively shape and structure raw experiential data (though Predictive Processing frames these Kantian themes within a very non-Kantian biological and evolutionary theory). The human mind/brain is an active player in the experiential world, rather than merely reacting to stimuli.
Clark further observes how studies in robotics and A.I. have tended to discount the role of intelligence in functioning in the physical environment, such as walking or performing tasks. This smooth interaction of the body, world, and mind—often seemingly an unconscious process—conflicts with A.I.’s abstract, logic machine model, bifurcated from the external, natural world.
A.I.: philosophically neutral?
Andy Clark has called our attention to the fact that the cognitive model assumed by current researchers in A.I. is not the only account of the nature of the mind, but stands in tension with a countervailing paradigm rooted in an earlier philosophical tradition. Despite the impressive technical accomplishments involved in building such robotic systems, those systems are not philosophically neutral; that is, they are not just pieces of technology constructed in a vacuum, without assuming a particular philosophical context. A.I. is not the last word, but only one voice, albeit an impressive one, among other conflicting models of knowledge and the mind.
Distinctions made by philosopher Ludwig Wittgenstein suggest a way of exploring this issue: because multiple “language games” have their own unique rules, they are incommensurable. For example, the language of science and mathematics differs from any number of other language games such as the language of religion, which in turn differ from each other. Furthermore, A.I. has its own unique language game separate from alternative paradigms of learning and knowledge, which are based on an active engagement of the human biological body with the physical world—a physicality that machine-based A.I., by definition, does not have.
Artificial vs. Creative Intelligence
Extrapolating from Clark’s analysis and Wittgenstein’s insights, we can now discern the limits of A.I.’s abstract machine model: it obviates the need for the human body and emotions as means of knowing and learning. A.I. researchers, in designing robotic systems to perform functional tasks, such as playing chess, translating languages, detecting financial fraud, and proving mathematical theorems, have not touched upon the other ways flawed, non-digitized humans obtain knowledge.
For example, as John Dewey argued, humans naturally think experimentally, testing hypotheses in their encounters with physical reality and social problems in order to find knowledge rather than relying on ‘infallible’, preexisting dogmas for guidance. Artificial Intelligence does not mirror this type of creative intelligence, in which fallible homo sapiens, without absolute rules, are immersed in the world confronting, as Dewey wrote, the unexpected, the “reaching forward into the unknown,” not only learning but changing the given. Moreover, creative intelligence, naturally entwined with the human organism, is conducive to serving human purposes and interests by solving problems and finding knowledge that benefit individuals in a social context.
Thus A.I.’s abstract machine model, despite its important uses, fails to emulate human intelligence in all its richness; instead it is grounded in specific, limited types of cognition, or, in Wittgensteinian parlance, particular language games (e.g. language translation, solving mathematical problems) that differ from the forms of creative, experimental and moral intelligence used, for example, in social reform, public policy, or even artistic innovation.
Multiple intelligences, not one…
If Andy Clark is right, then Artificial Intelligence’s perceived threat to human knowledge workers, as well as to their essential emotional intuitions, is overblown. One-dimensional robotic minds that can win at chess, predict the weather, and perform other problem-solving tasks will not be able to replace human creativity, intuition, activism, empathy, and judgment—that is, the multiple forms of alternative intelligence that do not fit the abstract machine model paradigm, the “logical, reasoning device.” Even Facebook—that vaunted user of A.I.—is finding that it does not replace human intelligence. Facebook’s reliance on A.I. is failing to combat fake news; keywords often cannot effectively identify misinformation, and human intelligence is needed. In other words, Artificial Intelligence cannot replace human intelligence.
Humans, unlike robotic systems, experience their minds and lives through many different contextual grounds, learning and knowing via emotional, artistic, political, musical, literary, and biological encounters with the world that go beyond just technical problem resolution. This means that there will always be new challenges for philosophers—the ultimate knowledge workers—to understand the different forms of intelligence that humans use in their efforts to comprehend—and change—the world.
Thomas White is a Wiley-Blackwell journal author, and a previous contributor to Undercurrent Philosophy, Aeon, The Philosopher’s Eye, and other journals. He is also a poet and speculative fiction writer whose work has appeared in print and online in Australia, Canada, the United States, and Great Britain.