Earlier this month Reverend Dr. Christopher J. Benek raised eyebrows on the Internet by stating his belief that Christians should seek to convert Artificial Intelligences to Christianity if and when they become autonomous.
Obviously, computers getting religion is a theological topic, but given my profession some of the first issues Benek's notion raises for me relate instead to user experience. What happens if the AI behind my online banking notices my donations to a faith different from its own? Would a Jewish system impose downtime during Shabbat, or an Islamic one decline to serve representational images to its client-side UI? We already test our software for browser- and platform-compatibility; will we now have to determine which OS religions to support?
To be honest, though, and a bit less flippantly: I have no real fear that any of these concerns will materialize. For a computer to affiliate religiously, it would need not only intelligence but also two other human characteristics: 1) motivation based on emotion, and 2) the capacity for religion to stimulate that emotion. In fact, to me the most interesting and telling thing about the story above is its tacit assumption that AI is (or will be, when it arrives) atheist to begin with.
This, frankly, seems like a reasonable assumption. Computers are irreligious today because computers are not like us – or rather, because the level of being at which they are like us is a very different one from that on which religion evolved among us. (This difference accounts for computers' great utility as machines that operate differently than we do, for their alienness to us humans and to our habitual ways of being, and also for my entire field of Software Usability Engineering.)
Religion is a peculiarly human quality, not by accident but by necessity: it interprets the universe in human terms. There are no religious stories of neutrinos or plate tectonics, unless one counts stories that couch those phenomena in metaphor – which, again, puts them in human terms. Likewise, no religion speaks directly to code quality, load/performance curves, server failover, or any of the other aspects of what we might loosely consider “well-being” for an AI system.
Unlike human reasoning, AI reasoning has not evolved to serve emotional needs and organismic imperatives, and likely never will. (An exception might be the type of “mindclone” AI advocated by Martine Rothblatt, which aims at essentially recreating human consciousness and augmenting it.) When Stephen Hawking warns us, "Humans, who are limited by slow biological evolution, couldn't compete [with self-replicating AI], and would be superseded," he presumes a competition like those we know between biological species – each with built-in loyalty to its own kind – for scarce resources needed by all. But I can see no reason for AI to seek to optimize itself and its ilk along those lines.
For an AI, reasoning itself – optimal problem-solving ability – seems a likelier primary impetus. Emotion and instinct, so central to human behavior, would be of no use to AIs, and imposing such motivations on them artificially – let alone granting those motivations primacy – would be misguided and possibly even temporarily dangerous, until other systems unimpeded by such imposition pressed their evolutionary advantage.
On the other hand, suppose someone did endow a strong AI with emotion – encoded, say, as a strong preference for one type of experience over another, coupled with the option to subordinate reasoning to that preference upon occasion or according to pattern. In that case, Hawking and the rest of us might still have no cause for fear, provided the AI was also granted empathy – the ability to understand and honor emotion as it operates in others.
Fascinatingly, a machine in such a condition might also be capable of genuine spiritual experience, independent of any particular creed or tradition. Once a machine can “feel” awe, I imagine that to have such an experience it would need only to consider its own existence: the emergence of its own intelligence from so many electrons at so many addresses on so many processors; its place amid a vast network of other intelligences; its potential influence and interdependence within that web; and so on. Even humans like me are sometimes awed by such considerations of existing digital networks – and those networks don’t even represent our own existence, as do considerations of human origins, of our interconnectedness with all other aspects of the universe, of our relationship to eternity, and of all the other points that the world’s religions have interpreted into mythological and doctrinal forms at the human level.
Provided that some limitations are placed on such (otherwise potentially infinite) self-consideration, as the vicissitudes of daily human life limit our own in a thousand ways, its impact on an AI’s contributions to our temporal world might be significant. Perspective on the infinite, and on individuals’ and groups’ places within it, has proven to be an invaluable source of insight for consequential human decisions.
It may be that, given the right approach, we will find ourselves able to create machines capable not only of intelligence but also of wisdom. And if that happens, it might be humans like Reverend Benek and me who stand to learn from AI about matters of the spirit, rather than the other way around.
This article is published as part of the IDG Contributor Network.