When Humans Prefer a Machine: Warnings from a 1960s Chatbot Creator

Chatbots aren’t new. Joseph Weizenbaum created one in 1966. And what happened next led him to become a vocal critic of his own creation. What did he see that we need to see now?

Who was Joseph Weizenbaum?

Weizenbaum lived the first thirteen years of his life in Germany until he and his family escaped the Nazi regime and came to America. He was drafted in 1942, serving for five years as a meteorologist for the Army Air Corps. After his service, he finished college thanks to the GI Bill and landed a job as a programmer for General Electric in Silicon Valley. His big break came in 1963 when he received a call from the Massachusetts Institute of Technology inviting him to join the faculty.

At MIT, Weizenbaum was among the best in the burgeoning world of computer science and artificial intelligence. He even distinguished himself in this crowd by developing ELIZA, a breakthrough computer program that simulated human conversation and could interact with users in a natural language, like English. Which is to say, he invented a chatbot. One of the simulations that ELIZA could run was called DOCTOR, where the program “listened” to a patient and replied with follow-up questions, much like a Rogerian psychotherapist would. Weizenbaum, a poor Jewish boy who escaped Nazi Germany, had made it big.
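For readers curious about the mechanics, it is worth seeing how shallow ELIZA’s trick really was: the program matched keywords in the user’s input against a script of rules, “reflected” first-person phrases into second-person ones, and slotted the result into a canned reply. The sketch below illustrates the idea in modern Python; it is not Weizenbaum’s original program (which was written in MAD-SLIP with a far richer rule script), and its rules and phrasings are simplified stand-ins.

```python
import random
import re

# Reflections turn first-person fragments into second-person ones,
# so the program can echo a statement back as a question.
# (Illustrative subset, not Weizenbaum's actual word list.)
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "i'm": "you're", "mine": "yours", "myself": "yourself",
}

# Each rule pairs a keyword pattern with canned response templates;
# {0} is filled with the reflected text the pattern captured.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i feel (.*)", re.I),
     ["Tell me more about feeling {0}.", "How long have you felt {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
    (re.compile(r"(.*)", re.I),  # catch-all when no keyword matches
     ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for their second-person equivalents."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Apply the first matching rule, ELIZA-style."""
    for pattern, templates in RULES:
        match = pattern.search(statement.rstrip(".!?"))
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."  # unreachable given the catch-all rule

print(respond("I feel trapped by my work"))
# e.g. "How long have you felt trapped by your work?"
```

That a handful of pattern-matching rules of this kind could move intelligent adults to confide in the machine is precisely what alarmed Weizenbaum.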

ELIZA’s Unintended Consequences

But what came next surprised him. Some of his colleagues, who should have known better, began humanizing ELIZA, even requesting privacy to share with it their intimate secrets. As he explains in Computer Power and Human Reason: From Judgment to Calculation, he was “startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it.” What concerned him even more was when psychiatrists started suggesting that such computer programs be used in the field with real human patients. Weizenbaum wondered, “what can the psychiatrist’s image of his patient be when he sees himself, as therapist, not as an engaged human being acting as a healer, but as an information processor following rules?” Furthermore, the public at large began claiming that ELIZA “demonstrated a general solution to the problem of computer understanding of natural language.” Despite Weizenbaum’s numerous publications making it clear that this was neither the intention nor the accomplishment of ELIZA, and his argument that “no general solution to that problem was possible, i.e., that language is only understood in contextual frameworks,” the public continued to run with the idea.

Weizenbaum was at a crossroads, and he chose the path less traveled, becoming a high-tech heretic—an outcast among his peers at MIT and a gadfly to those proclaiming the gospel of technological progress. He came to see that the “experience with ELIZA was symptomatic of deeper problems” and to ask hard questions about these problems:

  1. What is it about the computer that has brought the view of man as machine to a new level of plausibility?
  2. In what wider senses has man come to yield his own autonomy to a world viewed as a machine?
  3. How is it that man has not only begun to rely on autonomous machines, but then has come to describe those machines as human-like, and himself as machine-like?

Weizenbaum foresaw where the tech-world was headed and insisted that “ultimately a line dividing human and machine intelligence must be drawn. If there is no such line, then advocates of computerized psychotherapy may be merely heralds of an age in which man has finally been recognized as nothing but clock-work.”

Human Thought is More than Computation

On the eve of the computer revolution, Weizenbaum argued that the introduction of computers “into our already highly technological society” would amplify the forces “that have driven man to an ever more highly rationalistic view of his society and an ever more mechanistic image of himself.” And herein lies his main critique of the world of high technology and AI, of which he was on the cutting edge during his MIT tenure: “The question is whether or not every aspect of human thought is reducible to a logical formalism, or to put it into the modern idiom, whether or not human thought is entirely computable.” Weizenbaum answered with a hard no. There are aspects of human thought that transcend information processing or computer code. He was one of the early thinkers questioning the information-processing/computer model of the brain and human intelligence, arguing that “an entirely too simplistic notion of intelligence has dominated both popular and scientific thought, and that this notion is, in part, responsible for permitting artificial intelligence’s perverse grand fantasy to grow….Although man certainly processes information, he does not necessarily process it in the way computers do. Computers and men are not species of the same genus.”

Weizenbaum saw the limits of viewing human knowledge as essentially encodable. As an example, he considers the sensitive human touch of empathy, which doesn’t necessarily result from a logical thought process but rather flows from intuition, or even a gut-level sense of care. “The knowledge involved is in part kinesthetic,” he explains, and “its acquisition involves having a hand, to say the very least. There are, in other words, some things humans know by virtue of having a human body. No organism that does not have a human body can know these things in the same way as humans know them. Every symbolic representation of them must lose some information that is essential for some human purposes.” He also notes that “there are some things people come to know only as a consequence of having been treated as human beings by other human beings.” What he is driving at is that human intelligence exists in the whole body, and in a community of bodies; it cannot be captured fully by a computer.

All of this led Weizenbaum to push for firm limits on what we ask computers to do, regardless of what they become capable of doing. He poses it this way:

What human objectives and purposes may not be appropriately delegated to computers? We can design an automatic pilot, and delegate to it the task of keeping an airplane flying on a predetermined course. That seems an appropriate thing for machines to do. It is also technically feasible to build a computer system that will interview patients applying for help at a psychiatric out-patient clinic and produce their psychiatric profiles complete with charts, graphs, and natural-language commentary. The question is not whether such a thing can be done, but whether it is appropriate to delegate this hitherto human function to a machine.

As he further argued, “There are important differences between men and machines as thinkers. I would argue that, however intelligent machines may be made to be, there are some acts of thought that ought to be attempted only by humans.”

What Computers Can and Can’t Do

Weizenbaum acknowledges many salutary uses for computers but never loses sight of “the central question of what it means to be a human being and what it means to be a computer.” He is even willing to grant that, with additional advancements, a computer or robot may be able to develop a sense of itself, an understanding of the limits of what it is versus what lies outside of it. He even acknowledges a “willingness to consider it a kind of animal.” But he avoids humanizing it and warns against putting it to use in human tasks of choosing. There is a distinction between a computer’s calculation in order to make a binary decision and a human’s judgment in order to make a moral choice. As the programmer-pariah notes, “however much intelligence computers may attain, now or in the future, theirs must always be an intelligence alien to genuine human problems and concerns.” Or to put it another way, “there are problems which confront man but which can never confront machines, and that man therefore comes to know things [that] no machine can ever come to know.”

This is part of why Weizenbaum argued certain technological projects should be off-limits: “I would put all projects that propose to substitute a computer system for a human function that involves interpersonal respect, understanding, and love in the same category.” He was horrified by the attempts even in his day to operationalize ELIZA in psychiatric settings: “Do we really believe that it helps people living in our already overly machine-like world to prefer the therapy administered by machines to that given by other people?”

Recent surveys show that a significant number of people answer yes to his question, results that would distress Weizenbaum. A 2025 Common Sense Media survey reported that over half of US teens regularly interact with AI companions, and 31% of those who do find those conversations as satisfying as (or more satisfying than) talking to people. Mandy McLean highlights that 90% of AI companion users describe their companions as “human-like.” And even a 2022 study found that 55% of its sample preferred “AI-based therapy.”

But to such survey numbers, Weizenbaum’s response still stands: “There are some human functions for which computers ought not to be substituted. It has nothing to do with what computers can or cannot be made to do. Respect, understanding, and love are not technical problems.” Can does not imply ought. Just because we can, doesn’t mean we should. The technological imperative must be challenged.

Another of Weizenbaum’s concerns was that the computer/machine model of man and the universe eclipses other metaphors and ways of understanding ourselves and the world. As he explains, “tools shape man’s imaginative reconstruction of reality and therefore instruct man about his own identity.” The fact that scientists, technologists, and everyday people utilize the man/machine metaphor is “a sign of how marvelously subtly and seductively modern science has come to influence man’s imaginative construction of reality.” Even as a high priest in the temple of high technology (MIT), Weizenbaum contended that “science itself had been gradually converted into a slow-acting poison…[and] has virtually delegitimized all other ways of understanding.”

Weizenbaum also foresaw how computers aren’t like prior tools: “the arrival of all sorts of electronic machines, especially of the electronic computer, has changed our image of the machine from that of a transducer and transmitter of power to that of a transformer of information.” The mechanical and industrial technologies that remade the world were in large part about energy and machine-power. But Weizenbaum saw the shift toward information and data, with the computer as the template for objectivity, rationality, and human thought.

A Call for Courage

Weizenbaum’s prophetic AI critique was also a call for courage, something he demonstrated throughout his tumultuous life. He writes:

It is a widely held but a grievously mistaken belief that civil courage finds exercise only in the context of world-shaking events. To the contrary, its most arduous exercise is often in those small contexts in which the challenge is to overcome the fears induced by petty concerns over career, over our relationships to those who appear to have power over us, over whatever may disturb the tranquility of our mundane existence…. If this book is to be seen as advocating anything, then let it be a call to this simple kind of courage.

He especially issues a call to the teacher of computer science, who “must teach the limitations of his tools as well as their power.” Even here Weizenbaum shows surprising balance and fairness about the many things computers can do. “I want them to have heard me affirm that the computer is a powerful new metaphor for helping us understand many aspects of the world,” he writes, “but that it enslaves the mind that has no other metaphors and few other resources to call on. The world is many things, and no single framework is large enough to contain them all, neither that of man’s science nor that of his poetry, neither that of calculating reason nor that of pure intuition.”

He leaves his readers with this concluding paragraph in 1976, which is just as fitting today in the age of LLMs:

If anyone is to be an example of a whole person to others, he must first strive to be a whole person. Without the courage to confront one’s inner as well as one’s outer worlds, such wholeness is impossible to achieve. Instrumental reason alone cannot lead to it. And there precisely is a crucial difference between man and machine: Man, in order to become whole, must be forever an explorer of both his inner and his outer realities. His life is full of risks, but risks he has the courage to accept because, like the explorer, he learns to trust his own capacities to endure, to overcome. What could it mean to speak of risk, courage, trust, endurance, and overcoming when one speaks of machines?

The implied answer is that it means nothing to speak of machines in this way. These are uniquely human experiences and traits that must be practiced and protected from the infiltration of the machine. For, as Weizenbaum reminds us, computation is different from comprehension. Calculation is different from judgment. Risk, friction, challenge are essential to human formation in wisdom, virtue, love. And we need to remember this now even more than when ELIZA wrote its first words.



Joshua Pauling

Joshua Pauling is vicar at All Saints Lutheran Church (LCMS) in Charlotte, NC. He is author of the book Education’s End: Its Undoing Explained, Its Hope Reclaimed and co-author with Robin Phillips of the book Are We All Cyborgs Now? Reclaiming Our Humanity from the Machine. He is contributing editor at Salvo, columnist at Modern Reformation, and has written for a variety of other publications, including Areo, CiRCE, Forma Journal, Front Porch Republic, Logia: A Journal of Lutheran Theology, The Lutheran Witness, Mere Orthodoxy, Merion West, Public Discourse, Quillette, The Imaginative Conservative, and Touchstone. He is a frequent guest on the Issues Etc. radio show and podcast. Josh also taught high school history for thirteen years and is now a classical educator and runs his own business making custom furniture and restoring vintage machinery. He studied at Messiah College, Reformed Theological Seminary, and Winthrop University, and is continuing his studies at Concordia Theological Seminary. He and his wife Kristi have two children.
