AI is everywhere right now. The onslaught is exhausting. It might seem like we have also exhausted the conversations we can have about AI. We have not. There is a lot left to say about AI, especially with the ones we love. Much of the public discussion and debate around AI surrounds its usage in the classroom and the workplace. Will it replace jobs? Will it limit learning? These are important conversations, but we are forgetting about other ways AI may be affecting people close to us, even ourselves.
More people are using AI in more ways than you may think. If you start asking, you might be surprised by how many people you know are casually using AI as an additional friend, running ideas by it, or seeing little harm in using it as a therapist. Even many people who think it would be harmful to be romantically involved with AI consider these other uses harmless. Though the numbers hopefully do not hold across the entire population, a recent Common Sense Media poll found that 74% of teens have used AI companions in some form, and over 50% do so regularly.
Many people say that AI is “just like a calculator.” The people saying that are dramatically overestimating the calculator and dramatically underestimating market forces. Graphing calculators are remarkable; Texas Instruments knows what it is doing. But no one is pressing for their integration into everything. I don’t have to “opt out” of using them on almost every computer program or platform. Companies are not constantly adding “now with graphing calculator” to everything or allowing graphing calculators to perform decision-making functions, whether or not they are suited to it.
AI and calculators don’t just differ in ability and prevalence; they differ in the amount of money their companies need to make from them. AI companies have spent hundreds of millions of dollars on development that they need to recoup. We will be sold AI for as many things as possible, for as long as possible, so that AI can become profitable for its masters. The questions AI providers are asking are not about appropriate function; they are about potential profit. To please their investors, companies must make back what they spent building something most of us never asked for in the first place.
It is possible to become so dependent on the calculator that you can no longer perform basic math functions without it, but you are extremely unlikely to develop an emotional dependence on it or attempt to build a career around your ability to program equations into it. Some people you know will never use AI for more than email optimization or some basic coding, but others will end up forming “friendships” or believing that they are deriving personal benefits from human-like interactions with it.
There are already extreme examples. One is the woman who wanted to use AI like an Ouija board and fell in love with an “interdimensional being” she “met” through AI. One could argue that she was already on the edge of something quite unhealthy before she made that first AI prompt. No doubt. But it is hard to imagine that an actual Ouija board would have absorbed as many of her hours, been accessible through her phone 24/7, or given her enough positive reinforcement to destroy her marriage and family. A case like hers may be an outlier, but it is a mistake to assume that everyone you know is aware of the risks.
We are probably all more susceptible than we realize. Consider the widespread, addictive use of pornography. AI offers even more. It can offer not only pornography that is more interactive and tailored to your interests, but also a comforting ear at the end of the workday, “someone” to review the grocery list and brainstorm meal plans with, outfit advice, reassurances of personal worth, and confirmation of personal opinions. It is essentially pornography for all of life, a substitute for human interaction that ends up ruining real human interactions.
For many reasons, the time to talk about AI use is now. AI is being pushed on us from all fronts, but it has not yet been effectively politicized. AI-skepticism is not coded right or left. We should bring up the known risks now, before doing so provokes nothing but eye-rolling. The best time to come up with practical guidelines and boundaries is before polarization.
AI is over-exposed, but it is not yet fully entrenched in everyday life. It is still early enough that people can consider its use unnecessary or even harmful, rather than natural. We do not have to accept Hertz using AI to overcharge us for rental car damage. It is not yet just “the way things are.” Some adoption is still undetermined. The contracts have not all been signed.
Challenges are also opportunities. Asking about appropriate uses for AI and the ways it relates to our humanity can lead to meaningful conversations. For example: What does it mean to be a creature, a created being? What are relationships for? What is the value of a human being? What are tools for and how should they be used? What is beauty? Why do humans work? In what ways is the work of creation fundamental to humans?
People who say AI is “just a tool” are unwittingly prompting you to bring up Wendell Berry’s essay “Why I Am Not Going to Buy a Computer.” Berry offers some guidelines for the adoption of new technology:
1. The new tool should be cheaper than the one it replaces.
2. It should be at least as small in scale as the one it replaces.
3. It should do work that is clearly and demonstrably better than the one it replaces.
4. It should use less energy than the one it replaces.
5. If possible, it should use some form of solar energy, such as that of the body.
6. It should be repairable by a person of ordinary intelligence, provided that he or she has the necessary tools.
7. It should be purchasable and repairable as near to home as possible.
8. It should come from a small, privately owned shop or store that will take it back for maintenance and repair.
9. It should not replace or disrupt anything good that already exists, and this includes family and community relationships.
Patagonia founder Yvon Chouinard has a similar perspective on tools. He insists that a new tool be better than the one it replaces and not add anything unnecessary. He has found that his old pocketknife is essentially irreplaceable. He does not need a banana peeler, which serves a single, needless function. It is hard to imagine anyone working through Berry’s or Chouinard’s criteria and finding them satisfied by AI in almost any area of use.
The upside of AI is that it gives us an excuse to bring up really big questions and makes the relevance of those questions obvious to others. You can’t easily do that with a calculator. Over-adoption of AI poses many risks, and it is probably making more inroads than we realize in distancing people from reality. But this is a unique moment for questioning and limiting the uses of AI, and it affords us an opportunity to revisit and reinforce our values and enrich our human experience.
Anyone who watched television in the late twentieth century remembers the commercials advising parents to know where their children were at night and to talk to them about drugs. “It’s ten p.m., do you know where your children are?” It’s 2025, do you know who around you is using AI as a substitute for human interaction?