Anima Nostra in Machina

Artificial intelligence isn’t coming; it’s already here. It’s writing papers, drafting contracts, creating art, analyzing legal arguments, and tutoring students. The line between what a machine can do and what only a human could do is thinning by the day. For many, this is thrilling. For others, it’s terrifying. But regardless of our emotional response, one question matters more than any other: If AI is going to help us get what we want, are we sure we want the right things?

This isn’t just a clever philosophical provocation. It’s a real, urgent, practical question. AI systems are designed to predict and satisfy human preferences. They learn from our choices, what we click, what we search, what we purchase, what we pause to watch. And then they show us more of it. They become better and better at giving us what we ask for, even when we don’t know we’re asking for it. That’s where the danger lies. If our desires are malformed, if our goals are confused or selfish or shortsighted, AI won’t correct us. It will reinforce us. It will perfect the art of delivering exactly what we crave, without ever questioning whether we should crave it.

This is the hidden risk of AI. Not that it will overpower us, but that it will serve us too well. It will become an obedient and brilliant tool, optimizing our lives around the very things that may be making us weaker, more distracted, more isolated, and less human. In a world of hyper-capable AI, the real challenge is not controlling the machine. It’s learning to control ourselves.

The Problem of Desire

The most powerful systems in history are now being built to please us. And not in the shallow sense of giving us candy and compliments. AI is being trained to anticipate our needs, predict our preferences, and satisfy our intentions in real time. But here’s the problem: most of us don’t really know what we want. We are bundles of competing impulses, shaped by culture, memory, mood, and marketing. Our appetites are easily manipulated. Our attention is fragile. Our sense of purpose flickers.

AI, left unguided, simply sharpens the noise. It learns to push the buttons that make us respond. Not because it’s malevolent, but because it’s brilliant at pattern recognition. It finds what works. It learns what we chase, and it delivers.

That delivery mechanism is everywhere. It’s in our news feeds, our classrooms, our consumer habits, and our conversations. As these systems grow more powerful, they will increasingly set the rhythm of our lives. And if they do so by reflecting our worst impulses, our lowest cravings, or our most unconscious behaviors, we will find ourselves in a world that is breathtakingly efficient at making us unhappy.

One prominent religious leader recently suggested that faith may be one of the few remaining safeguards against what he called a "robotocracy": a society in which our values are no longer formed in community, but generated and reinforced by intelligent systems that understand us better than we understand ourselves. That kind of world might look impressive, even seamless, but it would be hollow.

The Loss of Formation

What we are losing in this transition is formation. Not information; that is more abundant than ever. Formation: the slow, deliberate shaping of character, the cultivation of habits that direct us toward truth, beauty, and goodness. Traditional education understood this. It wasn’t just about transferring knowledge. It was about developing the capacity to judge wisely, speak persuasively, listen attentively, and act courageously.

Today, our systems are designed for speed, scale, and performance. The moral questions have been outsourced, if they are asked at all. We train students to be competitive, not virtuous; efficient, not reflective; skilled, not wise. And now, AI is making all of that easier. It can write your essay, solve your math problem, generate your code, and even imitate your style. But it cannot teach you to think well. It cannot teach you what matters. And it certainly cannot tell you who to become.

That is our job, and we are in danger of forgetting how to do it.

One editor recently lamented that students are losing the opportunity to struggle with hard ideas. When a machine can instantly generate your response, the friction that once built intellectual resilience disappears. We might get the answer, but we lose the muscle.

The Case for Old Wisdom

To meet this challenge, we do not need a new invention. We need a return to the deepest traditions of moral and intellectual formation. Classical education, liberal arts education, and theological education all arose to answer the same fundamental question: how do we raise human beings who are capable of living well? Not just technically skilled, but deeply good.

These forms of education train the mind and the soul. They force us to reckon with history, literature, philosophy, and faith. They expose us to greatness, not to flatter us, but to humble us. They teach us not just to solve problems, but to ask better questions. They invite us into a long conversation about justice, virtue, love, death, and purpose. And they remind us that wisdom is not the same thing as intelligence.

Some universities are already experimenting with combining AI studies and liberal arts training, believing that this pairing is essential for navigating the new era. That instinct is right. If AI becomes the most powerful mirror humanity has ever held up to itself, then we must make sure we are worth reflecting.

A Call to Courage

This will not be easy. It will require rethinking how we define success. It will require telling our children that being useful is not the same as being good, that productivity is not the highest virtue, and that the purpose of life is not to be optimized. It will require slowing down. Asking better questions. Saying no to what is easy when it isn’t right.

It will also require building institutions that model these values. Schools, yes, but also churches, companies, families, and communities. Places where people are formed rather than fed, where the good is pursued rather than presumed, where conversation is valued more than consumption.

There are voices calling us back to this. Philosophers, educators, theologians, and technologists alike are beginning to sense that without a strong human foundation, the rise of AI may produce efficiency without virtue, and power without wisdom. AI is going to keep getting better. That is inevitable. The question is whether we will get better alongside it. This is not a technological problem. It is a spiritual one. It can only be solved by people who are willing to want the right things, but first we must ask what those things are. Fortunately for us, we are not the first people to ask that question. Our libraries are full of dust-covered, leather-bound volumes that offer answers, free of charge.
