Why we cling to absolutes—and how we might break free
A growing number of contemporary thinkers argue that we’d all be better off if we treated our beliefs as matters of degree rather than as all‑or‑nothing certainties. Graded beliefs lead to better decisions, smoother conversations, and far less of the psychological whiplash that comes from clinging to an idea long after the evidence has shifted. Yet despite these advantages, most of us still default to absolutes. Certainty is quick, comforting, and cognitively cheap. It gives us closure. Probabilistic thinking, by contrast, asks us to hold nuance, to tolerate ambiguity, and to admit—at least to ourselves—that we might be wrong.
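To make “beliefs as matters of degree” concrete, here is a minimal sketch, in Python with numbers chosen purely for illustration, of what a graded belief looks like in practice: a probability that moves with the evidence instead of flipping between “true” and “false.”

```python
def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: return the updated probability that a claim is true
    after seeing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start 60% confident in a claim (an illustrative starting point).
belief = 0.60

# Supporting evidence that is twice as likely if the claim is true.
belief = update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
print(f"after supporting evidence: {belief:.2f}")   # ~0.75

# Conflicting evidence nudges the belief back down.
belief = update(belief, p_evidence_if_true=0.3, p_evidence_if_false=0.6)
print(f"after conflicting evidence: {belief:.2f}")  # ~0.60
```

The point is not the arithmetic; it is that the belief shifts in proportion to the evidence rather than snapping to 0 or 1.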
It’s no surprise, then, that schools rarely teach this skill. Traditional education is built around right answers, clear rubrics, and standardized tests. The system rewards certainty: you either get the question correct or you don’t. There’s little room for expressing degrees of confidence or exploring how strongly a claim is supported by evidence. Even subjects that live and breathe uncertainty—science, history, literature—are often taught as if they were collections of settled facts rather than evolving bodies of knowledge.
And the social skills we teach reinforce the same pattern. Schools invest far more time in debate than in negotiation. Debate trains students to defend a position at all costs, to project confidence, and to “win” by overpowering the other side. Negotiation, by contrast, requires perspective‑taking, graded beliefs, and the willingness to adjust one’s position as new information emerges. Debate rewards certainty; negotiation rewards nuance. It’s no wonder we grow up fluent in argument but clumsy in collaborative reasoning. By the time we reach adulthood, most of us have been thoroughly trained—academically and socially—to think in binaries.
If our schooling nudges us toward black‑and‑white thinking, our emotions shove us the rest of the way. Stress, anger, fear, and even excitement narrow our cognitive bandwidth. When we feel threatened—socially, intellectually, or emotionally—the brain shifts into a defensive mode that favors quick judgments over careful reflection. Nuance becomes a luxury. Ambiguity feels dangerous. Under pressure, we’re far more likely to declare something “definitely true” or “obviously false,” not because the evidence changed, but because our nervous system is trying to regain a sense of control. In moments of emotional intensity, binary logic becomes a kind of psychological shortcut: fast, simple, and reassuring, even when it leads us astray.
This is where artificial intelligence can play a surprisingly constructive role. AI systems are built to handle uncertainty; they traffic in likelihoods, not proclamations. By analyzing information and estimating how well‑supported a claim actually is, AI can help us see when a confident statement rests on shaky ground or when a tentative idea deserves more credit. But its potential goes far beyond evaluating facts. Imagine an AI that reads a text‑message thread and gently warns you that your next reply might escalate the tension. Or a system that scans a heated comment section and highlights which claims are backed by evidence and which are running on pure emotion. AI could even monitor the tone of a live conversation and alert you—privately, discreetly—when the discussion is drifting toward a point where people stop listening and start defending.
In these ways, AI becomes more than a fact‑checker; it becomes a kind of cognitive companion, helping us navigate the emotional and social pitfalls of certainty. It can flag when we’re slipping into absolutist language, suggest more nuanced phrasing, or remind us that a belief we’re clinging to might deserve a lower level of confidence. It can help us recognize when a conversation is about to turn unproductive, or when someone else’s certainty is masking a lack of evidence. And crucially, AI can also act as a coach rather than a crutch—guiding us through the process of evaluating evidence, managing emotional triggers, and expressing graded beliefs so that, over time, we learn to do these things ourselves. The goal isn’t to outsource our judgment to machines, but to strengthen our own capacity for nuance and reflection.
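As a toy illustration of the “flag absolutist language, suggest more nuanced phrasing” idea, here is a rough sketch. It is not any existing product or API; the word list and suggested replacements are invented for the example, and a real assistant would rely on a trained model and far more context than a lookup table.

```python
import re

# Purely illustrative mapping from absolutist phrases to hedged alternatives.
HEDGES = {
    "definitely": "probably",
    "obviously": "arguably",
    "always": "often",
    "never": "rarely",
    "everyone knows": "many people think",
}

def soften(draft: str) -> tuple[str, list[str]]:
    """Return a more hedged version of the draft plus a list of flags raised."""
    flags = []
    softened = draft
    for absolute, hedge in HEDGES.items():
        pattern = re.compile(rf"\b{re.escape(absolute)}\b", re.IGNORECASE)
        if pattern.search(softened):
            flags.append(f"'{absolute}' -> consider '{hedge}'")
            softened = pattern.sub(hedge, softened)
    return softened, flags

suggestion, flags = soften("You're obviously wrong, and you always do this.")
print(suggestion)  # "You're arguably wrong, and you often do this."
print(flags)       # which phrases were flagged, and the suggested hedges
```

The interface idea is what matters: surface the absolutist phrasing, offer a graded alternative, and leave the final choice to the writer, so the tool coaches rather than replaces judgment.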
By making uncertainty visible and manageable, AI gently nudges us toward a more flexible, realistic way of thinking—one where changing our minds is a sign of maturity rather than weakness, and where conversations become less about winning and more about understanding. If anything, the real promise of AI is not that it will make us more like machines, but that it may help us become better humans. By easing the mental burden of uncertainty, softening our instinct for absolutes, and giving us a moment’s pause before emotion takes the wheel, AI can help us reclaim a kind of intellectual humility that modern life often squeezes out of us. It can remind us that most beliefs live on a spectrum, that confidence is not the same as correctness, and that growth often begins with the simple admission, “I’m not entirely sure.” And perhaps that small shift, repeated across millions of conversations, is how we begin to build a more thoughtful, more patient, and more understanding culture.
