Frozen Insight in a Moving World

Jeff Uren ★ 2026-01-25

One of the most divisive technical and work-related topics of our time is the rift between people who wholeheartedly believe in AI and its application to optimization, efficiency, and growth, and people who maintain strong, consistent levels of scepticism.

This is an article exploring that divide, where it might arise from, and why what we believe about the long-term prevalence of AI is misaligned with the reality of how society adopts and approaches new technologies and "paradigm shifts".

I am guilty of getting hung up on individual specifics about AI tooling and how it's applied in particular areas. I admittedly had fun generating images and funny songs, but it ultimately felt hollow, and over a short period I slowly but surely stopped using "creative" AI. I don't feel "good" about it. I've historically been vocal about being anti-AI in general. But the problem I've found is that the perceived issue with AI isn't necessarily the one that people highlight continually in anti-AI posts. I find myself disagreeing with anti-AI sentiment just as often as with pro-AI sentiment, and it's hard to pin down the "why".

The Anti-AI Narrative

The current "narratives" around anti-AI sentiment fall into a broad spectrum of issues that get rehashed over and over, in varying ways and to different degrees. These are by no means exhaustive; I'm sure there are more arguments:

Fragility
AI systems produce confident answers that can be subtly wrong, making errors harder to detect and increasing the cost of verification.
Loss of Systems Context
Models reason locally and statistically, not holistically, missing emergent behaviour, long-term coupling, and cross-domain consequences.
Energy-Value Imbalance
The environmental and financial cost of training and running models is often disproportionate to the marginal productivity gains they deliver.
Accountability Dilution
When decisions are mediated by AI, responsibility becomes diffused ("the model said so"), which weakens ownership, review, and outcomes.
Cognitive Atrophy
Over-reliance on AI can erode skills, judgement, and learning by removing the struggle where expertise is developed.
Psychological Harm
Work mediated by AI can feel less meaningful, reducing pride, authorship, and connection. This contributes to high stress and disengagement.
Homogenization of Outputs
Widespread AI use flattens out style, approaches, and solutions. It reduces diversity of thinking and kills competitive differentiation.
Frozen Past Bias
Models encode historical data and assumptions, reinforcing old norms and hardening against cultural or domain change.
False Authority
Statistical outcomes are mistaken for understanding, which gives AI unearned credibility in complex or moral decisions.
Governance Lag
The rate at which AI is being adopted and pushed far exceeds the speed at which appropriate oversight, auditability, and resources arrive.
Tool-Role Confusion
AI shifts from assistant to implicit decision-maker without explicit consent, clarity, or a restructuring of responsibility.
Security Surface Area
Models, their infrastructure, and their deep ties to internal, sensitive information create new vectors for data leakage, prompt injection, model inversion, and supply-chain risk.
Inappropriate Workflows
Workflows relegate the person executing the work to sign-posting rather than decision-making.
Enshittification
Historically human spaces (forums, social media, books, movies, TV, and other content) are being flooded with AI outputs, causing people to distrust and walk away from them.

The Other Side

If we're going to have a proper, constructive conversation about AI we need to represent both sides of the fence. If all we do is talk about the things that are being derided, we lack the opportunity to figure out where the balance is between the positive and negative aspects of the change.

Leverage of Expertise
AI allows scarce expert knowledge to be applied more broadly, reducing bottlenecks and raising the baseline capability of teams.
Acceleration of Iteration
By reducing time spent on boilerplate, scaffolding, and recall, AI shortens feedback loops and increases the speed of experimentation.
Reduction of Mechanical Load
AI offloads repetitive cognitive tasks (formatting, translation, refactoring, documentation), freeing humans to focus on higher-level reasoning.
Error Surface Reduction
In constrained domains, AI can catch classes of mistakes humans routinely miss (syntax errors, common security flaws, consistency issues).
Search and Recall at Scale
AI excels at synthesizing large bodies of existing information, making institutional knowledge more accessible.
Accessibility and Inclusion
AI lowers barriers for non-experts by translating, summarizing, or scaffolding work across language, skill, or domain.
Parallelism of Thought
AI enables multiple solution paths to be explored simultaneously, increasing the chance of discovering viable approaches.
Economic Efficiency
At scale, AI can reduce marginal cost per unit of output, particularly for high-volume, low-variance tasks.
Human-in-the-loop Augmentation
When properly constrained, AI can act as a "second set of eyes", improving outcomes without replacing decision authority.
Shift from Recall to Judgement
Proponents argue AI allows humans to spend less time remembering facts and more time evaluating trade-offs.

Consider insurance companies replacing claim evaluations wholly with AI, call centers laying off staff in favour of AI systems, and most modern businesses using AI chatbots for website support. An important observation here is that most proponents assume AI is subordinate to human judgement, even though real-world deployments more often than not drift far beyond that assumption.

Reframing

We can tighten these concerns and benefits down to some discrete, rough categories. These again aren't exhaustive but they serve our purpose here of breaking down what is happening inside the technological rift that people are experiencing.

For AI

Against AI

The sheer level of fear-mongering and catastrophizing around AI is having wide-scale mental health ramifications in and of itself. It's like meta-anxiety layered on top of the actual anxiety. The concerns underpinning these arguments are valid, but the people making them tend to treat the problem as a collection of technical and ethical shortcomings that need to be addressed as individual problems.

That resistance needs to be reframed as something more fundamental. The individual issues that are called out are a response to a lower-level, deeper, more visceral shift that people can feel. We need to unpack that tension, hold it up to the light, describe it as it really is, and understand why only some people are feeling it.

Setting up

When we talk about AI we tend to default to a few main framings that fall into categories like productivity, accuracy, ethics, and employment. Those framings are useful, but they keep the debate firmly centered around outcomes.

What people seem to be looking at less is the kind of value that AI optimizes for, and what that optimisation displaces. We need a lens or framework that operates above the outcomes, use cases, and technical details to help us understand.

A few people have tried, in vague, hand-wavey ways, to frame the conversation around AI and tech in general in terms of Pirsig's metaphysics of quality. I can only surmise that either AI chatbots are being inundated with Pirsig's metaphysics, or everyone's just finished reading Lila and been heavily influenced by how easy it is to apply to tech in general. But most, seemingly, don't go deep enough.

The Metaphysics of Quality, developed by Robert Pirsig, offers a way of understanding reality that places value (not matter or mind) at the centre of experience. Rather than treating quality as subjective preference or objective measurement, it treats it as something immediately felt, guiding action before it can be analysed or named. The framework is useful, not because it provides answers, but because it helps explain why systems, ideas, and institutions can feel either alive or dead long before we can articulate what's gone wrong.

At a basic level, we can distinguish between two modes of value.

Static value
codification, repetition, optimisation
Static value shows up as rules, metrics, models, standards, and "best practices", and it's essential for scale and coordination.

Dynamic Quality
situational judgement, lived experience, intuition, and responsibility in the moment
Dynamic Quality is how new value enters a system in the first place, before it gets formalized and stratified into static values or tossed aside through irrelevance.

Modern life, and work even more specifically, is built on top of a constant tension between these two modes.

Static culture ⇒ stabilizes what we already know
Dynamic judgement ⇒ adapts what we don't know

AI enters into this whole equilibrium at a pretty sensitive spot. It doesn't just automate execution. More and more it operates in spaces where judgement has traditionally dominated.

We need to boil this down to a single, important and meaningful question that we can work towards an answer for. Otherwise we're going to end up with tons of different answers, tons of questions, and only a few that match up.

The main, meaningful question we need to arrive at an answer for is this:

"What happens when systems designed to optimize for frozen understanding begin to claim dominance over human judgement?"

That will bring us back around to viewing the familiar arguments we keep seeing repeated online, in parliaments, and at the office water cooler in a light that frames them less as individual, isolated concerns and more as symptoms of a deeper cultural issue, and potentially the signs of a growing sense of self-awareness.

How does change actually happen?

Change follows a fairly straightforward lifecycle in terms of the tension between static and dynamic.


1. Dynamic breakthrough
Something works or feels right before you can actually explain why.
2. Selection
Society, sub-cultures, or even individuals notice that the breakthrough has value.
3. Stabilization
It becomes a rule, tool, process, or abstraction.
4. Decay
The static pattern starts to resist new Dynamic Quality.
5. Tension
Innovation vs. preservation.

Repeat


All progress lives inside of this tension between static and dynamic.

The Industrial Revolution

I actually really don't like this comparison because it's rooted in a lot of fear-mongering, but AI gets framed through analogy to the Industrial Revolution pretty often. Just as mechanisation transformed physical labour, automating repetitive tasks, increasing output, and ultimately reshaping entire economies, AI is presented as the next inevitable step: the automation of thinking. These arguments frame resistance to AI as the same as workers resisting industrialization. Fear-based, short-sighted, and, even more annoyingly, futile and out of our control.

That comparison is really powerful because it has a sort of built-in sense of accomplishment and reformation of society associated with it. There was disruption, change, hardships, and social upheaval. In the long-term we came out of that era with massive gains in productivity, shifts in living standards, and all kinds of technological progress.

By comparing to that period of history, AI adoption gets framed as not only an absolute necessity, but the right direction from a moral perspective. It's touted as something that we have to endure in order to benefit later.

I don't see any automated painting machines or book-writing automatons from the revolution still kickin' around past their initial novelty. That whole analogy really over-simplifies the nature of change itself. We replaced human muscle with machinery, but we retained a large share of the ability to express judgement, make meaning, and take responsibility. AI operates directly on domains where judgement is the work: reasoning, decision-making, creativity, and evaluation. Treating these two transformational periods as equivalent obscures what's actually changing.

I don't want to go too far into the comparison because, frankly, it's not really worth it, and I only mention it here because it's so often the first port of call in an argument. People who use this comparison are reaching for the fear-mongering, trying to project the feeling of inevitability about that period onto AI without much else in common to stand on. We can do better by homing in on a much more granular aspect of that era.

The Victorians

Since we all seem to be hell-bent on applying Pirsig everywhere, let's use the example he cites in Lila for a major comparison to societal change. Victorian moral codes emerged during a period of rapid industrial and social change, offering a rigid framework of propriety, discipline, and "correct" behaviour intended to impose order on a very quickly evolving society.

These norms were codified into social rules, institutions, and moral expectations that claimed universal authority, usually in tension with how people actually lived and felt. They're an important comparison because they show how systems of static moral certainty, introduced to manage upheaval, can outlive their relevance. They carry on as prescriptions long after the lived experience that gave them meaning has moved on.

Victorians optimized for order, productivity, and propriety. But they ignored how people actually experience life. As society evolved and moved on, those frozen values became moralizing, brittle, and finally irrelevant.

This is a much better historical comparison because AI embodies static intellectual value at scale.

It produces answers without struggle, passes judgement without risk, and delivers results without lived engagement.

The thing is, meaning arises where dynamic quality is felt:

Compared to Victorian morality, the trajectory of AI won't be one where AI fails outright. It will be a cultural failure rather than a full rejection, and it will most likely play out over decades. If it follows the Victorian pattern, it will look something like this:

When systems claim moral or creative authority over lived experience, culture doesn't outright revolt against them; history shows time and time again that it evolves past them.

Metaverse vs. Anything Else

The metaverse was presented as the next major evolution of the internet. It was a persistent, shared digital space where people would work, socialise, create, and trade through immersive virtual environments. Rather than browsing websites or using apps, users were meant to enter the internet. They would present themselves as avatars in three-dimensional worlds that blurred the boundaries between physical and digital life. The metaverse would become a universal layer beneath social media, commerce, and collaboration, reshaping how people interact online the way the smartphone did.

Initially the metaverse held a lot of Dynamic Quality.

The problem is that it hardened into something else altogether, and the rigidly defined usage outpaced lived experience.

Human judgement was slowly replaced with narrative inevitability.

"This is the future, you just haven't caught up yet"

The key thing to note is that no one raged against the metaverse; there was just complete and utter disengagement from the concept. VR itself still survives in a few places (gaming, training simulations, specialized professional tools), but those niches are nowhere near as widespread as the metaverse's ambitions were aiming for.

It's really important to point out specifically what happened here, because the metaverse didn't fail because of any of the following:

It failed because static predetermined meaning arrived before the lived meaning, the system demanded participation without earning it, and optimization replaced discovery.

It's a funny one, because all the other examples we cover here are instances of past frozen static values trying to exert control over fresher dynamic quality. In the metaverse's case it tried to freeze the future before people had experienced why it mattered.

Crypto/Blockchain and AI

Blockchain is a near-perfect precursor to AI's trajectory, more so than any comparison to the Industrial Revolution. Blockchain was, and still is, an attempt to overreach into Dynamic Quality.

The Promise

Crypto promised trust without institutions, and code as law.

Static Overreach

But it came with frozen values, immutable ledgers, and algorithmic morality.

Dynamic Quality Lost

Crypto removed discretion, forgiveness, and contextual judgement from financial transactions, ultimately sacrificing meaningful participation in trade for efficiency.

Cultural Reaction

Both AI and crypto are tolerated where static value fits, rejected where Dynamic Quality matters.

We can frame the outcome from this even tighter by saying that Crypto tried to replace social morality. What AI is poised to do is replace intellectual judgement.

From a cultural perspective, we heavily value our judgement; it's a core part of our being and our identity. Morality shifts and changes over time, but our judgement is something we hold very near and dear to our hearts.

That makes the backlash against AI faster, more emotional, and more culturally visible.

Ultimately, crypto has collapsed into background infrastructure, mostly in very niche spaces, because being able to express our morals, and to let them shift and change as new information and insight becomes available, is more important than subscribing to a frozen set of static morals that can't be changed or controlled.

Taylorism

Taylorism was an early industrial management approach that treated work like a machine problem to be optimized. Tasks were broken down, timed, standardized, and controlled from above, with thinking separated from doing. It worked well for repetitive factory labour, but it also stripped workers of judgement, craft, and ownership.

It turned people into components rather than participants. Over time, that loss of agency proved just as costly as the inefficiencies Taylorism actually set out to fix.

This should sound familiar to any software engineers with an ounce of experience. Taylorism arrived through practical learning. Factory managers were dealing with disorder, waste, and unsafe conditions. By watching work more closely, trying changes, and keeping what improved output and safety, they found better ways to organise repetitive labour. Those improvements came from direct experience with real problems. That was Dynamic Quality in action, responding to conditions as they were.

The failure came when those early insights were frozen into rules and treated as permanent, universal truths. Tasks were standardized beyond their usefulness, judgement was pushed up the hierarchy, and workers were expected to follow instructions rather than think. Static value replaced ongoing learning. Judgement on the floor wasn't trusted anymore.

The outcome, again, wasn't open revolt; it was quiet withdrawal. People did the job as written, not as understood. Skill, pride, and responsibility faded. Over time, organisations lost adaptability and paid for it through disengagement, inefficiency, and brittle systems. Taylorism survives only where work is simple and repeatable, and is avoided where judgement matters.

AI is following a similar path. It begins with real gains from experimentation and assistance, where people try tools, keep what helps, and stay in control of decisions. This early phase is Dynamic Quality at work.

The risk comes when those tools harden into defaults and authorities. Outputs are trusted because they're generated, workflows are shaped around the model, and judgement is quietly removed from the person doing the work.

Digital Social Norms

For a long time, unrestricted access to social media was treated as a settled "good". It rested on a value that favoured openness, connection, and free participation. That value held while the harms it caused were abstract or easy to dismiss.

Over time, those harms have become more concrete. Rising anxiety, sleep disruption, attention problems, bullying, and compulsive use have begun to show up consistently in children. These aren't abstract concerns. They cut into biological patterns like rest and nervous system regulation, and social patterns of trust, belonging, and safety. At that level, the costs are immediate and hard to argue away.

The recent bans on social media based on age reflect newer dynamic judgement asserting itself. Faced with lived evidence, society has begun to prioritize protection and care over openness. Lower-level patterns demanded attention, and rightly took precedence over an older intellectual ideal that no longer fit reality.

We have only seen the surface of AI's impact on mental health, socialization, and other cognitive components of being human. But the surface layer, after a short period of time, isn't looking particularly great. Like social media, it will take time to understand what the impact is; and like social media, the rate at which it's being adopted means its growth will outpace our ability to understand its true impact until some damage has already been done.

Smartphones

This one might strike pretty close to home for a few people. I was in my early 20's when the first Apple iPhone was released. I stood in line at a Bell store in downtown Toronto and waited to see if I could nab one. I ended up having to wait a week for restocking, but I eventually got my hands on one, a white one!

I was enthralled. Smartphones were a whole new world. We can look back now and realize how horribly bad the user interfaces and overall design was compared to more modern aesthetics, but at the time it was utterly groundbreaking.

They were a clear expression of dynamism. A convergence of communication, computation, and creativity that felt open-ended and empowering. Early smartphones expanded what people could do in the moment (navigation, creation, connecting, experimenting) without prescribing how those capabilities were used. They earned their place by augmenting judgement and extending human agency into new spaces.

People like to decry this as consumers being eternally unhappy unless they have the next, newest, greatest thing, but frankly, I haven't bought a new fridge, or office chair, or car in decades. And yet I always seem to have the latest iPhone. I'm chasing that original feeling. Unfortunately, over time, that dynamic promise has hardened into static value. Smartphones have become platforms optimized for engagement metrics, behavioural prediction, and constant connectivity. Attention has been systematized, interaction patterns standardized, and "smartness" increasingly means automated nudges, recommendations, and background decisions made on the user's behalf. The device shifted from tool to intermediary, shaping our behaviour rather than simply supporting it.

Viewed this way, the growing interest in "dumb" phones and deliberately limited devices isn't just nostalgia or technophobia. It's a value correction. What people are pushing back on isn't computation, but the loss of agency, focus and intentionality.

In quality terms, static optimization (engagement, convenience, automation, etc...) has begun to crowd out the dynamic qualities that made smartphones valuable in the first place. Things like presence, choice, and lived control. The conflict is between systems that decide for us and tools that allow us to decide with them.

My next phone will likely be a much simpler one after decades of Apple iPhones, when I can afford something like a Light Phone, or maybe a Boox Palma and a Nokia 3310. The failure mode of the smartphone isn't technical, it's human. Focus is declining, stress is increasing, and people feel less in control of their time. The outcome is a slow, quiet correction. People are turning off features, uninstalling apps, cancelling subscriptions, limiting use, or simply choosing simpler devices. And they're not doing it because they're "rejecting technology"; they're trying to regain some agency and space to express their own judgement.

Smartphones are losing trust because static optimisation replaced the dynamic judgement that once made them useful, not because they've become more capable.

Comparison to other things

That's probably enough examples to drive the theme home in general. But feel free to take this same lens to other historical shifts between dynamic and static quality that have occurred throughout history and even more modern times.

My sense of "fun" might not be very good, but aw heck, just to drive the point home, let's do a whirlwind tour of a few more because it's fun:


Scholasticism vs. Empirical Science
Static Value: Authoritative texts and inherited doctrine
Dynamic Value: Observation, experiments, lived experience
Failure Mode: Authority treated as truth, even when experience contradicts it
Outcome: Science routed around doctrine and kept on truckin'
Scholasticism failed when static authority outweighed lived evidence.

Central Planning vs. Market Discovery
Static Value: Fixed plans, quotas, top-down certainty
Dynamic Value: Local knowledge, feedback, adaptation
Failure Mode: Plans froze assumptions that no longer matched reality
Outcome: Black markets, inefficiency, eventual collapse or reform
Central planning failed by replacing discovery with control.

Academic Canon vs. Living Culture
Static Value: Approved works, fixed definitions of merit
Dynamic Value: New voices, evolving forms, lived expression
Failure Mode: The canon stopped listening while culture kept changing
Outcome: Culture moved on and the canon became optional
The canon failed by preserving taste instead of renewing it.

Corporate Process Frameworks vs. Real Work
Static Value: Documented workflows, compliance, predictability
Dynamic Value: Judgement, adaptation, problem-solving under pressure
Failure Mode: Process replaced understanding
Outcome: Workarounds, shadow processes, quiet disengagement
Process failed when it tried to stand in for thinking.

SEO-Driven Content Farms vs. Human Knowledge Seeking
Static Value: Keywords, ranking signals, volume
Dynamic Value: Insight, experience, explanation
Failure Mode: Content was written for machines, not people
Outcome: Search distrust and migration to forums and communities
SEO failed when optimization replaced understanding.

Social Media Feeds vs. Human Social Interaction
Static Value: Engagement metrics, ranking algorithms
Dynamic Value: Conversation, trust, shared context
Failure Mode: Optimisation distorts how people relate to each other
Outcome: Fatigue, distrust, retreat to smaller spaces
Feeds fail by optimizing attention instead of relationships.

Corporate Agile vs. Craft-Based Software Development
Static Value: Ceremonies, velocity metrics, roles
Dynamic Value: Judgement, design sense, technical ownership
Failure Mode: Process theatre replacing adaptability
Outcome: Teams comply outwardly and ignore it inwardly
Agile fails when it becomes ritual instead of practice.

Learning Management Systems vs. Learning
Static Value: Completion rates, standard curricula
Dynamic Value: Curiosity, mentorship, struggle
Failure Mode: Education was reduced to compliance
Outcome: Parallel learning outside formal systems
LMSs fail when measurement replaces understanding.

Enterprise SaaS Monoculture vs. Local Problem Solving
Static Value: Standardisation, vendor "best practices"
Dynamic Value: Domain knowledge, situational fit
Failure Mode: One size was forced into many contexts
Outcome: Shadow IT and brittle systems
Monocultures fail by freezing choice.

It honestly just goes on and on, but across all cases the same things are generally happening:

Whenever static value demands obedience without lived Quality, culture responds with indifference towards that thing, not outright rebellion.

Warning Signs

We've seen these patterns play out over and over with each new technological expansion, as well as across less technological domains like planning, social media, politics, and more. The patterns are so prevalent that you can reduce them to a series of warning signs that "something" is likely on the same path.

If 3+ of these are true you can generally expect:

A more realistic AI timeline

I'm sure you've read at least one of the doomsday, apocalyptic narratives online about how AI is going to destroy the world and how we'll all be running around like peasants in billionaire tech fiefdoms, ruled over from their bubbles on Mars. Scare-mongering doesn't do anyone any good.

I'm throwing in what feels like a more realistic (and simpler) timeline based on historical reactions to technological change.


Phase 1 - Adoption (0-3 years)

"This is magic"

⇒ Dynamic quality is outsourced, not yet missed


Phase 2 - Saturation (3-6 years)

"Everything sounds the same"

⇒ Static value overwhelms people's dynamic experience


Phase 3 - Alienation (6-10 years)

"I didn't do this, the system did"

⇒ People feel the loss of Dynamic Quality


Phase 4 - Bypass (10-15 years)

"We don't use AI for that"

⇒ Dynamic Quality reasserts moral priority


Phase 5 - Relegation (15-20 years)

"It's just plumbing"

⇒ Static patterns stabilized under Dynamic control


Phase 6 - Irrelevance (20+ years)

"Why did we think this mattered?"

⇒ Dynamic Quality moves on but static AI patterns remain behind


AI isn't going to fail because it's wrong; it'll fail because it answers questions long after culture has stopped caring about the answers.

A Standard Quality Trajectory

Whenever a technology optimizes behaviour without honouring lived judgement, people don't fight it; they quietly stop caring and work around it.

Phase: Past Tech ⇒ AI
Promise: "This will scale, standardize, optimize" ⇒ "This will think for you"
Static Capture: Metrics, rankings, and frameworks harden ⇒ Models train on frozen past data
Moral Authority: "The process says so" ⇒ "The model says so"
Dynamic Loss: Judgement, taste, and responsibility erode ⇒ Authorship, meaning, and agency erode
Bypass: Shadow systems, human curation ⇒ Human-only spaces, AI bans
Residue: Becomes background tooling ⇒ Becomes invisible infrastructure
  1. AI is more volatile because it targets intellectual and creative domains, not just coordination
  2. It removes the felt act of thinking, not just choice
  3. It scales static frozen intellect faster than culture can absorb

AI doesn't just optimize work; it full-on short-circuits innovation and the formation of change altogether.

Past Systems ⇒ optimized around human judgement

AI ⇒ attempts to replace human judgement itself

This creates a much more volatile, and potentially faster, cultural response, framing AI as a deep threat rather than an annoying inconvenience. Other historical shifts, like smartphones or Victorian values, have played out (and are still playing out) over extremely long timelines because the removal of judgement and Dynamic Quality has been slow and sometimes insidious. With AI, what it's going to do is right out there, front and center, unabashedly.

How we experience AI

AI is primarily experienced as static intellectual pattern.

Humans experience their own work as partly dynamic quality.

When we view AI through this lens, we realize that the backlash isn't against the efficiency of AI itself. It's against the displacement of a lot of things that we know we value, and that we know drive the evolution of our species. From the perspective of quality, productivity is a static metric, but meaning comes from dynamic engagement.

AI improves static value (speed, scale, consistency) but at the cost of removing the human from the moment when quality is actually felt.

"It works" is not equivalent to "I experienced Quality doing it"

This arrangement creates alienation, not resistance to technology itself. People don't fear the replacement of labour; they fear the replacement of their judgement.

Dynamic quality requires choice, responsibility, and risk. But AI introduces:

This violates a deep quality intuition that Moral decisions should hurt a little.

Domain ⇒ Reaction ⇒ Why
Art & Writing ⇒ Strong ⇒ DQ-heavy, identity-linked
Software ⇒ Mixed ⇒ Craft vs. abstraction tension
Operations ⇒ Weak ⇒ Already static-dominated
Governance ⇒ Strong ⇒ Moral authority challenged

The more a role derives meaning from Dynamic Quality, the stronger the resistance to AI will be. Looking at the table above, we can almost predict where AI will make the most traction.

Getting down to the real underlying issue

AI freezes today's understanding and calls it best practices.

This is static value claiming moral supremacy which is a classic signal of cultural stagnation. People intuitively sense that AI may optimize the past, while quietly suffocating the future.

People resist AI not because it lacks intelligence, but because it threatens the fragile space where Dynamic Quality turns into meaning.

It poses real threats to jobs, software quality, and people's ability to express themselves creatively. But beyond that, people can feel that something static, frozen, and built on top of a corpus of knowledge that is feeding itself its own information is a seismic threat to our culture's future.

Circling Back

If we go back to the long litany of complaints that surface about AI, we can potentially map them down to more specific issues where there's instability in the balance between static and dynamic, and what the specific cost is that causes that imbalance.

Epistemic Fragility ⇒ Static correctness masquerading as understanding
Lack of System Context ⇒ Static local optimisation overriding holistic judgement
Energy-Value Imbalance ⇒ Static efficiency metrics crowding out lived value
Accountability Dilution ⇒ Static authority displacing personal responsibility
Cognitive Atrophy ⇒ Dynamic Quality starved by premature optimization
Psychological Alienation ⇒ Loss of dynamic meaning in the act of work
Homogenization of Outputs ⇒ Static pattern replication suppressing creative variation
Frozen Past Bias ⇒ Static historical norms resisting present-day judgement
False Authority ⇒ Static probability elevated above human discretion
Governance Lag ⇒ Static institutions unable to keep pace with dynamic change
Tool-Role Confusion ⇒ Static tooling promoted to moral decision-maker
Inappropriate Workflows ⇒ Dynamic judgement reduced to mechanical signalling
Enshittification ⇒ Static optimisation overwhelming human-quality spaces

Each of these concerns appears different on the surface, but they all describe the same failure mode: static value systems expanding into spaces where dynamic judgement is essential.

Let's have a look at the pro side's arguments through the lens of our static-dynamic framework of thought and see how those arguments stand up to the same treatment.


Leverage of Expertise

Static gain
Codifies past expert patterns at scale
Holds when
Expertise is stable, well-understood, and repeatable
Breaks when
Expertise requires situational judgement or moral trade-offs
Failure Mode
Static expertise replaces apprenticeship and growth

Acceleration of Iteration

Static gain
Faster cycling through known solution spaces
Holds when
Iteration is exploratory but bounded
Breaks when
Speed substitutes for reflection
Failure mode
Velocity displaces learning

Reduction of Mechanical Load

Static gain
Removes low-value cognitive friction
Holds when
Tasks are truly mechanical
Breaks when
"Mechanical" work is actually where understanding forms
Failure Mode
Premature abstraction starves out Dynamic Quality

Error Surface Reduction

Static gain
Detects known error classes reliably
Holds when
Errors are formal and enumerable
Breaks when
Correctness depends on context or intent
Failure mode
False confidence replaces vigilance

Search and Recall at Scale

Static gain
Compresses institutional memory
Holds when
Recall is a bottleneck
Breaks when
Synthesis is mistaken for understanding
Failure mode
Knowing about replaces knowing why

Accessibility and Inclusion

Static gain
Lowers entry barriers
Holds when
AI scaffolds learning
Breaks when
It substitutes for skill formation
Failure mode
Access without agency

Parallelism of Thought

Static gain
Explore multiple known patterns quickly
Holds when
Options are evaluated by humans
Breaks when
Selection is automated
Failure mode
Quantity replaces discernment

Standardization Where Variation Adds No Value

Static gain
Reduces accidental complexity
Holds when
Variation truly adds no value
Breaks when
Standards encroach on judgement zones
Failure mode
Taylorism in cognitive work

Economic Efficiency

Static gain
Lowers marginal cost of outputs
Holds when
Outputs are commodities
Breaks when
Meaning and responsibility matter
Failure mode
Cheap output, expensive disengagement

Human-in-the-loop Augmentation

Static gain
Tool remains subordinate
Holds when
Human authority is explicit and real
Breaks when
The loop becomes ceremonial
Failure mode
Accountability theater

Shift from Recall to Judgement

Static gain
Humans can "focus on judgement"
Holds when
Judgement is protected and exercised
Breaks when
Judgement is gradually automated
Failure mode
Deskilling disguised as empowerment

There's a pattern to these as well, but the thing to really focus on is that they can only hold so long as the following statements remain true:

  1. AI remains subordinate
  2. AI remains optional
  3. AI is reversible
  4. AI is visibly non-authoritative

Anyone can see that these conditions are fragile. They're subject to so many factors that could violate them, economic incentives chief among them, that they could very easily be broken one at a time, or all at once, causing AI to slip into the failure modes described above.

As an experienced software engineer (almost three decades now), I recognize that some people need to go down those roads to feel it for themselves, and you should let them, so they build their ability to recognize those patterns and learn to feel them. At the same time, I absolutely hate inefficiency. I don't like going down the same road I explored a year, two years, or ten years ago when I know the outcome won't have changed. So much of software engineering is pattern recognition, and having lived through tech from the days of the Commodore 64, I've seen a lot of patterns repeated, over and over, with roughly the same results. It doesn't surprise me that people are as anti-AI as they are, because there's an innate recognition here of old, repeated patterns we've seen before.

Closing

Across the comparisons explored here, the same patterns repeat. Each begins with genuine insight, hardens into a system of static values, and then overreaches by claiming moral or intellectual authority over human judgement. What comes after isn't the immediate collapse of society; it's the withdrawal of society through disengagement, workarounds, loss of trust, and eventual cultural bypass. The systems persist, but only as infrastructure; they're no longer sources of meaning or legitimacy. Like Victorian morals, they're still around, just relegated to spaces where we can safely ignore them and not have to interact with them.

Seen through that lens, resistance to AI isn't reactionary, and it's not a replay of past technological fear. It's a response to an old, familiar pattern and failure mode: the elevation of frozen understanding over lived judgement. Hallucinations, accountability gaps, environmental cost, mental health impacts: these aren't isolated, simple flaws that you can just patch out in the next release. They're early signs of an extremely deep imbalance, where optimisation displaces responsibility and efficiency crowds out people's ability to participate.

This also explains why AI's story isn't going to be a clean one of dominance or rejection. Like the things that have come before, AI will survive, but not in the form its advocates envision. Systems that present themselves as authorities, replacements, or moral arbiters will see growing resistance over time as they're quietly routed around. Systems that are subordinate (tools that scaffold judgement but don't replace it) will endure. Acceptance will come through restraint, not persuasion or inevitability.

The open question, then, isn't whether AI will improve, or whether its costs can be reduced. Those issues are more or less irrelevant and are at the same level as the pros and cons that are constantly regurgitated. It's whether we are willing to keep judgement human, responsibility local, and meaning intact. The outcome of that will ultimately decide if the positives hold and the negatives are overcome or become irrelevant.

History suggests that when technologies forget their place, society doesn't argue them out of existence, it just moves on. There is nothing so special about AI that it will be an exception to that rule.