How Innovation Disappears Without Anyone Noticing. AI quietly becomes the judge of what’s worth trying.

Two paths to innovation with AI.

Most of what I hear about AI in business and education falls into two camps. One wants to go all in. Adopt everything, automate everything, let the model drive. The other wants to ban it. Block it, police it, treat it like a threat to learning, work, and trust.

I understand both instincts. But both are incomplete. The biggest risk isn’t solved by pretending AI doesn’t exist, and it isn’t solved by letting AI become the default operating system for creativity. The real danger is quieter and doesn’t require advanced AI use to feel it.

We’ve been here before. Social media promised connection and creativity. For a while, it delivered. What it quietly took in return (depth of attention, tolerance for uncertainty, ability to sit with an idea before seeking validation) didn’t disappear in a single moment.

It eroded in a million rational decisions until dependency on external validation became walls we stopped seeing.

We watched it happen to our teenagers first (likes, follower counts, compulsive checking, anxiety when validation didn’t come). A generation learned to measure the worth of a thought, a face, a body, a life by how many strangers approved of it. And not just one generation. How often am I thinking about whether people will “like” this when I post it on LinkedIn?

Some of what was lost with social media (certain habits of mind, expectations of privacy, unmediated experience) feels like it’s not recoverable. What about confidence, patience, trust in instinct, feeling comfortable in your own skin? We didn’t notice them leaving. That’s a pattern worth considering as AI moves into the center of how we work, create, and decide.

The real danger isn’t takeover. It’s permission.

Innovation doesn’t die because AI becomes “too smart.” It dies when AI becomes the permission structure.

When AI is the first draft and the final judge, teams begin to internalize a new rule. “If the model didn’t suggest it, it’s probably not worth trying.” That sounds dramatic, but in practice it shows up as premature convergence. Ideas get smoothed down into something plausible, defensible, and safe. AI makes the walls feel like the edge of possible.

Creative leaps disappear, not because people lack imagination, but because they stop trusting imagination that can’t be justified by the machine.

There’s a second, less visible danger that compounds this one. It’s about what happens to the ideas that do get conceived once they enter a world where AI drives gatekeeping. Not just on publishing platforms and funding algorithms, but inside organizations.

It’s a manager who runs a proposal through an AI tool before greenlighting it. A client who asks for validation before approving an original direction. A colleague who reports back that an idea “scored low on feasibility.” A hiring process that screens for a pattern of past success rather than the shape of future potential.

At every layer, the filter is the same: does this match what has worked before? Radical ideas may still get born. The problem is they have a harder time surviving the rooms they have to pass through due to the invisible walls created by AI. Thus, the threat to innovation is psychological and infrastructural.

The third way: humans in the loop, with a method

This is why I’m convinced the answer isn’t all-out AI or zero AI. The answer is AI + human judgment, guided by a method that protects what AI can’t replace.

Design thinking can be that method. “Design” is most associated with how things look. But “design thinking” is a human-centered problem-solving process. It’s a disciplined way of framing questions, testing assumptions, and learning from reality that applies to anything: a product, a strategy, a curriculum, a client relationship, a hiring decision. This includes problems where no data exists. Where the market hasn’t formed, the behavior hasn’t been measured, the future hasn’t happened.

AI requires data to recommend. Humans can imagine a world that doesn’t exist and work backwards to it.

That capacity, to conceive of something genuinely new and then build toward it, is what the permission structure quietly erodes. Here is why design thinking works. It forces a loop that AI can’t complete on its own:

Frame → Prototype → Test → Learn

Crucially, it keeps validation anchored in human reality rather than model confidence. It doesn’t just keep humans in the loop. It gives humans a reason to trust their own judgment over the machine’s. It helps you see the walls, which is a prerequisite to getting past them.

Where AI flattens innovation

AI systems trained on existing data are fundamentally engines of interpolation. They’re built to find the most plausible path between what already exists. That’s what makes them sound so confident. They’re great at optimizing the room you’re already in with incremental improvements. This makes them structurally ill-suited for the leap to the adjacent possible: the space just beyond the walls of the known, close enough to reach but far enough that no algorithm has mapped it yet.

Anyone who has studied business recognizes the pattern. Organizations built on early innovations by visionary founders stagnate into small improvements and eked-out efficiencies, only to be upset by a startup that sees past the corporate walls entirely. AI, deployed as the primary innovation engine, accelerates that stagnation. It generates answers without discovery.

You see it when teams substitute model output for human contact: personas never met, best practices that replace lived tensions, polished concepts that fit a category but not a context, strategies that sound smart yet feel interchangeable.

Novelty is cheap now. What’s rare is originality grounded in human reality. AI doesn’t create the boundaries. It just makes them feel load-bearing. The first act of innovation is realizing they’re not. Most are assumptions calcified into convention. Design thinking is the renovation process with a willingness to ask which walls actually hold up the roof and which are just where someone put the furniture forty years ago. Knock the right ones down and you get an open concept mind and a clearer path to something nobody imagined before.

Why design thinking helps (in practical terms)

Design thinking mitigates the risk of losing innovation because it changes what you optimize for and where you look for proof. And it works whether you’re new to AI or already deep in it. It’s less about correcting a habit than building the right one from the start.

Design thinking prioritizes problem framing before solution picking

AI will happily answer any question you ask. Design thinking asks whether you’re asking the right question in the first place. Reframing is where originality often lives. If you only optimize inside an inherited frame, you’ll get better and better at solving the wrong problem. Reframing is also how you first become aware of the walls. The assumptions so embedded in how you’re thinking that you stopped noticing they were assumptions at all.

Design thinking replaces “AI says” with “reality says”

If AI becomes the validator, teams outsource judgment. Design thinking moves validation back into the world of prototypes and user behavior. You don’t need permission from a model when you can say, “We built something small and tested it. Here’s what people actually did.” This is also the antidote to the infrastructure problem. When you can demonstrate real-world proof, you don’t need the algorithm’s blessing to proceed.

Design thinking trains the habit AI can quietly erode: human agency

One of the subtler effects of AI is the habit of letting the tool do the thinking you used to do yourself. You reach for it before you’ve sat with the problem, before you’ve let the uncomfortable question breathe.

Design thinking builds the opposite muscle: observe, synthesize, imagine, test, learn. Repeat. These aren’t soft skills. They’re the capabilities that keep unexpected ideas alive long enough to find an audience. Design thinking keeps you oriented toward the adjacent possible rather than the algorithmic average.

Two paths (a quick mental model)

If you want a simple operating principle: AI is a powerful teammate. It shouldn’t be the judge. Whether you’re just beginning to use AI or have been using it for a while, it helps to see the two patterns it tends to create and to decide deliberately which one you want.

Two paths to innovation with AI. Which path will you follow?

The first path is faster and cleaner. If you’re new to AI, it’s the one that forms without you noticing. Ask, get an answer, justify it, move on. It’s not wrong, exactly, but it keeps you inside the walls, optimizing the known and mistaking the plausible for the possible. The second path can feel slower at the start. But it protects originality because it stays anchored in the Territory, where the proof of concept is the world, not just the algorithm.

Innovation isn’t dead. But it’s no longer the default.

In an AI world, “good enough” will be cheap. Polished content, plausible strategies, and decent ideas generated fast, living comfortably within the walls of what’s already been tried.

The advantage shifts to people and teams who can see past those walls, step into the Territory, and make leaps toward the adjacent possible. The futures that are close enough to reach but too original for any model to have predicted. The subtler challenge is that the infrastructure around innovation (how ideas get funded, published, amplified, and taken seriously) will increasingly be shaped by AI.

This is why I believe the Freeman College of Management’s Markets, Innovation & Design (MiDE) approach is timely. MiDE trains students to work at the intersection of markets, innovation, and human-centered design, so that AI becomes a tool inside a disciplined learning loop before it becomes the default way of thinking.

There is one final irony worth considering. AI itself was born from the human capacity this piece is arguing to protect. Alan Turing imagined a world that didn’t exist yet, with no data to confirm it was possible, and leaped. The question now is whether what human AI visionaries built will illuminate that capacity in us or quietly dim it.

Your turn

As AI moves further into your work: what role do you want it playing in how you think and create? The walls we stopped seeing with social media took years to become invisible. With AI, most of us are still early enough to decide what we let form and what we don’t.

A confession.

I wrote this post with AI assistance for research, drafts, and pushing my thinking further than I could alone. At one point, I asked Claude whether a thought was good enough and then caught myself wondering whether it would perform well on LinkedIn. Two different tools. The same instinct. Both optimizing against past data. Neither capable of telling me whether an idea is genuinely novel or worth pursuing. I caught myself doing exactly what this post argues against. 

Beyond the Binary: Your Narrative Brain vs. AI’s Rear-View Mirror

I’ve been forcing myself to regularly read physical books again.

Not articles. Not threads. Not AI summaries. Actual books. Cover to cover. It’s my way of reclaiming an attention span fragmented by years of algorithmic feeds designed to keep me scrolling on shallow tidbits.

If AI can consume a library of data in seconds, maybe my competitive advantage is going slower and deeper.

Two books that have been sitting on my shelf are S.I. Hayakawa’s Language in Thought and Action and Angus Fletcher’s Primal Intelligence. The first was written in 1939, the second in 2025. As I read them over several weeks, something clicked.

My brain, the neural synapses Fletcher writes about, made a connection no algorithm would have surfaced: Hayakawa’s framework for “sane” thinking during WWII and Fletcher’s research on how human brains “imagine” new paths or plans in the future.

S.I. Hayakawa’s Language in Thought and Action and Angus Fletcher’s Primal Intelligence. No AI would have picked up these two books and made a connection to imagine a new path forward.

Our Narrative Brain

This is what your Narrative Brain does. It makes imaginative leaps across disparate ideas. It asks, “What if these two things connect?” A semantics book and a neuroscience book written 86 years apart. No dataset, predictive analytics, or AI could have made this creative leap.

It’s a unique capability we risk losing if we don’t understand how to partner with AI correctly.

Many conversations about AI in business and marketing position it as an all-or-nothing proposition. Either AI will and should replace employees, or (because of this threat) we should avoid using AI at all.

In AI lessons from 2025, I shared how I explored AI partnership versus replacement last year. But I still didn’t understand the core biological barriers and benefits.

Hayakawa and Fletcher gave me the answer. Fletcher explained the fundamental difference between how AI processes information and how our brain works. Hayakawa helped me understand the challenges in AI adoption. Both are key to staying sane (and essential) as a knowledge worker in the AI revolution.

Light Switch vs. Dimmer

Hayakawa described two ways of looking at the world. A Two-Value Orientation is like a light switch. It’s binary: people are all evil or all good. Knowledge work should be all human or all AI. When we approach business, marketing or communications this way, we ask “Should we use AI?” and expect a simple Yes or No.

A Multi-Value Orientation, however, is like a dimmer switch. It recognizes that reality exists on a scale. Instead of automatically labeling people as good or evil, we consider nuance like perspective, circumstance, and intent. Instead of asking “If” we should use AI, we ask, “To what degree and in what context is AI appropriate for each task?”

Key Insight: Two-value thinking creates conflict. Multi-value thinking creates a roadmap for collaboration.

Light switch vs. dimmer AI integration. Let’s consider a more nuanced approach to AI integration.

Your Biological Advantage

In his book Primal Intelligence, Angus Fletcher points out a biological truth that changes how we may view AI.

AI runs on transistors that perform Correlation. Its logic is A = B. It looks at massive datasets of the past to see what usually happens. Given A, there’s a 95% chance that B comes next.

If you ask AI for a business or marketing idea, it calculates the statistical probability of which words usually go together. It is, effectively, a high-speed rear-view mirror. It can tell you where the market has been.

Your brain, however, runs on neural synapses that perform Conjecture. Your logic is A → B. You don’t just see two things are typically related. You can imagine a potential causal link. You can look at a set of facts and ask, “What if we did the opposite?” or “Why can’t these go together?”

You can also see possible ways forward when faced with missing, incomplete, or unexpected information. AI, by contrast, is prone to hallucinations when faced with a lack of data.

For example, AI looks at the data and says: “90% of successful luxury brands use minimalist black-and-white logos.” That’s correlation. But a human looks at a crowded, monochrome market and asks: “What if we used neon yellow to signal a different kind of rebellion?” AI follows the trend to be safe. You break the trend to be noticed.

When correlation said people wanted better keyboards on their phones, Steve Jobs used conjecture to imagine a different story: a single piece of glass that could hold the internet. That strategy drove Apple to fill in the gaps to make that “improbable” narrative happen. AI could not have “imagined” that possibility based on previous data.

AI is a map of the past (Correlation). You are the driver of the future (Conjecture).

The Abstraction Ladder

Hayakawa also taught us about the Ladder of Abstraction. For business and marketing, the top would be the “Map,” with vague labels like “Customer Satisfaction.” At the bottom is the “Territory”: the actual, concrete facts and interactions with real people.

AI is great at the top of the ladder. It can summarize the Map of “General Trends” all day. But because it lacks a physical body and lived experience (what Fletcher calls “Embodied Intelligence”), it can’t feel the Territory. Stepping into a customer’s perspective to understand their motives is a human act. AI can track a click, but it can’t feel a wince.

It is why your human empathy can’t be outsourced to AI.

Example: AI can tell you “Gen Z engagement is down 15%.” That’s a top-of-the-ladder abstraction. You climb down to the Territory by observing and talking to Gen Z customers. By understanding their lived experience, you sense an erosion in trust or a shift in culture that never hits a data log. That’s Territory AI can’t access without embodied experience.

A multi-value approach uses AI to handle the high-level abstractions, which frees up your human brain to climb down the ladder to the real lived experience. We use our Narrative Brain to find the specific, human story, the A → B sequence, that makes a brand feel real.

In a world where AI levels the data playing field, competitive advantage returns to the humans companies employ. Your edge is no longer who has the most data. You’ll need people who can look at a spreadsheet and still see the human story.

Instead of reacting to the past, you’ll begin imagining new futures and designing marketing actions to make them happen.

5 Levels of AI Integration

To help us navigate this, I created a 5-level scale of AI Integration based on multi-value orientation and our biological advantage. Not every task deserves Level 5 automation. As a professional you’ll know when to turn the dimmer switch up or down based on the human value required.

5 levels of AI integration with a multi-value orientation that leverages our brain’s primal intelligence advantage.

Now It’s Your Turn

If you’ve been avoiding AI, start at Level 1. This week, ask it to proofread an email you’ve already written. That’s it. You’re still the author. You’re still making all the decisions. Notice how it feels, what it catches and misses.

Then try Level 2. Or, if you’re already doing that, try higher. Try deep research, brainstorming, outlining, drafting, feedback, or variations with a reasoning model. Don’t know how? Ask AI.

The goal isn’t to become a better prompt engineer. It’s to become a better thinker.

Become someone who knows when to leverage speed and when to trust your human ability to imagine what doesn’t exist yet. Leverage AI to speed up low value tasks to free up more time for your unique human contribution.

This is why I’m back to physical books. Reading deeply is training for your Narrative Brain. It builds the stamina to stay “low on the ladder” and follow complex stories in the market, in your life, and in our world. Real life is not black and white, ones and zeros.

It ensures that when you step into a meeting, you aren’t just looking at the rear-view mirror of data. You’re the one who can internalize the customer’s perspective and imagine a future the data hasn’t seen. That’s true innovation.

Two Books on a Shelf

Remember those two books on my shelf? No AI would have recommended I read them together. No algorithm would have surfaced their connection. But my Narrative Brain, the same one you use every day in your work, made an imaginative leap that created this framework.

That’s what makes you irreplaceable: the ability to make connections that don’t exist in any dataset. Only a human can see the gray areas where the next big idea usually hides.

AI can tell you the most likely next word, but only you can imagine the most meaningful next chapter.

Moving from a two-value “Either/Or” mindset to a multi-value “Degrees-of” mindset enables you to start imagining and creating a better future with your narrative brain.

About This Post’s Creation

This post was developed in partnership with Google Gemini 3.0 and Claude Sonnet 4.5. Both helped organize and refine. The connection of General Semantics and Narrative Science is my own, one that came from the kind of deep, sustained reading and cross-pollination of ideas that only a human narrative brain can produce.