How Innovation Disappears Without Anyone Noticing. AI quietly becomes the judge of what’s worth trying.

Two paths to innovation with AI.

Most of what I hear about AI in business and education falls into two camps. One wants to go all in. Adopt everything, automate everything, let the model drive. The other wants to ban it. Block it, police it, treat it like a threat to learning, work, and trust.

I understand both instincts. But both are incomplete. The biggest risk isn’t solved by pretending AI doesn’t exist, and it isn’t solved by letting AI become the default operating system for creativity. The real danger is quieter, and it doesn’t require you to be a heavy AI user to feel it.

We’ve been here before. Social media promised connection and creativity. For a while, it delivered. What it quietly took in return (depth of attention, tolerance for uncertainty, ability to sit with an idea before seeking validation) didn’t disappear in a single moment. It eroded through a million individual rational decisions until dependency on external validation became the walls we stopped seeing.

We watched it happen to our teenagers first (likes, follower counts, compulsive checking, anxiety when it didn’t come). A generation learned to measure the worth of a thought, a face, a body, a life by how many strangers approved of it. And not just one generation. How often do I now catch myself wondering whether people will “like” this when I post it on LinkedIn?

Some of what was lost with social media (certain habits of mind, expectations of privacy, a quality of unmediated experience) turned out not to be recoverable. How about confidence, patience, trust in instinct, feeling comfortable in your own skin? We didn’t notice it going. That’s a pattern worth considering as AI moves into the center of how we work, create, and decide.

The real danger isn’t takeover. It’s permission.

Innovation doesn’t die because AI becomes “too smart.” It dies when AI becomes the permission structure.

When AI is the first draft and the final judge, teams begin to internalize a new rule. “If the model didn’t suggest it, it’s probably not worth trying.” That sounds dramatic, but in practice it shows up as premature convergence. Ideas get smoothed down into something plausible, defensible, and safe. The walls are still there. They’re just harder to see.

Creative leaps disappear, not because people lack imagination, but because they stop trusting imagination that can’t be justified by the machine.

But there’s a second, less visible danger that compounds this one. It’s not just about what happens inside teams. It’s about what happens to the ideas that do get conceived once they enter a world where AI drives gatekeeping: not just on publishing platforms and in funding algorithms, but inside organizations themselves.

It shows up in the manager who runs a proposal through an AI tool before greenlighting it. The client who asks for validation before approving an original direction. The colleague who reports back that an idea “scored low on feasibility.” The hiring process that screens for the pattern of past success rather than the shape of future potential.

At every layer, the filter is the same: does this match what has worked before? Radical ideas may still get born. The problem is they have a harder time surviving the rooms they have to pass through, because of the invisible walls created by AI. The threat to innovation is thus both psychological and infrastructural.

The third way: humans in the loop, with a method

This is why I’m convinced the answer isn’t all-out AI or zero AI. The answer is AI + human judgment, guided by a method that protects what AI can’t replace.

Design thinking can be that method. “Design” is most associated with how things look, but “design thinking” is a human-centered problem-solving process. It’s a disciplined way of framing questions, testing assumptions, and learning from reality that applies to anything: a product, a strategy, a curriculum, a client relationship, a hiring decision. Including problems where no data exists. Where the market hasn’t formed, the behavior hasn’t been measured, the future hasn’t happened.

AI requires data to have an opinion. Humans can imagine a world that doesn’t exist yet and work backwards from it.

That capacity, to conceive of something genuinely new and then build toward it, is what the permission structure quietly erodes. Here is why design thinking works. It forces a loop that AI can’t complete on its own:

Frame → Prototype → Test → Learn

Crucially, it keeps validation anchored in human reality rather than model confidence. It doesn’t just keep humans in the loop. It gives humans a reason to trust their own judgment over the machine’s. It helps you see the walls, which is a prerequisite to getting past them.

Map → Territory → Leap (why AI gets stuck in the Map)

I keep coming back to a simple model, one inspired by my bookshelf: innovation moves through three stages, the Map, the Territory, and the Leap.

AI is incredible at the Map: patterns, dashboards, benchmarks, past behavior, summarized research, quick drafts. The Map feels safe. Professional. Defensible.

But certainty is not the same thing as insight. Innovation doesn’t happen when you stay inside the Map. It happens when you enter the Territory (human context, constraints, emotion, workarounds, meaning) and then make a Leap toward a future the data hasn’t seen yet.

The deeper problem is that AI systems trained on existing data are fundamentally engines of interpolation. LLMs are built to find the most plausible path between what already exists. That’s what makes them sound so confident. They’re great at optimizing the room you’re already in with small incremental improvements.

This makes AI structurally ill-suited for the Leap. The Leap, by definition, goes somewhere the training data doesn’t. It goes to the adjacent possible: the space just beyond the walls of the known, close enough to reach but far enough that no algorithm has mapped it yet. Anyone who’s studied business has seen this pattern of disappearing innovation.

Organizations built on early innovations created by visionary founders stagnate into small improvements and eked-out efficiencies, only to be upset by a startup that sees past the corporate walls entirely.
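To make “engines of interpolation” concrete, here’s a toy sketch (my own illustration in Python, nothing like a production LLM): a bigram model that, like an LLM at vastly larger scale, can only rank continuations it has already seen.

```python
from collections import Counter, defaultdict

# Toy sketch, not the author's model: a tiny bigram "language model"
# trained on an invented corpus, used only to illustrate interpolation.
corpus = ("we ship the safe idea we ship the safe idea "
          "we ship the bold idea").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_plausible_next(word: str) -> str:
    """Return the continuation seen most often in the corpus."""
    return following[word].most_common(1)[0][0]

# The model interpolates within the Map: it picks the likeliest known
# path, but it has no mechanism to emit a word outside the corpus.
print(most_plausible_next("the"))  # -> 'safe' (seen twice, 'bold' once)
```

Swap the corpus for the internet and the counts for a neural network and the scale changes, but the constraint doesn’t: the output is always a recombination of what was already there.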

Where AI flattens innovation

AI can generate a hundred ideas in seconds. That’s useful. It’s also exactly how you get AI sameness: answers without discovery.

You see it when teams substitute model output for human contact:

  • Personas that feel “realistic” but were never met
  • “Best practices” that replace lived tensions
  • Polished concepts that fit a category but don’t fit a context
  • Strategies that sound smart yet feel interchangeable

Novelty is cheap now. What’s rare is originality grounded in human reality and shaped by thoughtful judgment.

AI doesn’t create the boundaries. It just makes them feel load-bearing. The first act of innovation is realizing that they’re not. Most are assumptions calcified into convention. Design thinking is the renovation process: a willingness to ask which walls hold up the roof and which are just where someone put furniture forty years ago. Knock the right ones down and you get a more “open concept” mind and a clearer path to something nobody imagined before.

Why design thinking helps (in practical terms)

Design thinking mitigates the risk of losing innovation because it changes what you optimize for and where you look for proof. And it works whether you’re new to AI or already deep in it. It’s less about correcting a habit than building the right one from the start.

It prioritizes problem framing before solution picking

AI will happily answer any question you ask. Design thinking asks whether you’re asking the right question in the first place. Reframing is where originality often lives. If you only optimize inside an inherited frame, you’ll get better and better at solving the wrong problem. Reframing is also how you first become aware of the walls. The assumptions so embedded in how you’re thinking that you stopped noticing they were assumptions at all.

It replaces “AI says” with “reality says”

If AI becomes the validator, teams outsource judgment. Design thinking moves validation back into the world of prototypes and user behavior. You don’t need permission from a model when you can say, “We built something small and tested it. Here’s what people actually did.” This is also the antidote to the infrastructure problem. When you can demonstrate real-world proof, you don’t need the algorithm’s blessing to proceed.

It trains the habit AI can quietly erode: human agency

One of the subtler effects of AI is the habit of letting the tool do the thinking you used to do yourself. You reach for it before you’ve sat with the problem, before you’ve let the uncomfortable question breathe. It happens gradually, not all at once. Design thinking builds the opposite muscle: observe, synthesize, imagine, test, learn. Repeat. These aren’t soft skills. They’re the capabilities that keep unexpected ideas alive long enough to find an audience. Design thinking keeps you oriented toward the adjacent possible rather than the algorithmic average.

Two paths (a quick mental model)

If you want a simple operating principle: AI is a powerful teammate. It shouldn’t be the judge. Whether you’re just beginning to use AI or have been using it for a while, it helps to see the two patterns it tends to create — and to decide deliberately which one you want.

Two paths to innovation with AI. Which path will you follow?

The first path is faster and cleaner. If you’re new to AI, it’s the one that forms without your noticing. Ask, get an answer, justify it, move on. It’s not wrong exactly, but it keeps you inside the walls, optimizing the known and mistaking the plausible for the possible. The second path can feel slower at the start. But it protects originality because it stays anchored in the Territory, where the proof of concept is the world, not just the algorithm.

Innovation isn’t dead. But it’s no longer the default.

In an AI world, “good enough” will be cheap. Polished content, plausible strategies, and decent ideas generated fast, living comfortably within the walls of what’s already been tried.

The advantage shifts to people and teams who can see past those walls, step into the Territory, and make Leaps toward the adjacent possible. The futures that are close enough to reach but too original for any model to have predicted. The subtler challenge is that the infrastructure around innovation (how ideas get funded, published, amplified, and taken seriously) will increasingly be shaped by AI.

This is why I believe the Freeman College of Management’s Markets, Innovation & Design (MiDE) approach is timely. MiDE trains students to work at the intersection of markets, innovation, and human-centered design. Students learn the method before AI can become their default way of thinking; AI becomes a tool inside a disciplined learning loop. That’s a differentiator that matters now more than ever, as the tendency toward AI-validated solutions quietly narrows what organizations are willing to try.

There is one final irony worth considering. AI itself was born from the human capacity this piece is arguing to protect. Alan Turing imagined a world that didn’t exist yet, with no data to confirm it was possible, and leaped. The question now is whether what the human visionaries of AI built will illuminate that capacity in us or quietly dim it.

Your turn

As AI moves further into your work: what role do you want it playing in how you think and create? The walls we stopped seeing with social media took years to become invisible. With AI, most of us are still early enough to decide what we let form and what we don’t.

A confession.

I wrote this post with AI assistance for research, drafts, and pushing my thinking further than I could alone. At one point, I asked Claude whether a thought was good enough and then caught myself wondering whether it would perform well on LinkedIn. Two different tools. The same instinct. Both optimizing against past data. Neither capable of telling me whether an idea is genuinely new or worth pursuing. I caught myself doing exactly what this post argues against. 

The Better AI Gets, the More Students Need to Strengthen Their Thinking

Image: student mind maps from the MiDE Studio.

Imagine a marketing student who hands in an A-level case study. It has a solid situation analysis, a competent competitive set, sound positioning, and reasonable recommendations.

Now imagine that same student three to six months later. They graduated with a high GPA and landed their dream job. Their manager asks them to analyze why sales have been declining over the last year and make a recommendation.

The student freezes. Not because they’re not smart, but because something essential was never built. In the busyness of interviewing, getting ready to graduate, and enjoying their senior year, the lure of a quick answer from an AI prompt was too strong.

The professor didn’t notice the first time. AI is getting better, AI checkers aren’t always accurate, and AI use is harder to prove now that tools can humanize AI writing. So the student used AI to do all the work on every case assignment. They thought they had found the easy way to their dream job.

The thinking that should have happened was quietly outsourced to AI.

But the answers AI provides for well-known textbook and HBR cases aren’t transferable to the unique situation the company currently faces. The student didn’t learn to research, synthesize, draw insights, and apply critical thinking. They never learned to empathize with customers. They didn’t learn to use AI in ways that increase their value as an employee.

This is hypothetical, but it’s something I think about as I consider how we teach in an AI-assisted world. The issue wasn’t using AI; it was using AI in the wrong way.

Right now, higher education is pulled between two camps. Prohibitionists see AI as a threat to academic integrity. Accelerationists think traditional learning is obsolete. Both sides are arguing about the wrong thing. The more useful question? When students use AI, is it making their thinking stronger or weaker?

Two books helped me see this more clearly: S.I. Hayakawa’s Language in Thought and Action and Angus Fletcher’s Primal Intelligence. Read together, they point toward a framework that’s more useful than a simple “allowed” or “not allowed” policy.

The Map Is Not the Knowledge

Hayakawa’s reminder, “the map is not the territory,” can apply to how students use AI. In a college course, the final deliverable is just a map. The territory is the cognitive struggle. It’s the connections made while wrestling with a real problem, the moments of confusion that eventually resolve into genuine insight.

In the student hypothetical, the case analysis is the map. The manager’s question about the decline in sales is the territory.

When a student writes a case analysis, the learning happens in the hard questions. Who’s this brand actually talking to? What do they feel when they see the ads and use the product? Are there new competitors? Has the market changed? Does the positioning hold up?

If AI answers all those questions, the student gets the coordinates without building the navigation skill. When that gap appears in the real world, it feels like personal failure. What actually happened is that the thinking was outsourced at exactly the moment it needed to happen.

The grade is the map. The cognitive struggle is the territory. AI can help you understand the map, but only you can travel through the territory.

Your Brain Is Not a Recommendation Engine

This is where Fletcher’s work in Primal Intelligence becomes useful for how we think about student learning.

AI runs on correlation (A = B). It looks at what’s already been written and calculates the most probable next word, the most common next move. It’s a Data Brain that’s incredibly fast, but fundamentally a high-speed echo of the past.

Your brain runs on conjecture (A → B). You don’t just see that two things are related. You imagine how one causes the other, asking “Why?” and “What if?” in ways a correlation engine cannot.

AI can analyze 500 brand campaigns and tell you the most common recommendation. That’s correlation: A = B. But only a student who has spent time with the original data, drawing insights from real consumers, can ask: “Why are brands that lean into vulnerability outperforming ones that lead with aspiration?” That’s conjecture: A → B. That’s the thinking that builds a marketer.
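A tiny sketch makes the division of labor concrete. The numbers and variable names here are invented for illustration; the point is only where the machine’s step ends and the student’s begins.

```python
from statistics import correlation  # Pearson's r, Python 3.10+

# Toy sketch with invented numbers: hypothetical ratings for ten campaigns.
vulnerability_tone = [2, 3, 4, 5, 6, 6, 7, 8, 8, 9]
engagement_lift    = [1, 2, 2, 4, 5, 6, 6, 7, 9, 9]

# The "Data Brain" step (A = B): measure how strongly the two co-occur.
r = correlation(vulnerability_tone, engagement_lift)
print(f"A = B: vulnerability and engagement correlate at r = {r:.2f}")

# The computation stops at co-occurrence. The A -> B step, conjecturing
# WHY vulnerability might cause engagement (trust? relatability?) and
# designing a test for that story, is the part only the student can do.
```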

There is a kind of thinking (imaginative, causal, empathic) that AI cannot do for students. If they don’t practice it, they don’t develop it.

When you focus on the grade, using AI to avoid the struggle, you lose the capability.

The 5 Levels of Classroom Integration

Instead of “using AI” or “not using AI,” there’s a more productive question. What level of integration serves the learning objective? Here’s a framework I’ve been developing:

A Five-Level, Multi-Value Approach to AI Integration in Student Learning.

Not every assignment should allow the same level of AI use; the right level depends on the objective and the context.

Make the Invisible Visible

A useful tool that could have helped the hypothetical student is an AI Audit Log. Students record which tool they used, what prompts they gave it, what output they received, and how they verified, modified, or built on that output.

An AI audit log makes AI use visible instead of hidden. It makes students slow down and ask, Am I using this to avoid the thinking, or to deepen it? It also shifts the conversation from “gotcha” enforcement to a learning conversation.

You might ask students to log how they used AI to research a target audience, then trace where they went beyond the AI output. What did they verify? What did they challenge? What human insight did they add? The log becomes evidence of the cognitive work.

An AI Audit Log makes the invisible visible. It shows whether a student is building their thinking or outsourcing it.
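For concreteness, here’s one shape a single log entry might take. This is a minimal sketch; the field names are my own, not a required template, and a spreadsheet row with the same columns would work just as well.

```python
from dataclasses import dataclass

# A minimal sketch of one AI Audit Log entry. The fields mirror what
# the log asks students to record: which tool, what prompt, what
# output, and what the student did with it afterward.
@dataclass
class AuditLogEntry:
    tool: str            # which tool was used, e.g. "Claude"
    prompt: str          # what the student asked
    output_summary: str  # what the tool returned, in brief
    verified: str        # how the output was checked
    challenged: str      # what the student questioned or changed
    human_insight: str   # what the student added beyond the AI

log = [
    AuditLogEntry(
        tool="Claude",
        prompt="Brainstorm competitor categories for a brand audit",
        output_summary="Five categories with example brands",
        verified="Cross-checked two categories against industry reports",
        challenged="Dropped one category that didn't fit this market",
        human_insight="Noticed an underserved segment the list missed",
    )
]
```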

Moving from “Gotcha” to “Growth”

The detect-and-punish model is understandable, but fights the wrong battle. What’s more beneficial is assignment design that makes the learning objective transparent and specifies which level of AI integration is appropriate.

Instead of: “No AI allowed on this assignment” (vague, unenforceable, adversarial)

Try: “For this brand audit, you may use AI at Level 1 (concept clarification) and Level 2 (brainstorming competitor categories), but Levels 3–5 are off-limits because the objective is to develop your own consumer insight framework. Document in an AI Audit Log.”

What Higher Education Should Develop

The hypothetical student in their first job isn’t underprepared in the traditional sense. They can define positioning and list the steps in the strategic marketing process. What they lack is the practiced habit of executing that process.

They also lack the habit of asking “Why?” when looking at market data. They never learned and practiced the imaginative skill of moving from the abstraction down to the lived human experience of the consumer.

In Markets, Innovation & Design (MiDE), we teach marketing students Design Thinking in Business. They learn to navigate “messy” real-world situations, sketching out concepts, processes, and ideas to solve complex problems and fostering a human-centered, empathic approach to innovation. Balancing analytic rigor with creative confidence builds career value through human skills that are less threatened by AI automation.

That’s when marketing, management, and communications education is at its best. When students develop the ability to look at a spreadsheet and see the human story. When they have the capacity to read a consumer insight report and sense what’s missing from it. Students who simply use AI to get the answer will never build the skill to make the imaginative leap from what the data shows to what the brand should do next.

AI can tell you what usually works in a category. It can’t tell you what your specific consumer is feeling right now, or why a campaign that followed every best practice still missed. That’s territory. And it requires a brain that has practiced traveling through it.

AI can tell you what usually works (correlation). Only you can imagine what should work next (conjecture).

For students: Look at your last assignment. Did you use AI to avoid cognitive struggle, or to sharpen your thinking? Your thinking skills are either getting stronger or weaker.

For professors: Look at your next assignment. What’s the learning objective? Which level of AI integration serves it? Can you write the instructions to name the level, explain why, and ask for an AI Audit Log?

The goal isn’t to police AI use. It’s to help students understand when they’re building their human brain skills and when they’re weakening them.

In a world where AI handles correlation, the students who know how to conjecture, who can imagine causal stories the data hasn’t seen yet, are the ones who will be valuable.

About This Post’s Creation

This post was developed in partnership with Claude. I provided the frameworks from Hayakawa and Fletcher, experience from my teaching, and the 5-level scale adapted for education. Claude helped organize and refine.