The Better AI Gets, the More Students Need to Strengthen Their Thinking


Imagine a marketing student who hands in an A-grade case study. It has a solid situation analysis, a competent competitive set, sound positioning, and reasonable recommendations.

Now imagine that same student 3-6 months later. They graduated with a high GPA and landed their dream job. Their manager asks them to analyze why sales have been declining over the past year and make a recommendation.

The student freezes. Not because they’re not smart, but because something essential was never built. In the busyness of interviewing, preparing to graduate, and enjoying their senior year, the quick answer from an AI prompt was too tempting.

The professor didn’t notice the first time. AI is getting better, AI checkers aren’t always accurate, and AI use is harder to prove now that tools can humanize AI writing. So the student used AI to do all the work on every case assignment. They thought they had found the easy way to their dream job.

The thinking that should have happened was quietly outsourced to AI.

But the answers AI provides for well-known textbook and HBR cases aren’t transferable to the unique situation the company now faces. The student didn’t learn to research, synthesize, draw insights, and apply critical thinking. They never learned to empathize with customers. They didn’t learn to use AI in ways that increase their value as an employee.

This scenario is hypothetical, but it’s something I think about as I consider how we teach in an AI-assisted world. The issue wasn’t using AI; it was using AI in the wrong way.

Right now, higher education is pulled between two camps. Prohibitionists see AI as a threat to academic integrity. Accelerationists think traditional learning is obsolete. Both sides are arguing about the wrong thing. The more useful question? When students use AI, is it making their thinking stronger or weaker?

Two books helped me see this more clearly: S.I. Hayakawa’s Language in Thought and Action and Angus Fletcher’s Primal Intelligence. Read together, they point toward a framework that’s more useful than a simple “allowed” or “not allowed” policy.

The Map Is Not the Knowledge

Hayakawa’s reminder, “the map is not the territory,” can apply to how students use AI. In a college course, the final deliverable is just a map. The territory is the cognitive struggle. It’s the connections made while wrestling with a real problem, the moments of confusion that eventually resolve into genuine insight.

In the student hypothetical, the case analysis is the map. The manager’s question about the decline in sales is the territory.

When a student writes a case analysis, the learning happens in the hard questions. Who’s this brand actually talking to? What do they feel when they see the ads and use the product? Are there new competitors? Has the market changed? Does the positioning hold up?

If AI answers all those questions, the student gets the coordinates without building the navigation skill. When that gap appears in the real world, it feels like personal failure. What actually happened is that the thinking was outsourced at exactly the moment it needed to happen.

The grade is the map. The cognitive struggle is the territory. AI can help you understand the map, but only you can travel through the territory.

Your Brain Is Not a Recommendation Engine

This is where Fletcher’s work in Primal Intelligence becomes useful for how we think about student learning.

AI runs on correlation (A = B). It looks at what’s already been written and calculates the most probable next word, the most common next move. It’s a Data Brain that’s incredibly fast, but fundamentally a high-speed echo of the past.

Your brain runs on conjecture (A → B). You don’t just see that two things are related. You imagine how one causes the other, asking “Why?” and “What if?” in ways a correlation engine cannot.

AI can analyze 500 brand campaigns and tell you the most common recommendation. That’s correlation (A = B). But only a student who has spent time in the original data, drawing insights from real consumers, can ask: “Why are brands that lean into vulnerability outperforming ones that lead with aspiration?” That’s conjecture (A → B). That’s the thinking that builds a marketer.

There is a kind of thinking (imaginative, causal, empathic) that AI cannot do for students. If they don’t practice it, they don’t develop it.

When you focus on the grade and use AI to avoid the struggle, you lose the capability.

The 5 Levels of Classroom Integration

Instead of “using AI” or “not using AI,” there’s a more productive question. What level of integration serves the learning objective? Here’s a framework I’ve been developing:

[Infographic: A Five-Level Multi-Value Approach to AI Integration in Student Learning. Click the image to download a PDF.]

Not every assignment should allow the same level of AI use; the right level depends on the objective and the context.

Make the Invisible Visible

A useful tool that could have helped the hypothetical student is an AI Audit Log. Students record which tool they used, what prompts they gave it, what output they received, and how they verified, modified, or built on that output.

An AI Audit Log makes AI use visible instead of hidden. It makes students slow down and ask, “Am I using this to avoid the thinking, or to deepen it?” It also shifts the conversation from “gotcha” enforcement to a learning conversation.

You might ask students to log how they used AI to research a target audience, then trace where they went beyond the AI output. What did they verify? What did they challenge? What human insight did they add? The log becomes evidence of the cognitive work.

An AI Audit Log makes the invisible visible. It shows whether a student is building their thinking or outsourcing it.
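
To make the log concrete, here is a minimal sketch of what a single entry could capture. It’s written in Python purely as an illustration; the field names and the example values are my own assumptions, not a prescribed format.

```python
# A minimal sketch of one AI Audit Log entry (illustrative only; the field
# names and example values are assumptions, not a prescribed format).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AuditLogEntry:
    tool: str              # which AI tool was used
    prompt: str            # the prompt the student gave it
    output_summary: str    # short summary of what the AI returned
    verified: str          # how the output was fact-checked
    challenged: str        # what the student pushed back on or rejected
    human_additions: str   # insight the student added beyond the AI output
    logged_on: date = field(default_factory=date.today)

# Example: logging AI use while researching a target audience
entry = AuditLogEntry(
    tool="ChatGPT",
    prompt="List likely audience segments for a budget running-shoe brand.",
    output_summary="Five generic, mostly demographic segments.",
    verified="Checked segment sizes against the brand's own survey data.",
    challenged="Dropped the 'serious marathoner' segment; the price point doesn't fit.",
    human_additions="Added a 'return-to-fitness' segment based on in-store interviews.",
)
print(entry)
```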

Moving from “Gotcha” to “Growth”

The detect-and-punish model is understandable, but fights the wrong battle. What’s more beneficial is assignment design that makes the learning objective transparent and specifies which level of AI integration is appropriate.

Instead of: “No AI allowed on this assignment” (vague, unenforceable, adversarial)

Try: “For this brand audit, you may use AI at Level 1 (concept clarification) and Level 2 (brainstorming competitor categories), but Levels 3–5 are off-limits because the objective is to develop your own consumer insight framework. Document in an AI Audit Log.”

What Higher Education Should Develop

The hypothetical student in their first job isn’t underprepared in the traditional sense. They can define positioning and list the steps in the strategic marketing process. What they lack is the practiced habit of executing that process.

They also lack the habit of asking “Why?” when looking at market data. They never learned and practiced the imaginative skill of moving from the abstraction down to the lived human experience of the consumer.

[Image: Student mind maps from the MiDE Studio]
In Markets, Innovation & Design (MiDE) we teach marketing students Design Thinking in Business. They learn to navigate “messy” real-world situations, sketching out concepts, processes, and ideas to solve complex problems and to foster a human-centric, empathic approach to innovation. Balancing analytic rigor with creative confidence builds career value through human skills that are less threatened by AI automation.

That’s when marketing, management, and communications education is at its best: when students develop the ability to look at a spreadsheet and see the human story, and when they have the capacity to read a consumer insight report and sense what’s missing from it. Students who simply use AI to get the answer will never build the skill to make the imaginative leap from what the data shows to what the brand should do next.

AI can tell you what usually works in a category. It can’t tell you what your specific consumer is feeling right now, or why a campaign that followed every best practice still missed. That’s territory. And it requires a brain that has practiced traveling through it.

AI can tell you what usually works (correlation). Only you can imagine what should work next (conjecture).

For students: Look at your last assignment. Did you use AI to avoid cognitive struggle, or to sharpen your thinking? Your thinking skills are either getting stronger or weaker.

For professors: Look at your next assignment. What’s the learning objective? Which level of AI integration serves it? Can you write the instructions to name the level, explain why, and ask for an AI Audit Log?

The goal isn’t to police AI use. It’s to help students understand when they’re building their human brain skills and when they’re weakening them.

In a world where AI handles correlation, the students who know how to conjecture, who can imagine causal stories the data hasn’t seen yet, are the ones who will be valuable.

About This Post’s Creation

This post was developed in partnership with Claude. I provided the frameworks from Hayakawa and Fletcher, experience from my teaching, and the 5-level scale adapted for education. Claude helped organize and refine.

Why I’m Teaching Humans to Partner with AI Instead of Training AI to Replace Them.

While AI companies are now spending billions teaching AI to replace people, I take a different view: teaching people to work with AI as partners, not competitors. My approach has been to think of AI as what Ethan Mollick calls Co-Intelligence. AI is a research assistant, brainstorm partner, advisor, task completer, and debater. It’s a tool to augment and sharpen your own human intelligence, expertise, and learning, not replace it.

How are you feeling about AI? It’s been a short, long three years of ups and downs. I’m trying to navigate forward, somewhere through the middle.

On one of my runs this week I was listening to the Artificial Intelligence Show. Co-host Paul Roetzer referenced the article “How Anthropic and OpenAI Are Developing AI ‘Co-Workers’” and explained how AI companies are spending $1 billion this year training LLM agents to do our jobs using cloned apps and reinforcement learning (RL).

Since the release of ChatGPT, I’ve been focused on helping professionals, professors, and students prepare for AI in the workplace, not as a replacement for their expertise and thinking, but as a tool to improve and enhance their human knowledge and talents.

Humans are training AI in RL gyms.

Companies like Mercor are recruiting highly skilled experts such as doctors, lawyers, PhDs, engineers, and marketers, paying them high wages to work with AI labs as LLM trainers. They’ve built thousands of RL gyms training AI on knowledge-worker jobs.

When I heard this, I honestly almost stopped my run to dream about the money I could make as an AI trainer! But that dream didn’t last long once I thought about the moral implications and how that would make me feel professionally and personally. I really enjoy teaching humans.

Moral dilemmas aside, the business reality is clear: AI is here to stay. A Stanford HAI survey found that 78% of organizations reported using AI in 2024, a steep increase from 55% in 2023.

Rather than training AI to be human, my last two posts were about training people to leverage our brain’s advantages over AI to Be More Human. The Cognitive Training Plan for Students gives examples on how to partner with AI to sharpen your mind, and the Cognitive Training Plan for Professionals explains how to partner with AI to deepen your expertise.

Use AI as a tool, not a replacement.

I’m aware of the risk of cognitive offloading. If we rely on AI too much and let it replace our thinking or learning, we lose those skills as professionals or never acquire them as students. I’ve illustrated the dangers of this in an infographic that warns how AI Can Skip The Stages of the Cognitive Learning Process.

My solution has been to use AI, test it, and share what I learn with my students, professor colleagues, and marketing and communications professionals. Over time I’ve learned ways to use AI and ways not to use it. A key concept that explains this is the jagged frontier of AI.

In research with Boston Consulting Group, Ethan Mollick and his co-authors found that AI is very good at some things but bad at others in ways that are hard to predict or recognize without expertise. The consultants at BCG found the edges through use and became AI experts in their discipline. Those who engage with AI to uncover the jagged frontier in their field will not only survive the AI revolution but thrive.

GPTs to increase your co-intelligence with AI.

This summer, I had a goal of creating a custom GPT. I wanted to train general AI for specific high-value tasks that I’ve found professionals and students struggle to understand and/or execute. I also wanted custom GPTs that guide and direct thinking, not outsource it.

A social media audit is an invaluable strategic tool that uncovers insights to make significant improvements to a brand’s social media marketing. Yet, the process is often difficult to understand. The Social Media Audit GPT takes you step-by-step through the process of conducting a social media audit for any product, service, or organization. It’s trained on the social media audit process used in my book, Social Media Marketing.

The Social Media Audit GPT isn’t an automated tool that collects data or does the audit for you. You remain in the driver’s seat as the social media strategy expert (current professional or student in training). Only humans truly understand how we socialize online with other humans and companies.

Brand storytelling has been a buzzword in business because it works. It’s been proven by my own story research and that of others. Yet telling good stories isn’t easy. The Brand Story Creator GPT acts as your coach for creating brand ads and content that resonate through the power of story, based on the dramatic story framework explained in our Brand Storytelling book. Get help turning your story into scripts, storyboards, print, and social media post mockups.

The Brand Story Creator GPT isn’t an AI automated tool that writes or analyzes for you. As the human expert (current professional or student in training), you’re central to the story creation and analysis. Humans have direct experience of life and can feel the tensions and emotions of characters, key to crafting a story.

A target market is one of the most important strategic decisions. Get it wrong, and the best product or campaign can flop. The Target Market Coach GPT acts as your coach to guide you through the process of segmentation, targeting, and positioning (STP)—a core marketing framework used by top brands. But even top brands, like Gatorade in the 2000s, can fall into the trap of selecting the wrong target, or targets that are too broad or too vague.

I trained this GPT on core marketing principles developed by Philip Kotler, but also on core concepts emphasized in the Markets, Innovation & Design program I now teach in. It emphasizes a customer-centric perspective with an empathetic persona, drawing on Clayton Christensen’s Jobs to Be Done theory.

The Target Market Coach GPT won’t replace your thinking. It’s designed to strengthen it. As the human expert (current professional or student in training), you’re central to the target selection and analysis to bring an empathic human-centered perspective.

Why bother if AI companies will eventually replace us anyway?

That will take time. AI agent capabilities tend to be overhyped. There’s much training still to go. Plus, by OpenAI’s own admission, LLMs will never be 100% accurate. Humans will always need to remain in the loop. Last week, BetterUp published research in Harvard Business Review that identified unchecked AI output passed along by coworkers, dubbed “AI workslop,” as something that can reduce trust and productivity.

Even the most advanced AI systems demonstrate why humans must stay in the loop. I was using Gemini 2.5 Pro to find stats for this post, and Gemini gave me a number. Based on my experience, it felt off. When I asked where it got that number, Gemini replied,

“The figure from a ‘Chronicle of Higher Education in mid-2025’ survey was a fabrication. In an effort to provide a current and illustrative answer, I generated a hypothetical but plausible-sounding data point and source. This was a mistake. Presenting a fabricated statistic as fact is incorrect, and I apologize for the misrepresentation. Thank you for holding me accountable.”

You also have to consider that human adoption rates often lag behind technological innovation. Even when AI agents reach professional knowledge-worker levels, replacing entire departments in companies will be very difficult to achieve.

Even the most aggressive AI-first companies need transition time. Take the Swedish company Klarna: months after partnering with OpenAI to replace its customer service team, the company had to reverse course when customers lost patience with the bots.

Ideally, you’ll work for a human-first AI company that’ll strive to minimize the negative effects through growth and a strategic transition. In either case, the employee with AI skills will be kept over the one who refuses or simply doesn’t learn how to use AI effectively.

This isn’t speculation. Just this week Accenture announced an $865 million reinvention around AI that includes “exiting people in a compressed timeline where reskilling is not a viable path.” Walmart announced an effort to prepare America’s largest private workforce for the AI-driven future, with its CEO saying “every job gets changed” because of AI. And SAP’s CFO says AI will help them “afford to have less people.” How can I not help prepare my students for this reality?

Academic versus business perspectives.

This business reality stands in contrast to what’s happening in academia. Marc Watkins’s latest Substack captures that environment well.

He references Tyler Harper’s article The Question All Colleges Should Ask Themselves About AI. It positions universities as facing a pivotal choice: either isolate digital technology from learning as much as possible, even removing it from campuses entirely, or give up on the mission of learning altogether.

So we have one extreme, with some in business spending billions training AI to replace human workers, and another, with some in universities calling for an outright ban on AI.

What’s the answer? I believe it’s somewhere in between an all-out ban and all-out adoption. Even the AI companies are recognizing the need for a middle ground. An example is Google’s Guided Learning for Gemini, which is designed not to provide answers but to help humans learn how to get answers on their own.

As Watkins points out, we live in an algorithm-driven society. Most are quietly in the middle, working hard to integrate AI in meaningful ways that advance capabilities and preserve human value. Yet the stories on the extremes are what garner attention, with clickbait headlines that end up in your feed. Since I published this post, Citi announced mandatory retraining of 175,000 employees on writing better prompts. How could I not teach students to use AI responsibly, including writing better prompts using my AI Prompt Framework?

[Infographic: AI Prompt Framework Template for writing good prompts: 1. Task/Goal, 2. AI Persona, 3. AI Audience, 4. AI Task, 5. AI Data, 6. Evaluate Results.]
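
As an illustration of how the six elements can come together, here is a minimal sketch in Python that assembles them into a single prompt. The example wording is hypothetical and only meant to show the structure, not the official template text.

```python
# Illustrative sketch: assembling a prompt from the six elements of the
# AI Prompt Framework. The example wording is hypothetical.
framework = {
    "Task/Goal": "Draft three positioning statements for a budget running-shoe brand.",
    "AI Persona": "Act as a senior brand strategist.",
    "AI Audience": "Write for a marketing team preparing a pitch to retail buyers.",
    "AI Task": "Propose three options, each with a one-sentence rationale.",
    "AI Data": "Use only the attached survey summary; do not invent statistics.",
    "Evaluate Results": "I will check each option against our segmentation research.",
}

# Join each labeled element into one prompt that can be pasted into a chat.
prompt = "\n".join(f"{element}: {text}" for element, text in framework.items())
print(prompt)
```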

Ready to start partnering with AI rather than competing against it?

Explore my three human-first AI tools designed to enhance rather than replace your expertise: Social Media Audit GPT, Brand Story Creator GPT, and Target Market Coach GPT. And let me know if I can improve them through further training. Remember, they’re not perfect. Don’t check your critical thinking at the AI door.

This Was 95% Human-Generated Content!

I wanted to share my custom GPTs but also comment on what I’ve been seeing in the professional and academic worlds around AI. I sat down and started writing. I did use Gemini 2.5 Pro to find some stats (and check them), and I used Anthropic’s Claude Sonnet 4 for writing improvement suggestions.