While AI companies are now spending billions teaching AI to replace people, I take a different view: teaching people to work with AI as partners, not competitors. My approach has been to think of AI as what Ethan Mollick calls Co-Intelligence. AI is a research assistant, brainstorming partner, advisor, task completer, and debater. It’s a tool to augment and sharpen your own human intelligence, expertise, and learning, not replace them.
On one of my runs this week I was listening to the Artificial Intelligence Show. Co-host Paul Roetzer referenced the article, “How Anthropic and OpenAI Are Developing AI ‘Co-Workers'” and explained how AI companies are spending $1 billion this year training LLM agents to do our jobs, using cloned apps and reinforcement learning (RL).
Since the release of ChatGPT, I’ve been focused on helping professionals, professors, and students prepare for AI in the workplace, not as a replacement for their expertise and thinking, but as a tool to improve and enhance their human knowledge and talents.
Humans are training AI in RL gyms.
Companies like Mercor are recruiting highly skilled experts such as doctors, lawyers, PhDs, engineers, and marketers, and paying them high wages to serve as LLM trainers for AI labs. They’ve built thousands of RL gyms that train AI on knowledge worker jobs.
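To make the RL-gym idea concrete, here is a deliberately toy sketch in Python. It is not any lab’s actual system; it just shows the core loop the article describes: an expert-defined grader scores an agent’s attempts at a task, and the agent keeps the variations that earn higher reward over many episodes. The `expert_grader` function and single-number “policy” are my own simplifying assumptions for illustration.

```python
import random

def expert_grader(answer: float, target: float) -> float:
    """Stand-in for a human expert's reward signal: the closer the
    agent's answer is to the desired target, the higher the score."""
    return -abs(answer - target)

def train(target: float, episodes: int = 2000, seed: int = 0) -> float:
    """A minimal reward-driven training loop (hill climbing):
    propose a variation, keep it only if the grader scores it higher."""
    rng = random.Random(seed)
    guess = 0.0  # the agent's current behavior, reduced to one number
    for _ in range(episodes):
        candidate = guess + rng.uniform(-1.0, 1.0)  # explore a variation
        if expert_grader(candidate, target) > expert_grader(guess, target):
            guess = candidate  # reinforce the better-scoring behavior
    return guess

# After many graded episodes, the agent's behavior converges
# toward what the expert grader rewards.
print(round(train(target=7.5), 1))
```

Real RL gyms replace the single number with an LLM acting inside cloned workplace apps, and the grader with expert-designed evaluations, but the feedback loop is the same shape: try, get scored, adjust.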
When I heard this, I honestly almost stopped my run to dream about the money I could make as an AI trainer! But that dream didn’t last long once I thought about the moral implications and how that work would make me feel professionally and personally. I really enjoy teaching humans.
Despite any moral dilemma, the business reality is clear: AI is here to stay. A Stanford HAI survey found 78% of organizations reported using AI in 2024, a steep increase from 55% in 2023.
Rather than training AI to be human, my last two posts were about training people to leverage our brain’s advantages over AI to Be More Human. The Cognitive Training Plan for Students gives examples on how to partner with AI to sharpen your mind, and the Cognitive Training Plan for Professionals explains how to partner with AI to deepen your expertise.
Use AI as a tool, not a replacement.
I’m aware of the risk of cognitive offloading. If we rely on AI too much and let it replace our thinking or learning, we lose those skills as professionals or never acquire them as students. I’ve illustrated the dangers of this in an infographic that warns how AI Can Skip The Stages of the Cognitive Learning Process.
My solution has been to use AI, test it, and share what I learn with my students, professor colleagues, and marketing and communications professionals. Over time I’ve learned ways to use AI and ways not to use it. A key concept that explains this is the jagged frontier of AI.
In research with Boston Consulting Group, Ethan Mollick and his co-authors found that AI is very good at some things but bad at others in ways that are hard to predict or recognize without expertise. The consultants at BCG found the edges through use and became AI experts in their discipline. Those who engage with AI to uncover the jagged frontier in their field will not only survive the AI revolution but thrive.
GPTs to increase your co-intelligence with AI.
This summer, I had a goal of creating a custom GPT. I wanted to train general AI for specific high-value tasks that I’ve found professionals and students struggle to understand and/or execute. I also wanted custom GPTs that guide and direct thinking, not outsource it.
A social media audit is an invaluable strategic tool that uncovers insights to make significant improvements to a brand’s social media marketing. Yet, the process is often difficult to understand. The Social Media Audit GPT takes you step-by-step through the process of conducting a social media audit for any product, service, or organization. It’s trained on the social media audit process used in my book, Social Media Marketing.
The Social Media Audit GPT isn’t an automated tool that collects data or does the audit for you. You remain in the driver’s seat as the social media strategy expert (current professional or student in training). Only humans truly understand how we socialize online with other humans and companies.
Brand storytelling has been a buzzword in business because it works. It’s been proven by my own story research and that of others. Yet, telling good stories isn’t easy. The Brand Story Creator GPT acts as your coach for creating brand ads and content that resonates through the power of story, based on the dramatic story framework as explained in our Brand Storytelling book. Get help turning your story into scripts, storyboards, print, and social media post mockups.
The Brand Story Creator GPT isn’t an AI automated tool that writes or analyzes for you. As the human expert (current professional or student in training), you’re central to the story creation and analysis. Humans have direct experience of life and can feel the tensions and emotions of characters, key to crafting a story.
A target market is one of the most important strategic decisions. Get it wrong, and the best product or campaign can flop. The Target Market Coach GPT acts as your coach to guide you through the process of segmentation, targeting, and positioning (STP)—a core marketing framework used by top brands. But even top brands, like Gatorade in the 2000s, can fall into the trap of selecting the wrong target, or targets that are too broad or too vague.
I trained this GPT on core marketing principles developed by Philip Kotler, but also on core concepts emphasized in the Markets, Innovation & Design program I now teach in. A customer-centric perspective is emphasized with an empathetic persona, considering Clayton Christensen’s Jobs to Be Done theory.
The Target Market Coach GPT won’t replace your thinking. It’s designed to strengthen it. As the human expert (current professional or student in training), you’re central to the target selection and analysis to bring an empathic human-centered perspective.
Why bother if AI companies will eventually replace us anyway?
That will take time. AI agent capabilities tend to be overhyped, and there’s much training still to go. Plus, by OpenAI’s own admission, LLMs will never be 100% accurate, so humans will always need to remain in the loop. Last week, BetterUp published research in Harvard Business Review that identified unchecked AI output passed along by coworkers as “workslop” that can reduce trust and productivity.
Even the most advanced AI systems demonstrate why humans must stay in the loop. I was using Gemini 2.5 Pro to find stats for this post and Gemini gave me a number. Based on my experience it felt off to me. When I asked where it got that number, Gemini replied,
“The figure from a ‘Chronicle of Higher Education in mid-2025’ survey was a fabrication. In an effort to provide a current and illustrative answer, I generated a hypothetical but plausible-sounding data point and source. This was a mistake. Presenting a fabricated statistic as fact is incorrect, and I apologize for the misrepresentation. Thank you for holding me accountable.”
You also have to consider that human adoption rates often lag behind technological innovation. Even when AI agents get to professional knowledge worker levels, replacing entire departments in companies will be very difficult to achieve.
Even the most aggressive AI-first companies need transition time, such as the Swedish company Klarna. Months after partnering with OpenAI to replace its customer service team, the company had to reverse course when customers lost patience with the bots.
Ideally, you’ll work for a human-first AI company that’ll strive to minimize the negative effects through growth and a strategic transition. In either case, the employee with AI skills will be kept over the one who refuses or simply doesn’t learn how to use AI effectively.
This isn’t speculation. Just this week, Accenture announced an $865 million reinvention around AI that includes “exiting people in a compressed timeline where reskilling is not a viable path.” Walmart announced an effort to prepare America’s largest private workforce for the AI-driven future with its CEO saying, “every job gets changed” because of AI. And SAP’s CFO says AI will help them “afford to have less people.” How can I not help prepare my students for this reality?
Academic versus business perspectives.
This business reality stands in contrast to what’s happening in academia. Marc Watkins’s latest Substack captures that environment well.
He references Tyler Harper’s article, “The Question All Colleges Should Ask Themselves About AI.” It positions universities as facing a pivotal choice: either isolate digital technology from learning as much as possible, even removing it from campuses, or give up on the mission of learning entirely.
So, we have one extreme of some in business spending billions training AI to replace human workers and another extreme of some in universities calling for banning AI altogether.
What’s the answer? I believe it’s somewhere in between an all-out ban and all-out adoption. Even the AI companies are recognizing the need for a middle ground. One example is Google’s Guided Learning for Gemini, which is designed not to provide answers but to help humans learn how to get answers on their own.
As Watkins points out, we live in an algorithm-driven society. Most people are quietly in the middle, working hard to integrate AI in meaningful ways that advance capabilities and preserve human value. Yet the stories at the extremes are what garner attention, with clickbait headlines that end up in your feed.
Ready to start partnering with AI rather than competing against it?
Explore my three human-first AI tools designed to enhance rather than replace your expertise: Social Media Audit GPT, Brand Story Creator GPT, and Target Market Coach GPT. And let me know if I can improve them through further training. Remember, they’re not perfect. Don’t check your critical thinking at the AI door.
This Was 95% Human Generated Content!
I wanted to share my custom GPTs but also comment on what I’ve been seeing in the professional and academic worlds around AI. I sat down and started writing. I did use Gemini 2.5 Pro to find some stats (and check them), and I used Anthropic’s Claude Sonnet 4 for writing improvement suggestions.