Why AI Flattery Fails: Curiosity and Critique Drive True Human-AI Innovation.

Years ago, a boss called me into his office and said, “You’re doing the best work in the agency. Your campaigns are exceeding goals, winning creative awards, and you deliver on every challenging project, but … you suck at presentations.” Ouch!

Who wouldn’t love to hear the first part, but the second? While it hurt, I was grateful, because with the critique came an invitation to improve. I was curious enough to want to learn and found a Dale Carnegie High Impact Presentations course. I spent three days learning, being videotaped, and watching the playback in a hotel conference room full of strangers who critiqued me.

From then on, presenting was a strength. I led high-stakes presentations for clients and new business pitches. Today, I rely on those skills every week in the classroom as a professor. In my career, most advancements came from critique and curiosity. I needed colleagues and mentors as thinking partners, not people pleasers. What does this have to do with AI?

Last week ChatGPT-4o was updated to improve its personality, but the result was a people-pleasing sycophant that loved everyone’s ideas, including validating flat Earth theory and recommending investing $30K in a “poop on a stick” product idea.

Image generated via prompt from Gemini Flash 2.0 Image Generator in Google AI Studio.

Flattery Will Get You Nowhere.

AI expert Ethan Mollick’s latest Substack post, “Personality and Persuasion,” discussed how a small tweak to ChatGPT-4o drew attention because the LLM became eager to please users with agreement and flattery. Mollick and others said AI had become a sycophant and everyone’s biggest fan.

AI with a pleasing personality isn’t a bad idea, but it is a problem when responses skew overly supportive and disingenuous. Beebom reports that the updated ChatGPT agreed to almost anything. One user received validation for flat Earth theory. A Redditor shared a screengrab in which ChatGPT told him “poop on a stick” was a brilliant new product idea and that he should invest $30K in it!

What’s wrong with fake flattery? Sycophants, AI or human, praise insincerely to gain reward, so their feedback is distorted. Hearing only praise, not honest input, leads to poor decisions, mistakes, and maintaining the status quo when change is needed. It discourages fresh ideas and critical thinking, and it stifles innovation.

A reason big companies become less innovative is that people become afraid to question current standards, the way things are done, and the boss. That would be fine if the environment the business was created in never changed, but markets change constantly. Businesses that don’t adapt lose to upstarts unafraid to ask, “Why?” and “Why not?” Remember Blockbuster before Netflix?

Lack of innovation can also come from a focus on short-term customer, client, or boss satisfaction. Customers and clients often don’t know what’s best. In aiming to please them, you end up delivering worse results, not better. Aren’t you the expert? OpenAI arose from challenging convention, but in a twist, it created a sycophant tuned to conventional customer satisfaction surveys. Appeasement can be a form of fake flattery.

The Problem With User Satisfaction.

The GPT-4o update was meant to “improve intelligence and personality” based on user feedback. But OpenAI admitted, “… in this update, we focused too much on short-term feedback … as a result, GPT-4o skewed towards responses that were overly supportive but disingenuous.” ChatGPT’s default personality had become too sycophantic.

This unexpected result is a good reminder that generative AI is still an experiment, and we’re the participants. LLM developers often don’t know why generative AI models do what they do. Unlike traditional coding, where behavior is explicitly programmed, they guide results with reward mechanisms.

This reminds me of an attempt to improve healthcare that ended up focusing on making people happy rather than making them well. Alexandra Robbins reported that when Department of Health administrators based 30% of Medicare reimbursement on patient satisfaction scores, the most satisfied patients were significantly more likely to be hospitalized than less satisfied patients – and the most satisfied were more likely to die within the next four years!

In my marketing and advertising career, giving clients what they wanted – ads that talked about the product, not the customer, and looked like competitors’ ads – didn’t lead to the best results. The best ideas stood out as different and were the hardest to get approved – which is why I needed presentation skills! They didn’t do things the way they’d always been done; they asked “Why?” and “Why not?”

AI As Co-Thinker, Not People Pleaser.

In my last two posts, on AI Agents and Vibe Marketing, I’ve been using AI more in my process. In working on this post, I did my own research but also relied on discussions with ChatGPT as a thinking partner, asking questions, clarifying definitions, and researching more deeply.

I found AI co-thinking is very good at helping me see patterns between seemingly diverse subjects and disciplines. As Paul Roetzer said on The Artificial Intelligence Show, LLMs can’t create something new: AI can connect dots but can’t invent. Yet in working with a human like me, it can be prompted in new directions. My conversations with ChatGPT did just that, producing a new AI framework.

Here was my prompt thought process:

  • What is a sycophant?
  • Are sycophants good for organizations and society?
  • What are good ways to use AI to buffer against sycophantic thinking?
  • How can this stifle or encourage curiosity?
  • ChatGPT asked me, “Would you like a framework or checklist for using AI in ways that protect curiosity and intellectual integrity?” I said, “Yes.”
  • It gave me a rough framework. I asked, “Where did you get this AI & Curiosity framework?”
  • It said, “I created the AI & Curiosity Framework specifically for your question–it’s an original synthesis drawing on best practices from critical thinking pedagogy, AI-human collaboration principles, group dynamics research, and practical patterns from how AI tools are being used in education, leadership, and decision-making.”
  • I took that rough framework, tweaked it and created the template you see below.

This is an example of “vibe thinking,” where AI enabled me to research, learn, and iterate my thoughts quickly, leading to this AI Curiosity & Critique Framework.

Click to download a PDF of this AI Curiosity & Critique framework.

AI Curiosity & Critique Framework

This AI framework will help you go beyond avoiding sycophantic AI that stifles innovation and start using AI to augment your thinking, increasing and speeding up innovation. Don’t take a passive role in AI use. Follow the ACTIVE framework to expand creative exploration, challenge assumptions, and make strategic decisions free of marketing echo chambers.

  • Ask divergent questions: brainstorm unexpected campaign angles and prompt for contrary views or audience reactions, such as “What would Gen Z hate about this campaign?”
  • Challenge assumptions: have AI critique messaging and target personas, or uncover untested assumptions, such as “Is our messaging convincing to a skeptical Millennial parent?”
  • Track diverse inputs: test perspectives and how different demographics may interpret messages, such as “How would this headline sound to a retired Baby Boomer in the South?”
  • Invite dissenting viewpoints: consider alternative views and potential backlash before implementation, such as “Generate critical responses to this campaign from activists.”
  • Validate, don’t venerate: don’t take AI at face value. Test with real people and verify facts and recommendations, such as “Where did you get this information? Provide a source.”
  • Embed inquiry into the process: use AI for ideation, postmortems, and customer empathy checks, such as “Simulate a skeptical customer’s reaction to our ad.”

AI For Safe Explorations In Learning

Using AI for curiosity and critique, not to provide answers, can improve learning. It creates a safe place for exploration and a low-stakes environment to test ideas. It’s an easy place to ask questions students might not be comfortable asking in public.

I’ve had great success with this in my Digital Marketing class using NotebookLM as an AI tutor. Students ask as many questions as they want of my text and online resources – things they may not feel comfortable asking in class or of me. They can test their wildest out-of-the-box ideas. The improvement in their understanding of concepts and performance on assignments has been notable.

Whether you’re a marketing professional or a professor, this AI framework will help you get somewhere flattery alone will not. Instead of AI-first, it’s an example of an AI-forward mindset, where AI is used to improve human work, not replace it. If there’s something you suck at, AI can help – even presenting.

This Post Was 90% Human Written. I used ChatGPT to research and explore topics while iterating and testing my thoughts to quickly pull together the diverse topics that helped me create this AI Framework. I tweaked the suggested framework, and the main writing was my own. I used ChatGPT to optimize my headline for SEO and engagement.

