The AI Agents Are Coming! So Are The Reasoning Models. Will They Take Our Jobs And How Should We Prepare?

AI image generated using Google ImageFX from a prompt “Create a digital painting depicting Paul Revere on his midnight ride, but instead of a person riding the horse it is a futuristic robotic AI agent yelling 'The AI Agents are coming for your jobs!'"

Last fall I traveled to MIT to watch my daughter play in the NCAA volleyball tournament. On the way, we passed signs for Lexington and Concord, and AI agents were on my mind. There was a sudden buzz about AI agents and how they're coming for our jobs, and the image of Paul Revere came to mind.

Instead of warning that the Redcoats were coming to seize munitions at Concord, the Reveres of today warn of AI agents stealing our jobs. Then new AI reasoning models were released, causing another surge in discussion. As at Lexington Green, have the first shots been fired at our jobs by reasoning AI agents?

AI image generated using Google ImageFX from the  prompt “Create a painting depicting Paul Revere on his midnight ride, but instead of a person it is a robotic AI agent yelling ‘The AI Agents are coming for your jobs!’.” https://labs.google/fx/tools/image-fx

What is an AI agent?

Search interest in AI agents spiked in January. Search "AI agents" and Google returns 216 results; read through many of them and you'll find probably half as many definitions. For simplicity, I will begin by quoting the Marketing AI Institute's Paul Roetzer: "An AI agent takes action to achieve goals."

That doesn't sound scary. What's driving interest and fear is adding the word autonomous. Roetzer and co-founder Mike Kaput have created a helpful Human-to-Machine Scale that depicts five levels of AI autonomous action.

Marketing AI Institute’s Human-to-Machine Scale:

  • Level 0 is all human.
  • Level 1 is mostly human.
  • Level 2 is half and half.
  • Level 3 is mostly machine.
  • Level 4 is all machine or full autonomy.

Full autonomy over complete jobs is certainly fear-inducing! Large language model companies like OpenAI and Google, and SaaS companies integrating AI, are promising increased autonomous action. Salesforce has even named its AI product Agentforce, which literally sounds like an army coming to take over our jobs! Put some red coats on them and my Paul Revere analogy really comes to life.

Every player in AI is going deep.

In September, Google released a white paper, "Agents," to little attention. Now, after the release of reasoning models, everyone including VentureBeat is analyzing it. In the paper, Google predicts AI agents will reason, plan, and take action, including interacting with external systems, making decisions, and completing tasks – AI agents acting on their own with deeper understanding.

Recently OpenAI claimed that its new tool, Deep Research, can complete a detailed research report with references in "tens of minutes" – something that might take a human many hours. Google's DeepMind also has Deep Research, Perplexity has launched Deep Research, Copilot now has Think Deeper, and there is a new Chinese company called DeepSeek. The Redcoats are coming, but they need a marketing director to tell them about differentiation.

Graphs of Google Trends search data showing an increase in search for AI Agents and Reasoning Models.
Interest in and discussion about AI Agents and AI Reasoning Models has risen sharply. Graphs from https://trends.google.com/trends/

What is a reasoning model?

Google explains that Gemini 2.0 Flash Thinking is "our enhanced reasoning model, capable of showing its thoughts to improve performance and explainability." A definition for reasoning models may be even more difficult and contested than one for AI agents. The term returns 163 results in a Google search and perhaps just as many definitions.

For my definition of a reasoning model, I turn to Christopher Penn. In his “Introduction to Reasoning AI Models,” Penn explains, “AI – language models in particular – perform better the more they talk … The statistical nature of a language model is that the more talking there is, the more relevant words there are to correctly guess the next word.” Reasoning models slow down LLMs to consider more words through a process.

LLMs and reasoning models are not magic.

Penn further explains that good prompt engineering includes a chain of thought, reflection, and reward functions. Yet most people don't use them, so reasoning models make the LLM do it automatically. I went back to MIT, not for volleyball, but for further help on this definition: the MIT Technology Review explains that these new models use chain-of-thought prompting and reinforcement learning through multiple steps.
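To make the idea concrete, here is a minimal sketch of the kind of manual chain-of-thought prompting Penn describes – the structure a reasoning model now builds in automatically. The `build_prompt` helper and its wording are hypothetical illustrations, not taken from Penn or any specific tool.

```python
# A minimal sketch of manual chain-of-thought prompting.
# The build_prompt helper and prompt wording are hypothetical examples,
# not from any specific library or the sources cited in this post.

def build_prompt(question: str) -> str:
    """Wrap a question with chain-of-thought and reflection instructions."""
    return (
        f"Question: {question}\n\n"
        "First, think through the problem step by step, "
        "writing out each step of your reasoning.\n"
        "Then, reflect on your reasoning and check it for errors.\n"
        "Finally, state your answer on a line beginning with 'Answer:'."
    )

prompt = build_prompt("Which is larger, 9.11 or 9.9?")
print(prompt)
```

Pasting a prompt like this into any LLM nudges it to "talk more" before answering; a reasoning model effectively performs these steps on its own.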

An AI prompt framework, such as the one I created, will improve your results without a reasoning model. And you may not need a reasoning model for many tasks: reasoning models cost more and use more energy.

Many want a magical AI tool that does everything. In reality, different AI is better for different things. Penn explains generative AI is good with language, but for other tasks, traditional forms of AI like regression, classification, or even non-AI statistical models can be a better solution.

How we talk about AI matters.

Humans are attracted to the magical capabilities of AI. Folk tales like The Sorcerer's Apprentice, which you may know from Disney's Fantasia, are about objects coming to life to do tasks for us. Reasoning models are said to have agentic behavior – the ability to make independent decisions in pursuit of a goal. Intentional or not, "agentic" sounds like "angelic," bringing up mystical thoughts of angels and the supernatural.

Since the first post in my AI series, I've argued for maintaining human agency and keeping humans in the loop. Therefore, I want to be careful in how I talk about these new "reasoning" models that show us their "thinking." I agree with Marc Watkins's recent Substack post "AI's Illusion of Reason" that the way we talk about these AI models matters.

An AI model that pauses before answering and shows the process it followed doesn't mean it is thinking. It's still a mathematical prediction machine. It doesn't comprehend or understand what it is saying. Referring to ChatGPT or Gemini as "it" versus "he" or "she" (no matter the voice) matters.

Google Gemini 2.0 Flash Thinking
I asked Google’s reasoning model Gemini 2.0 Flash the difference between human thinking and AI “thinking.” From https://aistudio.google.com/

What’s the difference between human and AI thinking?

I asked Google’s reasoning model Gemini 2.0 Flash the difference between human thinking and AI thinking. It said, “AI can perform tasks without truly understanding the underlying concepts or the implications of its actions. It operates based on learned patterns not genuine comprehension.” Does this raise any concerns for you as we move toward fully autonomous AI agents?

Humans need to stay in the loop. Even then, you need a human who truly understands the subject, context, field, and/or discipline. AI presents its answers in a convincing well-written manner – even when it’s wrong. Human expertise and discernment are needed. Power without understanding can lead to Sorcerer’s Apprentice syndrome. A small mistake with an unchecked autonomous agent could escalate quickly.

Let’s make sure 2025 is not like 1984.

I recently got the 75th anniversary edition of George Orwell’s 1984. I hadn’t read it since high school. It was the inspiration behind Apple’s 1984 Super Bowl ad – an example of the right message at the right time. It may be a message we need again.

AI isn't right all the time or right for everything. It's confident and convincing even when it's wrong. No matter how magical AI's "thinking" seems, we must think on our own. As AI agents and reasoning models advance, discernment is needed, not unthinking acceptance.

The 250th anniversary of Paul Revere’s ride and the “Shot heard ‘round the world” is in April this year. Will AI agents and reasoning models be a revolution in jobs in 2025? In my next post, I take a deep dive into how AI may impact marketing and communications jobs and education. What’s your excitement or fear about AI agents and reasoning models?

This Was Human Created Content!

The Big Story About The Big Game for Super Bowl Ads is Brand Storytelling.

(Updated January 31, 2025)

For advertisers paying $8 million for a 30-second TV ad in the NFL Championship game, the big story isn't the Philadelphia Eagles versus Kansas City Chiefs, Jalen Hurts versus Patrick Mahomes, or even the odds on a Travis Kelce-Taylor Swift Super Bowl proposal.

Advertisers need to please a lot of eyeballs.

For them, Super Bowl LIX is about the 2025 Super Bowl of advertising and which brand ads will garner the most votes in the Super Bowl ad polls (winners get lots of press) and the most views on social media before, during, and after Sunday’s game. There’s a lot of pressure on marketing managers, ad agencies, and the creative team.

Nielsen reports 123 million people watched last year's Super Bowl LVIII, with 120 million in the U.S. – roughly 34% of the country. The most popular TV shows, like Yellowstone, only reach 11.5 million. How do you write a hit Super Bowl ad for TV and social media?

How are this year’s brand advertisers trying to please?

Adweek reports that 2025’s Super Bowl ad trends include nostalgia, celebrities, animals, Americana imagery, bro culture, and crowd-sourced commercials. Reports say there will be more ads for AI, not ads created by AI.

As an ad copywriter, I felt pressure with regular TV ads. I never had a national Super Bowl ad, but I did create one that ran locally during the Super Bowl. I also worked on Spot Bowl for years – our ad agency’s national Super Bowl ad ratings poll. I gave each ad a title and description as they ran so we could get them up on the website for voting.

Our research of Super Bowl ads found the best way to please is story.

So, what makes one ad likable enough to finish in the top ten of the USA Today Ad Meter and Spot Bowl while another lands in the bottom ten? When I became a professor, my colleague Michael Coolsen and I asked that very question. Was it humor or emotion? Sex appeal or cute animals? This year, will it be nostalgia or using TikTok influencers?

Our two-year analysis of 108 Super Bowl commercials, published in the Journal of Marketing Theory and Practice, found the key to popularity was telling a story. It didn't matter whether an ad had animals or celebrities, humor or sex appeal; the underlying factor in likability was a plot. Super Bowl ad poll ratings were higher for ads that followed a full five-act story arc, and the more acts a commercial had, the higher its ratings.

The key is a five-act dramatic story structure.

Why five acts? Remember studying five-act Shakespearean plays in high school? There's a reason Shakespeare was so popular and why he told stories in five acts. It's a powerful formula that has drawn people's attention for hundreds of years.

The classical drama framework we used was conceived by Aristotle, followed by Shakespeare, and depicted as a pyramid by German novelist and playwright Gustav Freytag. His theory of drama advanced Aristotle's to include a more precise five-act structure, as seen below.

Five-act stories also draw views and shares in social media.

Ad rating polls of TV ads are one thing, but how does a story perform in social media? We wanted to find out, so we conducted another research study published in the Journal of Interactive Marketing. We analyzed 155 viral advertising YouTube videos from randomly selected brands in different industries over a year.

Videos that told a more developed or complete story had significantly higher shares and views. We coded the videos based on the same five-act dramatic structure in Freytag’s Pyramid: introduction, rising action, climax, falling action, and resolve.
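The coding scheme can be sketched as a small script. The ad names and act codings below are hypothetical illustrations only; our published studies used hand-coded samples of real commercials.

```python
# Sketch of coding ads against Freytag's five-act dramatic structure.
# The ad names and codings below are hypothetical, for illustration only.

ACTS = ["introduction", "rising action", "climax", "falling action", "resolve"]

# 1 = the act is present in the ad, 0 = absent
coded_ads = {
    "Ad A": [1, 1, 1, 1, 1],  # a full five-act story
    "Ad B": [1, 1, 1, 0, 0],  # a three-act ad
    "Ad C": [0, 0, 0, 0, 0],  # a zero-act ad
}

for name, acts in coded_ads.items():
    print(f"{name}: {sum(acts)} of {len(ACTS)} acts")
```

Tallying act counts this way is what lets you compare partial stories against full five-act arcs across a sample of ads.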

Analyze this year’s Super Bowl ads for story with this template.

Try doing a little storytelling analysis for yourself! Use the downloadable template below. It describes what needs to happen in each act on the left. Then on the right fill out your description of what happened as you watch the Super Bowl ads.

Some will have all five acts. Some will have only three, two, one, or even zero. In our viral ad study, only 25% of the sample were five-act stories; in fact, there were more zero-act ads, at 31%. After coding for the number of acts, compare your results to see how the ads fare in the two ad polls (Ad Meter, Spot Bowl) and in YouTube views.

Budweiser’s Clydesdales are back this year. How will they do?

Budweiser is bringing back its storied Clydesdale ads for a second year after abandoning them in 2015. The Clydesdale ads were storied because they told full five-act stories and finished in the top five of USA Today's Ad Meter eight times in ten years.

In 2014, I successfully predicted that Bud's Clydesdale ad "Puppy Love" would be the winner because it was a full five-act story, and it did finish first in the ad polls.

In 2016, I successfully predicted their first non-Clydesdale ad "Don't Back Down" would not finish in the top 10 because it did not tell a complete story – it finished 28th. I recently found an article from iSpot.tv whose data confirms our academic research findings.

If you're interested in applying story to all forms of marketing communications, our book Brand Storytelling explains how to follow this five-act dramatic form for TV, online video, and all IMC touchpoints such as print ads, banner ads, direct, radio, and PR.

This Was Human Created Content!