The AI Agents Are Coming! So Are The Reasoning Models. Will They Take Our Jobs And How Should We Prepare?

AI image generated using Google ImageFX from a prompt “Create a digital painting depicting Paul Revere on his midnight ride, but instead of a person riding the horse it is a futuristic robotic AI agent yelling 'The AI Agents are coming for your jobs!'"

Last fall I traveled to MIT to watch my daughter play in the NCAA volleyball tournament. On the way, we passed signs for Lexington and Concord. AI agents were on my mind: there was a sudden buzz about AI agents and how they’re coming for our jobs. The image of Paul Revere came to mind.

Instead of warning that the Redcoats were coming to seize munitions at Concord, the Reveres of today warn of AI agents stealing our jobs. Then new AI reasoning models were released, causing another surge in discussion. As at Lexington Green, have the first shots been fired at our jobs by reasoning AI agents?

AI image generated using Google ImageFX from the prompt “Create a painting depicting Paul Revere on his midnight ride, but instead of a person it is a robotic AI agent yelling ‘The AI Agents are coming for your jobs!’.” https://labs.google/fx/tools/image-fx

What is an AI agent?

Search interest in AI agents spiked in January. If you search “AI agents,” Google returns 216 results. Reading through many of them, there are probably half as many definitions. For simplicity, I will begin by quoting the Marketing AI Institute’s Paul Roetzer: “An AI agent takes action to achieve goals.”

That doesn’t sound scary. What’s driving interest and fear is adding the word autonomous. Roetzer and co-founder Mike Kaput have created a helpful Human-to-Machine Scale that depicts five levels (0 through 4) of autonomous AI action.

Marketing AI Institute’s Human-to-Machine Scale:

  • Level 0 is all human.
  • Level 1 is mostly human.
  • Level 2 is half and half.
  • Level 3 is mostly machine.
  • Level 4 is all machine or full autonomy.

Full autonomy over complete jobs is certainly fear-inducing! Large language model companies like OpenAI and Google, along with SaaS companies integrating AI, are promising increased autonomous action. Salesforce has even named its AI product Agentforce, which literally sounds like an army coming to take over our jobs! Put some red coats on them and my Paul Revere analogy really comes to life.

Every player in AI is going deep.

In September, Google released a white paper, “Agents,” to little attention. Now, after the release of reasoning models, everyone including VentureBeat is analyzing it. In the paper, Google predicts AI agents will reason, plan, and take action. This includes interacting with external systems, making decisions, and completing tasks – AI agents acting on their own with deeper understanding.

OpenAI claims its new tool Deep Research can complete a detailed research report with references in “tens of minutes,” something that may take a human many hours. Google’s DeepMind also has Deep Research, Perplexity has launched Deep Research, Copilot now has Think Deeper, Grok 3 has a Deep Search tool, and there’s the new Chinese company DeepSeek. Anthropic has now released what it calls the first hybrid reasoning model: Claude 3.7 Sonnet can produce near-instant responses or extended step-by-step thinking that is made visible. The Redcoats are coming, and they’re all in on deep thinking.

Graphs of Google Trends search data showing an increase in search for AI Agents and Reasoning Models.
Interest in and discussion about AI Agents and AI Reasoning Models has risen sharply. Graphs from https://trends.google.com/trends/

What is a reasoning model?

Google explains that Gemini 2.0 Flash Thinking is “our enhanced reasoning model, capable of showing its thoughts to improve performance and explainability.” A definition for reasoning models may be even more difficult and contested than one for AI agents. The term returns 163 results in a Google search and perhaps just as many definitions.

For my definition of a reasoning model, I turn to Christopher Penn. In his “Introduction to Reasoning AI Models,” Penn explains, “AI – language models in particular – perform better the more they talk … The statistical nature of a language model is that the more talking there is, the more relevant words there are to correctly guess the next word.” Reasoning models slow LLMs down so they consider more words through a structured process.

LLMs and reasoning models are not magic.

Penn further explains that good prompt engineering includes chain of thought, reflection, and reward functions. Yet most people don’t use them, so reasoning models make the LLM do it automatically. I went back to MIT, not for volleyball, but for further help with this definition. The MIT Technology Review explains that these new models use chain-of-thought prompting and reinforcement learning through multiple steps.

An AI prompt framework, such as the one I created, will improve your results without reasoning. You also may not need a reasoning model for many tasks; reasoning models cost more and use more energy. Experts like Trust Insights recommend slightly different prompting for reasoning models, such as Problem, Relevant Information, and Success Measures. Brooke Sellas of B Squared Media shared OpenAI President Greg Brockman’s reasoning prompt of Goal, Return Format, Warnings, and Context Dump.
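As a concrete illustration, Brockman’s four-part structure can be assembled programmatically. Here is a minimal Python sketch; the function name and the example goal, format, warnings, and context are all hypothetical, not taken from any official template.

```python
# A minimal sketch of a four-part reasoning prompt (Goal, Return Format,
# Warnings, Context Dump). All example content below is hypothetical.

def build_reasoning_prompt(goal, return_format, warnings, context):
    """Assemble the four labeled sections into one prompt string."""
    return "\n\n".join([
        f"Goal:\n{goal}",
        f"Return Format:\n{return_format}",
        f"Warnings:\n{warnings}",
        f"Context Dump:\n{context}",
    ])

prompt = build_reasoning_prompt(
    goal="Summarize current trends in the ice cream industry.",
    return_format="A bulleted list of 5 trends, each with a one-sentence explanation.",
    warnings="Cite only sources you are confident exist; flag any uncertainty.",
    context="The client is a regional premium ice cream brand targeting 25-34-year-olds.",
)
print(prompt)
```

Keeping the four sections explicit makes it easy to reuse the same goal with different context dumps across clients.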

Many want a magical AI tool that does everything. In reality, different AI is better for different things. Penn explains generative AI is good with language, but for other tasks, traditional forms of AI like regression, classification, or even non-AI statistical models can be a better solution.

How we talk about AI matters.

Humans are attracted to the magic capabilities of AI. Folk tales like The Sorcerer’s Apprentice, which you may know from Disney’s Fantasia, are about objects coming to life to do tasks for us. Reasoning models are said to have agentic behavior – the ability to make independent decisions in pursuit of a goal. Intentional or not, “agentic” sounds like “angelic,” bringing up mystical thoughts of angels and the supernatural.

Since the first post in my AI series, I’ve argued for maintaining human agency and keeping humans in the loop. Therefore, I want to be careful in how I talk about these new “reasoning” models that show us their “thinking.” I agree with Marc Watkins’ recent Substack post “AI’s Illusion of Reason” that the way we talk about these AI models matters.

An AI model that pauses before answering and shows the process it followed doesn’t mean it is thinking. It’s still a mathematical prediction machine. It doesn’t comprehend or understand what it is saying. Referring to ChatGPT or Gemini as “it” versus “he” or “she” (no matter the voice) matters.

Google Gemini 2.0 Flash Thinking
I asked Google’s reasoning model Gemini 2.0 Flash Thinking the difference between human thinking and AI “thinking.” From https://aistudio.google.com/

What’s the difference between human and AI thinking?

I asked Google’s reasoning model Gemini 2.0 Flash Thinking the difference between human thinking and AI thinking. It said, “AI can perform tasks without truly understanding the underlying concepts or the implications of its actions. It operates based on learned patterns, not genuine comprehension.” Does this raise any concerns for you as we move toward fully autonomous AI agents?

Humans need to stay in the loop. Even then, you need a human who truly understands the subject, context, field, and/or discipline. AI presents its answers in a convincing, well-written manner – even when it’s wrong. Human expertise and discernment are needed. Power without understanding can lead to Sorcerer’s Apprentice syndrome. A small mistake with an unchecked autonomous agent could escalate quickly.

In a Guardian article, Andrew Rogoyski, a director at the Institute for People-Centred AI, warns of people using AI deep-research responses verbatim without checking what was produced. Rogoyski says, “There’s a fundamental problem with knowledge-intensive AIs and that is it’ll take a human many hours and a lot of work to check whether the machine’s analysis is good.”

Let’s make sure 2025 is not like 1984.

I recently got the 75th anniversary edition of George Orwell’s 1984. I hadn’t read it since high school. It was the inspiration behind Apple’s 1984 Super Bowl ad – an example of the right message at the right time. It may be a message we need again.

AI isn’t right all the time and right for everything. It’s confident and convincing even when it’s wrong. No matter how magical AI’s “thinking” seems, we must think on our own. As AI agents and reasoning models advance, discernment is needed, not unthinking acceptance.

The 250th anniversary of Paul Revere’s ride and the “Shot heard ‘round the world” is in April this year. Will AI agents and reasoning models be a revolution in jobs in 2025? In my next post, “How Will AI Agents Impact Marketing Communications Jobs & Education? See Google’s AI Reasoning Model’s “Thoughts” And My Own” I take a deep dive into how AI may impact marketing and communications jobs and education. What’s your excitement or fear about AI agents and reasoning models?

This Was Human Created Content!

AI Task Framework: Examples of What I’d Outsource To AI And What I Wouldn’t.

Copilot created this image of a college-age man sitting sad in a basement, looking lonely at an old, dusty, unused exercise bike.

This is the second post in a series of five on AI. In my last post, I introduced an AI task framework to be more intentional about why and how we use AI in our jobs, businesses, or organizations. In this post, I give examples based on my previous advertising career.

AI Framework Template for AI Use. Click on the image to download a PDF template.

As an advertising copywriter, some everyday Tasks and Goals included:

  1. Fill out timesheets detailing what I worked on each day to bill time to clients and projects to get paid.
  2. Research a client’s business and industry to demonstrate knowledge of their unique challenges and opportunities.
  3. Create ideas for campaigns and individual ads to sell to a client and publish to meet marketing objectives.
  4. Write social media ad copy for social media marketing to generate engagement and conversions for a client.

(1.) I would outsource timesheets to AI.

I envision an AI assistant that Extracts (AI Function) file use logs from programs like Microsoft Word, Categorizes (AI Function) by job number, and Creates (Level of Thinking) a spreadsheet listing client, job, and time. I could review and adjust it before submitting.
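The Extract, Categorize, Create flow described above can be sketched in a few lines. This is a minimal illustration under assumed inputs, not Microsoft’s implementation; the log format, job numbers, and client names are all hypothetical.

```python
import csv
import io
from collections import defaultdict

# Hypothetical file-use log entries Extracted from a program like Word:
# date, job number, client, file, minutes.
log_lines = [
    "2025-01-06,JOB-1041,AcmeIceCream,print_ad_v2.docx,45",
    "2025-01-06,JOB-1041,AcmeIceCream,print_ad_v3.docx,30",
    "2025-01-06,JOB-2203,PelotonPitch,social_copy.docx,60",
]

def categorize(lines):
    """Categorize: total minutes by (client, job number)."""
    totals = defaultdict(int)
    for line in lines:
        _date, job, client, _file, minutes = line.split(",")
        totals[(client, job)] += int(minutes)
    return totals

def to_csv(totals):
    """Create: a draft timesheet spreadsheet for human review."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["client", "job", "minutes"])
    for (client, job), minutes in sorted(totals.items()):
        writer.writerow([client, job, minutes])
    return buf.getvalue()

print(to_csv(categorize(log_lines)))
```

The human stays in the loop at the last step: the CSV is a draft to review and adjust before submitting.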

After thinking of this example, I discovered that Microsoft is adding this capability. Copilot for time entry creates time entries for team members without their navigating through forms or filling in details with dropdowns, generating first drafts for users to modify and confirm before timesheet submission.

The Level of Thinking in this example is Applying a process to Create a suggestion for my time entry (AI Capabilities). It doesn’t require creativity or imagination, and I maintain final human judgment on accuracy (Distinctive Human Skill). Because it tracks only job numbers, no Copyrighted or Proprietary data is used. The human impact is positive: everyone I knew hated timesheets. We loved coming up with ideas (Legal & Ethical Use).

(2.) AI could help with some aspects of client research.

AI could Answer Questions (AI Function) like “What are the current challenges and opportunities in the ice cream industry?” An open system like GPT would give me general answers based on open sources from the internet that may or may not be the most current, accurate, or relevant.

AI is Understanding (Level of Thinking) on a cursory level (AI Capability). To contextualize this understanding to your client and judge for accuracy (Distinctive Human Skill) you need proprietary data from paid databases like Mintel, your client, or your own research. Your personal experience with the client or industry is an added Distinctive Human Skill.

You could outsource this to AI by uploading proprietary data into an AI model to Summarize and Answer Questions (AI Function). But you’d be uploading Copyrighted/Proprietary material without permission (Legal & Ethical Use). Mintel forbids input into AI systems, and clients are adding AI restrictions to contracts to protect their data from training LLMs a competitor could use.

Some are developing closed AI systems, versus open ones, that run locally, storing data on their own computers rather than in the cloud. The ad/PR agency network Publicis is investing in an internal AI built on proprietary data. When available, this could be a great way to quickly get up to speed on a business and industry.

How much I’d outsource depends on my previous experience. If it were a new client or a market I was unfamiliar with, I might worry about how much I’d Understand (Level of Thinking) or Remember (Level of Thinking) if AI did it all. In an in-person meeting, could I recall or contextualize the information on the fly?

(3.) AI could help with some parts of idea generation.

I would outsource some brainstorming to AI, not idea formation, but AI could give me more material for ideas by Answering Questions (AI Function). Let’s say my client wants to sell water bottles to 25-34-year-olds. I could ask “What do 25-34-year-olds who work out look for in a water bottle?” and “What are current trends with 25-34-year-olds who work out?”

With these prompts, GPT via Copilot Created (Level of Thinking) a list of alternatives (AI Capability). From the list, I put together the feature “one-hand operation” with the trend of “functional fitness.” Then I Asked for functional fitness examples. From that list, I put together a humorous image or video scene of a young woman easily sipping out of her Owala water bottle with one hand while swinging a heavy kettlebell with the other. This formed an original solution (Distinctive Human Skill).

Evaluating AI responses and knowing what to Ask (Level of Thinking) comes from knowledge of the client, problem, market, target, and trends to discern the best ideas and identify AI hallucinations. I’d also use my domain expertise of what concepts are good, Remembering (Level of Thinking) from my long-term memory of 17 years of creating ideas for clients (Distinctive Human Skill).

I wouldn’t have AI write ad copy or scripts directly. If it isn’t mostly Created by a human, it can’t be copyrighted to sell to your client or to protect them from use by competitors (Legal & Ethical Use). I’d also check with my agency and client for specific restrictions on AI. Your Knowledge (Level of Thinking) of the client means humans (Distinctive Human Skill) are better at Creating (Level of Thinking) less generic, more human copy and scripts.

(4.) AI could help in parts of social media campaign creation.

AI could help brainstorm content by Answering (AI Function) “What kind of content do 25-34-year-olds who work out like to see on social media?” I’d Evaluate (Level of Thinking) AI’s best suggestions (Distinctive Human Skill). One was “personal anecdotes.” It reminded me of an insight I read in a Mintel report about unused home workout equipment.

I combined this with the copy “Peloton brings the motivation of a community to your home.” This gave me a visual idea built around unused home workout equipment. I could mock up the social idea using AI to Generate (AI Function) the image, asking “Create an image of an unused, dusty, stationary bike in a basement with a lonely looking guy” (Level of Thinking). This image would help me sell the idea to the client.

Generated with AI (DALL-E 3 via Copilot Designer ∙ June 25, 2024 at 1:33 PM)

After approval, my art director and I would consider Copyright issues. Using AI-created artwork for commercial use is unsettled due to sources for training data. Adobe Firefly claimed to be copyright-compliant, but revelations about training data may put Firefly users at legal risk. A trusted photographer may be best to ensure compliance (Legal & Ethical Use).

We’d also consider that the medium sends a message. Does an artificial human and image support Peloton’s message of genuine human connection? I’d weigh the risk of the uncanny valley: when tech gets too close to human, people get an unsettled feeling. That creepy feeling can transfer into negative feelings about the brand. Toys R Us and Under Armour have faced backlash for using AI-generated video in this way. Google sparked backlash over an ad where a dad had AI write a letter for his daughter because it had to be perfect (Legal & Ethical Use).

I can’t help thinking about the human impact. I’ve worked with many talented creators who add to my ideas with their expertise. If we all decide to use AI instead, photographers, models, illustrators, designers, and writers lose their livelihoods. Levi’s faced a backlash after announcing they’d use AI-generated models (Legal & Ethical Use).

Creating content variations (AI Capability) is a tedious part of social media. AI could help Generate (AI Function) variations to fit different platforms. I could ask, “Write this copy ‘Peloton brings the motivation and community of a gym to the convenience of your home’ in 10 different ways.” I could also tell it to write to a specific length for each platform’s character limits. This type of AI outsourcing is already happening: Meta Ads Manager is adding Text Variations, and the social media management software Hootsuite has OwlyWriter AI.
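Fitting variations to character limits is simple to automate before anything touches a scheduling tool. A minimal sketch, where the limits are approximate assumptions that should be verified against each platform’s current documentation:

```python
# Approximate, assumed per-platform character limits; verify against each
# platform's current documentation before relying on them.
PLATFORM_LIMITS = {
    "x": 280,
    "instagram": 2200,
    "linkedin": 3000,
}

def fits_platform(copy: str, platform: str) -> bool:
    """Return True if the copy fits within the platform's character limit."""
    return len(copy) <= PLATFORM_LIMITS[platform]

variation = ("Peloton brings the motivation and community of a gym "
             "to the convenience of your home.")

for platform in PLATFORM_LIMITS:
    print(platform, fits_platform(variation, platform))
```

A human still picks which variations sound on-brand; the check only filters out ones that won’t fit.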

Going through this AI task exercise makes me hopeful.

Breaking down my job into tasks and making intentional decisions on what to outsource to AI gives me hope. It reminds me of our human agency. It helps me visualize what Ethan Mollick describes in his book Co-Intelligence. Instead of replacing all human tasks, we can use AI as a Centaur (division of tasks) or a Cyborg (intertwined, alternating subtasks).

Once you decide what tasks to outsource you need to know how to ask AI to get the best results. In my next post, I’ll dive deeper into prompt writing.

This Was Human Created Content!