Social Media Audit GPT: How I Built It & How To Create Your Own GPT for Work or Learning.

As I’ve integrated AI into my courses, I’ve gained experience with custom GPTs. They can be more beneficial than broad AI use because they focus on a single task or project to help the user, whether student, professor, or professional. For example, in a previous blog post I used JobsGPT to help predict how AI will impact the skills marketers need in the future so that I can adjust my course material.

I was also recently inspired by an article in the Chronicle of Higher Education. In “Teaching: Can AI actually help students write authentically?” Beth McMurtrie shares how Jeanne Beatrix Law, director of composition at Kennesaw State University, created a custom GPT, Writing Guide Assistant. Law found a way to engage students with AI to teach critical thinking and the writing process through prompting rather than having AI write for students.

I also realize my students need experience working with AI, such as custom GPTs and agents, to prepare for today’s marketing jobs. The latest CMO Survey reports that the use of generative AI in marketing has increased 116% since 2024 and now accounts for 15% of marketing activities. As a hopeful sign, the same survey reports that companies are still growing their marketing teams: 5.3% growth last year and a predicted 5.0% in 2025.

My Social Media Audit GPT, available now: an AI-assisted social media strategy tool.

The Primary Goal of My GPT

My goal in creating the Social Media Audit GPT was to give students a learning assignment that teaches an important course concept step by step. Social media audits are a powerful strategic tool, but students often struggle to understand them completely, even with the new examples in the 4th edition of my Social Media Strategy book.

This custom GPT walks students and professionals through the process of completing a social media audit via prompting, and users can ask questions at any point along the way. It also has the benefit of drawing on specific source materials, which improves accuracy.

To create the Social Media Audit GPT, I gave it an article I wrote on this blog last year detailing the process for conducting a social media audit, along with a social media audit template. I see the custom GPT as a great support to in-person instruction, giving each student 24/7 access to the way I would tutor them in this key concept. For those using my Social Media Strategy text in classes, it is a great supplement to support your instruction.

Social Media Audit Template To Improve Social Media Marketing Strategy.
I trained the GPT on the Social Media Audit template from my Social Media Strategy book.

Secondary Goals of My GPT

A secondary goal was to show students how to use AI responsibly to empower their learning, not harm it. Creating a custom GPT demonstrates AI integration and teaches AI literacy, in contrast to AI bans that label all AI use as cheating. It models responsible AI use for students who may be tempted to have AI complete assignments for them.

Another secondary goal is to teach students how to work with AI as a partner in developing marketing strategies. The GPT is not a replacement for the person creating a social media strategy for an employer or client; the AI agent doesn’t complete the audit.

I instructed the GPT not to collect data for the user but to prompt them to formulate their own insights. The real value of a social media audit comes from getting into each social media platform and seeing what’s happening with your own eyes. I built the AI as a strategy development assistant, demonstrating how students and professionals can use custom GPTs and AI agents in their current or future marketing careers.

How I Created The Custom GPT

I had a working model of this Social Media Audit GPT several weeks ago as a Microsoft Copilot Agent (AI-powered assistant), but it was stuck inside my institution: Copilot Agents can only be shared with individuals or groups in your organization. Google Gemini Gems (custom AI experts) and Anthropic Claude Projects (curated sets of knowledge) have similar limitations, in that your custom AI agent, Gem, or Project can only be shared internally within your organization.

Only OpenAI’s custom GPTs can be published on the open web and in the mobile app to be shared publicly. Anyone can use custom GPTs with a free ChatGPT account, but to create one you need at least a paid ChatGPT Plus plan ($20 a month). Before this, all my AI use was limited to models and tools I could access for free (so my students wouldn’t have to pay).

Yet with custom GPTs, I was in the opposite situation. As Marc Watkins explained recently, while OpenAI and Google are giving away premium subscriptions to students, they have not extended that offer to professors. I finally secured some funding to purchase a paid ChatGPT account.

One thing I like about my blog is that I own it and control what is published there. With this GPT, I’m relying on OpenAI to host it for me. If I downgrade to a free account, I can’t access it. Thus, I’m locked into paying $20 a month to manage and update it. OpenAI, if you’re reading, please extend free premium accounts to educators, not just students.

GPTs Are Essentially Good Prompts

What is a custom GPT? OpenAI calls it “a version of ChatGPT for a specific purpose.” MIT Sloan explains, “Custom GPTs are helpful AI tools tailored for specific domains or contexts. GPTs differ from standard chats through ChatGPT due to custom instructions and the ability to keep a knowledge base in addition to what ChatGPT has been trained on. This allows users to create a custom GPT to address a specific need that might be hard for ChatGPT to achieve on its own. The process … requires no code, and involves using specific prompts and your own data to provide insights into a particular field.”

AI Prompt Framework Template with 1. Task/Goal 2. AI Persona 3. AI Audience 4. AI Task 5. AI Data 6. Evaluate Results.
AI Prompt Framework Template for writing good prompts – what you need to create a GPT.

Creating a custom GPT is essentially writing a good, detailed prompt; everyone who uses the GPT begins their chat with that background and knowledge already in place. In creating my Social Media Audit GPT, I wrote a long prompt explaining what I wanted it to do, following my AI Prompt Framework of Task/Goal, Persona, Audience, Task, Data, and Results.
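
For those who like to see the concept in code, here is a minimal sketch of the same idea using the OpenAI Python SDK: the “custom” behavior comes from a detailed system prompt, organized by the framework, that every conversation starts from. The instructions and model name below are illustrative assumptions, not my actual GPT configuration.

```python
from openai import OpenAI  # official OpenAI Python SDK

# Illustrative instructions organized by the AI Prompt Framework:
# Task/Goal, Persona, Audience, Task, Data, Results.
SYSTEM_PROMPT = """
Task/Goal: Guide the user step by step through a social media audit.
Persona: Act as a marketing professor tutoring a student one-on-one.
Audience: Marketing students and professionals new to social media audits.
Task: Ask one audit question at a time. Have the user visit each platform
      and report what they see; never collect the data for them.
Data: Ground your guidance in the audit article and template you were given.
Results: End by prompting the user to state their own insights and strategy.
"""

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # any current chat model works for this sketch
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I want to audit my brand's social media."},
    ],
)
print(response.choices[0].message.content)
```

The GPT builder does this for you without writing any code: your instructions become the standing system prompt, and your uploaded files become the knowledge base.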

In the image below, I marked up my GPT prompt against the sections of the AI Prompt Framework. The text at the top is my original prompt from building the Copilot Agent, with adjustments. The text at the bottom right shows the adjustments I made for the custom GPT.

Custom GPT and Copilot Agent prompts to create Social Media Audit GPT.

Test Your GPT To Make Changes

An important part of this process is testing your GPT as a typical user to see how well it performs. If you find something wrong, simply tell the GPT what you like and what needs to change. You can test it in a Preview column next to where you write the GPT’s instructions.

One of the first adjustments I made was to clarify that I wanted the GPT to have the user visit each social platform and report the results themselves; an earlier version searched the web and reported back its own analysis. I tested the social audit GPT with a running brand (see below).

I like to run, so I chose to test the Social Media Audit GPT with Saucony running shoes and apparel.

Once you’ve tested the GPT, you’re ready to publish! Click the “Create” button in the top right, then click “Share” at the top right. In the pop-up screen, select “Only me,” “Anyone with the link,” or “GPT Store access.” After choosing the GPT Store, your GPT will be available at https://chatgpt.com/gpts for anyone with a ChatGPT account to access. Search for it by name or click “My GPTs.”

The custom GPT you make is only limited by your discipline knowledge, the data you provide, and the strength of your prompt.

Have you explored creating a Copilot Agent, Gemini Gem, or OpenAI custom GPT? How might you use one in your teaching or professional practice?

Please try the Social Media Audit GPT and share any feedback you have. A great feature of custom GPTs is that you can revise and update them.

This Was Human Created Content!

The AI Agents Are Coming! So Are The Reasoning Models. Will They Take Our Jobs And How Should We Prepare?

Last fall I traveled to MIT to watch my daughter play in the NCAA volleyball tournament. On the way, we passed signs for Lexington and Concord. AI agents were on my mind; there was a sudden buzz about them and how they’re coming for our jobs. The image of Paul Revere sprang to mind.

Instead of warning that the Redcoats were coming to seize munitions at Concord, the Reveres of today warn of AI agents stealing our jobs. Then new AI reasoning models were released, causing another surge in discussion. As at Lexington Green, have the first shots been fired at our jobs by reasoning AI agents?

AI image generated using Google ImageFX from a prompt “Create a digital painting depicting Paul Revere on his midnight ride, but instead of a person riding the horse it is a futuristic robotic AI agent yelling ‘The AI Agents are coming for your jobs!’”
AI image generated using Google ImageFX from the prompt “Create a painting depicting Paul Revere on his midnight ride, but instead of a person it is a robotic AI agent yelling ‘The AI Agents are coming for your jobs!’” https://labs.google/fx/tools/image-fx

What is an AI agent?

Search interest in AI agents spiked in January. Search “AI agents” and Google returns 216 results; reading through many of them, you’ll find probably half as many definitions. For simplicity, I will begin by quoting the Marketing AI Institute’s Paul Roetzer: “An AI agent takes action to achieve goals.”

That doesn’t sound scary. What’s driving interest and fear is adding the word autonomous. Roetzer and co-founder Mike Kaput have created a helpful Human-to-Machine Scale that depicts five levels of autonomous AI action, listed below.

Marketing AI Institute’s Human-to-Machine Scale:

  • Level 0 is all human.
  • Level 1 is mostly human.
  • Level 2 is half and half.
  • Level 3 is mostly machine.
  • Level 4 is all machine or full autonomy.
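
For readers who think in code, the scale maps neatly onto a simple enumeration. This sketch just restates the five levels above in Python; the example assignment at the end is my own illustration, not the Marketing AI Institute’s.

```python
from enum import IntEnum

class HumanToMachine(IntEnum):
    """Marketing AI Institute's Human-to-Machine Scale, levels 0-4."""
    ALL_HUMAN = 0       # Level 0: all human
    MOSTLY_HUMAN = 1    # Level 1: mostly human
    HALF_AND_HALF = 2   # Level 2: half and half
    MOSTLY_MACHINE = 3  # Level 3: mostly machine
    ALL_MACHINE = 4     # Level 4: all machine, full autonomy

# Example: a tutoring GPT that guides users but never acts on its own
# sits near the human end of the scale.
social_media_audit_gpt = HumanToMachine.MOSTLY_HUMAN
print(social_media_audit_gpt, int(social_media_audit_gpt))
```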

Full autonomy over complete jobs is certainly fear-inducing! Large language model companies like OpenAI and Google, along with SaaS companies integrating AI, are promising increased autonomous action.

Salesforce has even named its AI product Agentforce, which sounds like an army coming to take over our jobs! Put some red coats on them and my Paul Revere analogy really comes to life.

Every player in AI is going deep.

In September, Google released a white paper, “Agents,” to little attention. Now, after the release of reasoning models, everyone, including VentureBeat, is analyzing it. In the paper, Google predicts AI agents will reason, plan, and take action, including interacting with external systems, making decisions, and completing tasks: AI agents acting on their own with deeper understanding.

OpenAI claims its new tool Deep Research can complete a detailed research report with references in “tens of minutes,” something that might take a human many hours.

Google DeepMind also has Deep Research, Perplexity has launched Deep Research, Copilot now has Think Deeper, Grok 3 has a Deep Search tool, and there’s the new Chinese company DeepSeek. Anthropic has now released what it calls the first hybrid reasoning model: Claude 3.7 Sonnet can produce near-instant responses or extended step-by-step thinking that is made visible. The Redcoats are coming, and they’re all in on deep thinking.

Graphs of Google Trends search data showing an increase in search for AI Agents and Reasoning Models.
Interest in and discussion about AI agents and AI reasoning models have risen sharply. Graphs from https://trends.google.com/trends/

What is a reasoning model?

Google explains that Gemini 2.0 Flash Thinking is “our enhanced reasoning model, capable of showing its thoughts to improve performance and explainability.” A definition for reasoning models may be even more difficult and contested than one for AI agents: the term returns 163 results in a Google search and perhaps just as many definitions.

For my definition of a reasoning model, I turn to Christopher Penn. In his “Introduction to Reasoning AI Models,” Penn explains, “AI – language models in particular – perform better the more they talk … The statistical nature of a language model is that the more talking there is, the more relevant words there are to correctly guess the next word.” Reasoning models slow LLMs down, forcing them to work through a process that generates more of those relevant words before answering.

LLMs and reasoning models are not magic.

Penn further explains that good prompt engineering includes chain-of-thought, reflection, and reward functions. Yet most people don’t use them, so reasoning models make the LLM do it automatically. I went back to MIT, not for volleyball, but for further help with this definition. The MIT Technology Review explains that these new models use chain-of-thought reasoning and reinforcement learning across multiple steps.

An AI prompt framework, such as the one I created, will improve your results even without a reasoning model, and you may not need a reasoning model for many tasks; reasoning models cost more and use more energy. Experts like Trust Insights recommend slightly different prompting for reasoning models, such as Problem, Relevant Information, and Success Measures. Brooke Sellas of B Squared Media shared OpenAI president Greg Brockman’s reasoning prompt of Goal, Return Format, Warnings, and Context Dump.
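
As a concrete illustration, here is a minimal sketch of Brockman’s four-part structure sent to a reasoning model through the OpenAI Python SDK. The brand scenario and the model name are illustrative assumptions; substitute whichever reasoning model you have access to.

```python
from openai import OpenAI  # official OpenAI Python SDK

# A reasoning prompt following Goal, Return Format, Warnings, Context Dump.
# The brand scenario below is invented for illustration.
PROMPT = """
Goal: Assess how our brand's social media presence compares to two competitors.

Return Format: A short table of platforms with strengths, weaknesses, and
one recommendation each, followed by a three-sentence summary.

Warnings: Do not invent metrics. Flag any claim you cannot support from
the context provided.

Context Dump: We are a running-shoe brand active on Instagram, TikTok, and
YouTube. Engagement is strongest on Instagram Reels; TikTok follower growth
has stalled since November. Competitors post 3-5 times per week.
"""

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Reasoning models take the same chat input as other models; they simply
# spend more tokens working through intermediate steps before answering.
response = client.chat.completions.create(
    model="o3-mini",  # an OpenAI reasoning model, used here as an example
    messages=[{"role": "user", "content": PROMPT}],
)
print(response.choices[0].message.content)
```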

Many want a magical AI tool that does everything. In reality, different kinds of AI are better for different things. Penn explains that generative AI is good with language, but for other tasks, traditional forms of AI such as regression and classification, or even non-AI statistical models, can be a better solution.

How we talk about AI matters.

Humans are attracted to the magical capabilities of AI. Folk tales like The Sorcerer’s Apprentice, which you may know from Disney’s Fantasia, are about objects coming to life to do tasks for us. Reasoning models are said to have agentic behavior: the ability to make independent decisions in pursuit of a goal. Intentional or not, the word sounds like “angelic,” bringing up mystical thoughts of angels and the supernatural.

Since the first post in my AI series, I’ve argued for maintaining human agency and keeping humans in the loop. Therefore, I want to be careful in how I talk about these new “reasoning” models that show us their “thinking.” I agree with Marc Watkins’s recent Substack post “AI’s Illusion of Reason” that the way we talk about these AI models matters.

An AI model that pauses before answering and shows the process it followed doesn’t mean it is thinking. It’s still a mathematical prediction machine. It doesn’t comprehend or understand what it is saying. Referring to ChatGPT or Gemini as “it” rather than “he” or “she” (no matter the voice) matters.

Google Gemini 2.0 Flash Thinking
I asked Google’s reasoning model Gemini 2.0 Flash Thinking about the difference between human thinking and AI “thinking.” From https://aistudio.google.com/

What’s the difference between human and AI thinking?

I asked Google’s reasoning model Gemini 2.0 Flash Thinking about the difference between human thinking and AI thinking. It said, “AI can perform tasks without truly understanding the underlying concepts or the implications of its actions. It operates based on learned patterns, not genuine comprehension.” Does this raise any concerns for you as we move toward fully autonomous AI agents?

Humans need to stay in the loop. Even then, you need a human who truly understands the subject, context, field, and/or discipline. AI presents its answers in a convincing, well-written manner – even when it’s wrong. Human expertise and discernment are needed. Power without understanding can lead to Sorcerer’s Apprentice syndrome. A small mistake with an unchecked autonomous agent could escalate quickly.

In a Guardian article, Andrew Rogoyski, a director at the Institute for People-Centred AI, warns of people using AI deep research responses verbatim without checking what was produced. Rogoyski says, “There’s a fundamental problem with knowledge-intensive AIs and that is it’ll take a human many hours and a lot of work to check whether the machine’s analysis is good.”

Let’s make sure 2025 is not like 1984.

I recently got the 75th anniversary edition of George Orwell’s 1984. I hadn’t read it since high school. It was the inspiration behind Apple’s 1984 Super Bowl ad, an example of the right message at the right time. It may be a message we need again.

AI isn’t right all the time, and it isn’t right for everything. It’s confident and convincing even when it’s wrong. No matter how magical AI’s “thinking” seems, we must think on our own. As AI agents and reasoning models advance, discernment is needed, not unthinking acceptance.

The 250th anniversary of Paul Revere’s ride and the “shot heard ’round the world” is this April. Will AI agents and reasoning models bring a revolution in jobs in 2025? In my next post, “How Will AI Agents Impact Marketing Communications Jobs & Education? See Google’s AI Reasoning Model’s ‘Thoughts’ And My Own,” I take a deep dive into how AI may impact marketing and communications jobs and education. What’s your excitement or fear about AI agents and reasoning models?

This Was Human Created Content!