AI Prompt Framework: Improve Results With This Framework And Your Expertise [Template].

AI Prompt Framework Template with 1. Task/Goal 2. AI Persona 3. AI Audience 4. AI Task 5. AI Data 6. Evaluate Results.

This is the third post in a series of five on AI. In my last post, I gave examples of tasks I’d outsource to AI. How do you outsource them? Through prompt writing – a skill some call prompt engineering. Because large language models (LLMs) like ChatGPT, Claude, and Gemini are built for conversational prompting, it’s easy for anyone to use them. You don’t need to learn a coding language like Python or HTML, or a software interface like Excel or Photoshop. You just tell it.

Generative AI can produce remarkable results.

In an experiment, researchers found that consultants at Boston Consulting Group produced 40% higher quality work using GPT-4 (via Microsoft Bing), without specialized prompt training and without training the AI on any proprietary data. What mattered was the consultants’ expertise: knowing what to ask and how to evaluate the results.

AI expert Ethan Mollick compares working with large frontier LLMs to working with a smart intern. Sometimes they’re brilliant. Sometimes they don’t know what they don’t know. AI will even make things up to give you an answer. Mollick and other researchers call this the jagged frontier of AI. On some tasks, AI output is as good as or better than a human’s. On others, it can be worse or simply wrong.

Their research with Boston Consulting Group found that AI can excel at some tasks, both easy and difficult, while failing at others, both easy and difficult. Difficulty alone isn’t a predictor. One professor’s research found ChatGPT got difficult multiple-choice questions right but easy questions wrong. Testing and learning, grounded in expert knowledge, is the only way to find out. How do you explore this jagged AI frontier while improving results? I suggest a prompt framework like the one I created below.

AI Prompt Framework Template. Click the image to download a PDF of this AI Prompt Framework Template.

First, have a clear understanding of what you want.

Begin with the task and goal. Are you summarizing to learn about a topic for a meeting, generating text or an image for content, looking for suggestions to improve your writing, performing a calculation to save time, or creating something to be published? Defining the task and objective sets the stage for a successful prompt and output.

Second, give AI a perspective or identity as a persona.

LLMs are trained on vast amounts of broad data, which is what makes them so powerful. But this breadth can also produce output that’s too generic or simply not what you want. It helps to give the AI a perspective or identity, like a persona. Personas are used in marketing to describe a target audience; a persona is also the character an author assumes in a written work.

Third, explain the audience of the AI output.

Are you writing an email to your boss, creating copy for a social media post, preparing for a talk, or is the output just for you? You know how to adjust what you create based on what’s appropriate for the audience. AI can do a remarkable job at this if you give it the right direction.

Fourth, describe the specific task you want it to complete.

Err on the side of more detail rather than less. Consider the knowledge and context you would draw on to complete the task yourself. It’s like giving the smart intern directions: they’re smart, but they don’t have your experience and knowledge. More complicated tasks can require multiple steps. That’s fine; just tell the AI what to do first, second, third, and so on.

Fifth, add any additional data it may need.

Some tasks require data, such as a spreadsheet of numbers you want analyzed, a document you want summarized, or a specific stat, fact, or measurement. But before uploading proprietary data into an LLM, see my post on legal and ethical AI use. Recent research, the Systematic Survey of Prompting Techniques, also suggests adding positive and negative examples.

Sixth, evaluate output based on expectations and expertise.

Sometimes you get back what you want; other times you don’t. Then you need to clarify, ask again, or provide more details and data. Go back to earlier steps and tweak the prompt. Other times you get back something wrong or made up. If clarifying doesn’t work, you may have discovered a task AI is not good at. And sometimes you just want a rough start that you’ll modify yourself, keeping copyright and legal and ethical AI use in mind.
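You don’t need code to use this framework, but if you do like to script things, the framework’s parts translate naturally into a reusable template. Here’s a minimal sketch of my own; the function and field names are hypothetical illustrations, not part of the framework itself:

```python
def build_prompt(persona, audience, task, data=None, examples=None):
    """Assemble a prompt from the framework's parts (steps 2-5).

    Step 1 (task/goal) shapes what you put in each field, and
    step 6 (evaluating the output) happens after you read the result.
    """
    parts = [
        f"You are {persona}.",                   # 2. AI Persona
        f"Your target audience is {audience}.",  # 3. AI Audience
        task,                                    # 4. AI Task
    ]
    if data:                                     # 5. AI Data
        parts.append(f"Use this information: {data}")
    if examples:                                 # positive/negative examples
        parts.append(f"Examples to follow or avoid: {examples}")
    return " ".join(parts)

# The Saucony prompt from the experiment below could be assembled as:
prompt = build_prompt(
    persona="a social media manager for Saucony running shoes",
    audience="34-55-year-old males who run marathons",
    task=("Which influencers would you recommend for Saucony "
          "to appeal to and engage this target audience?"),
)
```

The point isn’t the code; it’s that filling in each slot forces you to supply the context the “smart intern” lacks.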

A prompt experiment with and without the framework.

I’ve been testing the framework, and it has improved results. In one test, I used GPT-4 via Copilot to see if it could recommend influencers for a specific brand – Saucony running shoes. First, I didn’t use the framework and asked a simple question.

  • “Recommend influencers for 34-55-year-old males who like to run marathons.”

It recommended Cristiano Ronaldo, Leo Messi, and Stanley Tucci. Hopefully, you can see why these are not a good fit. I ran the same prompt again, and it recommended Usain Bolt. Bolt is a runner, but he’s known for track sprinting, not marathons.

Generated with AI (Copilot) ∙ June 28, 2024 at 4:30 PM

I tried to be more direct, changing the prompt to “34-55-year-old males who run marathons.” For some reason, dropping the “like” started giving me older bodybuilders. I wouldn’t describe marathon runners as “shredded,” the way one influencer described himself.

I tried again with “34-54-year-old males known for their involvement in marathons.” This gave me a random list of people, including Alex Moe (@themacrobarista), a Starbucks barista. As far as I can tell, Moe doesn’t run marathons, and his Instagram feed is full of swirling creamer pours.

Finally, I tried the prompt framework.

  • “You are a social media manager for Saucony running shoes. (Persona) Your target audience is 34-55-year-old males who run marathons. (Audience) Which influencers would you recommend for Saucony to appeal to and engage this target audience? (Task)”

This prompt gave me much better results, including Dorothy Beal (@mileposts), who has run 46 marathons and created the I RUN THIS BODY movement. Her Instagram feed is full of running images. Copilot still recommended Usain Bolt with the framework, but the other four recommendations were much better than a soccer star, bodybuilder, or barista.

Generated with AI (Copilot) ∙ June 28, 2024 at 4:35 PM

I tried adding data to the prompt with “Limit your suggestions to macro-influencers who have between 100,000 to 1 million followers.” (Data) The response didn’t give suggestions, saying, “as an AI, I don’t have access to social media platforms or databases that would allow me to provide a list of specific influencers who meet your criteria.” That’s okay, because the more precise framework prompt had already given me more relevant macro-influencers anyway.

Alternatively, I added positive and negative examples, appending to the prompt “Don’t provide influencers like Cristiano Ronaldo or Usain Bolt, but more like Dorothy Beal or Dean Karnazes.” (Data) This time I received a list of eight influencers, all of whom would have potential for this brand and audience.

Generated with AI (Copilot) ∙ July 27, 2024 at 11:35 PM

You don’t need to be a prompt engineer to explore.

Experts in various fields are finding frameworks that work best for their needs. Christopher S. Penn suggests the PARE prompt framework (prime, augment, refresh, evaluate). Prompt writing can also get more advanced to maximize efficiency, and prompt engineers are building prompt libraries of common tasks.

But for most people, your job will not turn into prompt engineering. We need discipline experts to test the best uses of AI in their specific roles. Over time you’ll develop knowledge of how to prompt AI for your profession and which LLMs are better at each task. Penn suggests creating your own prompt library. You’ll gain marketable skills as you explore the jagged frontier of AI for tasks unique to your industry.

LLM makers are already introducing AI tools to improve prompts. Anthropic Console takes your goal and generates the Claude prompt for you. Microsoft is adding Copilot AI features that improve prompts as you write, promising to turn anyone into a prompt engineer. And Apple Intelligence is coming, running efficient, task-focused AI agents integrated into Apple apps.

In the article The Rise and Fall of Prompt Engineering, tech writer Nahla Davies argues that even the best prompt engineers aren’t really “engineers”: prompts are just that – single tasks that, in most cases, rely on previous expertise in a niche. The Systematic Survey of Prompting Techniques likewise finds that prompt engineering must be done in collaboration with domain experts who know how they want the computer to behave and why.

Thus, we don’t need everyone to be prompt engineers. We need discipline experts who have AI skills. In my next post, I’ll explore the challenges of teaching students to be discipline experts with AI.

This Was Human Created Content!

Artificial Intelligence Use: A Framework For Determining What Tasks to Outsource To AI [Template].

AI Framework Template for AI Use that includes 1. Task/Goal 2. AI Function 3. Level of Thinking 4. Legal/Ethical 5. Outsource to AI?

This is the first post in a series of five on AI. With any new technology, there are benefits and unintended consequences. Often the negative outcomes happen without thought or planning. We get caught up in the “new shiny object,” mesmerized by its “magical capabilities.” That happened with social media. We can’t go back on that technology, but we are in the early stages of AI. In WIRED, Rachel Botsman called for frameworks to do more to avoid the negative consequences of tech developments.

Before jumping all in, ask, “What role should AI play in our tasks?”

Just because AI can do something doesn’t mean it does it well or that it should do it at all. AI’s capabilities are both exciting and frightening, causing some to go all in and others to stay all out. Being strategic takes more nuance. Be intentional about planning the role AI could, and should, play in your job or business with the AI Use Template below.
AI Framework Template for AI Use
Click the image to download a PDF template.

First, make a list of common tasks and the goal of each.

List tasks you perform in your job, on client projects, or in daily business operations. Then describe the goal of each task. Understanding the goal helps determine the human versus AI value in it. If the goal is to build a personal relationship with a customer or client, outsourcing to AI may save time but undermine the task’s objective.

Recently, a university outsourced its commencement speech to an AI robot. Students started an unsuccessful petition for a speaker who could offer a “human connection.” The AI robot’s speech was described as weird and unmoving. Without any personal anecdotes, The Chronicle of Higher Education reports, “Sophia … delivered an amalgamation of lessons taken from other commencement speakers.”

Second, determine which type of AI Function each task requires.

The six AI functions (Generate, Extract, Summarize, Rewrite, Classify, Answer Questions) are modified from Christopher S. Penn’s AI Use Case Categories. Can the task be performed by one or more of these functions? If yes, you still want to consider how well AI performs that function compared to a human, and what benefits might be lost by outsourcing it.

In my ad career, clients often asked why a certain phrase or benefit was in the ad copy or script. Because I wrote it, I could explain it. It might have been human insight from research (which AI can summarize), truths from lived experience, or conversations with customers. If AI wrote the copy or script, that insight may be missing, and I wouldn’t know why the AI wrote what it did. If you ask AI, it often doesn’t know either. Scientists call this the “unknowability” of how AI works.

Third, categorize the level of thinking each task entails.

The six levels of thinking (Remember, Understand, Apply, Analyze, Evaluate, Create) are modified from Oregon State’s Bloom’s Taxonomy Revisited. Bloom’s Taxonomy categorizes levels of thinking in the learning process; it was revisited to consider AI’s role. For each task, determine its level of thinking and weigh AI’s capabilities against distinctive human skills at that level.

I had a student create a situation analysis of Spotify with ChatGPT. It was good at extracting information, summarizing, and suggesting alternatives (AI Capabilities of the Create Level). It wasn’t good at “Formulating original solutions, incorporating human judgment, and collaborating spontaneously” (Create Level Distinctive Human Skills). GPT’s recommendations lacked the nuanced understanding I’d expect from professionals or students.

Fourth, review the legal and ethical issues of outsourcing to AI.

Does the task require uploading copyrighted material? Are you able to copyright the output (copy/images) to sell to a client or protect it from competitor use? Does your employer or client permit using AI in this way? Are you sharing private or proprietary data (IP)? What’s the human impact? For some people, AI will take over certain tasks. For others, it could take their entire job.

Many companies are adding AI restrictions to contracts for agency partners. Samsung and others are restricting certain AI use by employees. There’s concern about performance or customer data uploaded into AI systems training a model competitors could use. Some agencies and companies are developing closed AI instead of open AI, running models locally and storing data on local rather than cloud servers. For a summary of the main AI legal concerns, see “The real costs of ChatGPT” by Mintz.

Fifth, employ human agency to produce desirable results.

We shouldn’t be resigned to undesirable outcomes just because AI change is complex and happening quickly. Penn’s TRIPS Framework for AI Outsourcing includes “pleasantness.” Tasks that are more Time-consuming and Repetitive, less Important and less Pleasant, and that have Sufficient data are better candidates for AI. Don’t give away your human agency. Decide on your own, or influence others, to save the good stuff for yourself.
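As a toy illustration of how the TRIPS criteria could be turned into a rough ranking (this is my own sketch, not Penn’s method, and the task names and scores are made up):

```python
# Rate each TRIPS criterion from 1 to 5; higher totals suggest better
# candidates for AI. Important and Pleasant are reversed, since LESS
# important and LESS pleasant tasks are the better ones to hand off.
def trips_score(time_consuming, repetitive, important, pleasant, sufficient_data):
    return (time_consuming + repetitive
            + (6 - important) + (6 - pleasant)
            + sufficient_data)

# Hypothetical tasks with hypothetical ratings:
tasks = {
    "Weekly status report": trips_score(4, 5, 2, 1, 5),
    "Client relationship call": trips_score(3, 2, 5, 4, 2),
}

# Rank tasks from best to worst AI candidate.
ranked = sorted(tasks, key=tasks.get, reverse=True)
```

Here the status report scores high (time-consuming, repetitive, unpleasant, well documented) while the relationship call scores low, which matches the framework’s advice to keep relationship-building human.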

A post on X (Twitter) by author Joanna Maciejewska struck a nerve and went viral: “You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” She later clarified it wasn’t about actual laundry robots: “it’s about wishing that AI focused on taking away those tasks we hate and don’t enjoy instead of trying to take away what we love to do and what makes us human.”

Marketers are getting the message. In a survey of CMOs, most reported using AI for draft copy and images that are then refined by humans. And over 70% are concerned about AI’s impact on creativity and brand voice.

It’s easy to get overwhelmed and afraid of the AI future.

As tech leaders sprint ahead in an AI arms race and regulators lag woefully behind, the rest of us shouldn’t sit back and wait for our world to change. Unlike with the Internet and social media, let’s be more intentional. Don’t fall prey to the Tradeoff Fallacy: believing that to gain the benefits of AI we must give everything away.

In Co-Intelligence, Ethan Mollick says it’s important to keep the human in the loop. It’s not all or nothing. Some warn of a future in which we have no choice about the role AI plays in our lives. That future isn’t here yet. Today we can still choose how to use AI in our professional, educational, and personal lives.

You know your job best, but if you want help brainstorming tasks to outsource to AI, Paul Roetzer and SmarterX have created a custom GPT. Visit JobsGPT and enter a job title or description. It uses AI to break the job down into tasks, estimates the AI impact and time saved, and provides a rationale.

Advocate for a pilot program if your employer is AI hesitant.

Some companies are holding employees back from AI use out of fear, and some early adopters are failing to see the value of AI. The CIO of Chevron recently said, “the jury is still out on whether it’s helpful enough to staff to justify the cost.” If you find yourself in a company or organization that either doesn’t allow AI or is skeptical of paying for a Copilot or ChatGPT license ($20 or $30 per user per month), Paul Roetzer of the Marketing AI Institute suggests a 90-day pilot program.

Advocate to be part of a pilot program in which small groups of employees test AI use cases for three months. Use this AI task framework to discover the three to five most valuable. Track the time you spend on each task before and after using the AI. Add up the hours saved each month and multiply by your actual or estimated hourly rate. If the result is more than the $30 license, you’ve justified the cost. You’ve also become more valuable, since you can train other employees in these tasks. Christopher Penn offers a more detailed method to calculate the ROI of AI.
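The back-of-the-envelope math is simple. A quick sketch with hypothetical numbers (my figures, not Roetzer’s or Penn’s):

```python
# Hypothetical pilot-program numbers to illustrate the ROI math.
hours_saved_per_month = 6   # tracked before/after across your 3-5 tasks
hourly_rate = 40            # actual or estimated $/hour
license_cost = 30           # e.g., a $30/user/month license

monthly_value = hours_saved_per_month * hourly_rate   # dollar value of time saved
justified = monthly_value > license_cost              # does the value exceed the cost?
roi = (monthly_value - license_cost) / license_cost   # return per dollar spent
```

At these made-up numbers, six hours saved is worth $240 a month against a $30 license, an easy case to make.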

What keeps me hopeful is breaking my job down into tasks and making intentional decisions about what to outsource to AI. Then I can see the time savings that let me focus on higher-value aspects of my job. Using this framework lets me get excited about the possibilities of AI taking over my least favorite or most time-consuming tasks. In my next post, I’ll give some specific examples using this framework.
