AI Prompt Framework: Improve Results With This Framework And Your Expertise [Template].


This is the third post in a series of five on AI. In my last post, I gave examples of tasks I’d outsource to AI. How do you outsource them? Through prompt writing – a skill some call prompt engineering. Because large language models (LLMs) like ChatGPT, Claude, and Gemini are based on conversational prompting, it’s easy for anyone to use them. You don’t need to learn a coding language like Python or HTML or a software interface like Excel or Photoshop. You just tell it what to do.

Generative AI can produce remarkable results.

In an experiment, researchers found that consultants at Boston Consulting Group produced 40% higher-quality work using GPT-4 (via Microsoft Bing) without specialized prompt training and without training the AI on any proprietary data. What mattered was the consultants’ expertise: knowing what to ask and how to evaluate the results.

AI expert Ethan Mollick compares working with large frontier LLMs to working with a smart intern. Sometimes they’re brilliant. Sometimes they don’t know what they don’t know. AI will even make things up to give you an answer. Mollick and other researchers call this the jagged frontier of AI. In some tasks, AI output is as good as or better than humans’. In others, it can be worse or wrong.

Their research with Boston Consulting Group found AI can be good at some easy or difficult tasks while being worse at other easy or difficult tasks. Neither difficulty level nor task type is a predictor. Testing and learning, grounded in expert knowledge, is the way to find out. How do you explore this jagged AI frontier while improving results? I suggest a prompt framework like the one I created below.

AI Prompt Framework Template. Click the image to download a PDF of this AI Prompt Framework Template.

First, have a clear understanding of what you want.

Begin with the task and goal. Are you summarizing to learn about a topic for a meeting, generating text or an image for content, looking for suggestions to improve your writing, performing a calculation to save time, or creating something to be published? Defining the task and objective sets the stage for a successful prompt and output.

Second, give AI a perspective or identity as a persona.

LLMs are trained on vast amounts of broad data, which makes them so powerful. This can also produce output that’s too generic or simply not what you want. It helps to give AI a perspective or identity like a persona. Personas are used in marketing to describe a target audience. Persona is also the character an author assumes in a written work.

Third, explain the audience of the AI output.

Are you writing an email to your boss, creating copy for a social media post, preparing for a talk, or is the output just for you? You know how to adjust what you create based on what’s appropriate for the audience. AI can do a remarkable job at this if you give it the right direction.

Fourth, describe the specific task you want it to complete.

Err on the side of more detail than less. Consider things you know in your mind that you would use in completing the task. It’s like giving the smart intern directions. They’re smart but don’t have the experience and knowledge you do. More complicated tasks can require multiple steps. That’s fine, just tell AI what to do first, second, third, etc.

Fifth, add any additional data it may need.

Some tasks require data such as a spreadsheet of numbers you want to analyze, a document you want summarized, or a specific stat, fact, or measurement. But before uploading proprietary data into an LLM, see my post considering legal and ethical AI use.

Sixth, evaluate output based on expectations and expertise.

Sometimes you get back what you want and other times you don’t. Then you need to clarify, ask again, or provide more details and data. Go back to earlier steps and tweak the prompt. Other times you get back something wrong or made up. If clarifying doesn’t work, you may have discovered a task AI is not good at. Sometimes you just want a rough draft or outline you’ll modify heavily, keeping copyright in mind for legal and ethical AI use.
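As a rough illustration, the six steps above can be captured in a small helper that assembles a prompt from its parts. This is my own sketch for thinking about the framework, not part of any LLM product or API; the function and parameter names are hypothetical:

```python
def build_prompt(persona, audience, task, data=None):
    """Assemble a prompt from the framework's parts:
    persona, audience, task, and optional data."""
    parts = [
        f"You are {persona}.",                   # persona: the perspective the AI takes
        f"Your target audience is {audience}.",  # audience: who the output is for
        task,                                    # task: the specific, detailed request
    ]
    if data:
        # data: extra facts, numbers, or constraints the task needs
        parts.append(f"Use the following data: {data}")
    return " ".join(parts)

# Example: the Saucony prompt from the experiment below
prompt = build_prompt(
    persona="a social media manager for Saucony running shoes",
    audience="34-55-year-old males who run marathons",
    task="Which influencers would you recommend for Saucony "
         "to appeal to and engage this target audience?",
)
```

You would then paste (or send) the assembled string to the LLM, and the sixth step, evaluating the output against your expertise, still happens on your side of the conversation.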

A prompt experiment with and without the framework.

I’ve been testing the framework, and it has improved results. In one experiment, I used GPT-4 Turbo via Microsoft Copilot to see if it could recommend influencers for a specific brand – Saucony running shoes. First, I didn’t use the framework and asked a simple question.

  • “Recommend influencers for 34-55-year-old males who like to run marathons.”

It recommended Cristiano Ronaldo, Leo Messi, and Stanley Tucci. Hopefully, you know they wouldn’t be a good fit. I ran the same prompt again and it recommended Usain Bolt. Bolt is a runner, but he’s known for track sprinting, not marathons.

Generated with AI (Copilot) ∙ June 28, 2024 at 4:30 PM

I tried to be more direct, changing the prompt to “34-55-year-old males who run marathons.” For some reason, dropping the “like” started giving me older bodybuilders. I wouldn’t describe marathon runners as “shredded” the way one influencer described himself.

I tried again with “34-54-year-old males known for their involvement in marathons.” This gave me a random list of people, including Alex Moe (@themacrobarista), a Starbucks barista. As far as I can tell, Moe doesn’t run marathons, and his Instagram feed is full of swirling creamer pours.

Finally, I tried my prompt framework.

  • “You are a social media manager for Saucony running shoes. (Persona) Your target audience is 34-55-year-old males who run marathons. (Audience) Which influencers would you recommend for Saucony to appeal to and engage this target audience? (Task)”

This prompt gave me better results, including Dorothy Beal (@mileposts), who has run 46 marathons and created the I RUN THIS BODY movement. Her Instagram feed is full of images of running. Even with the framework, Copilot still recommended Usain Bolt, but the other four recommendations were much better than a soccer star, bodybuilder, or barista.

Generated with AI (Copilot) ∙ June 28, 2024 at 4:35 PM

I tried to add data to the prompt with “Limit your suggestions to macro-influencers who have between 100,000 to 1 million followers.” (Data) The response didn’t give me any suggestions because “as an AI, I don’t have access to social media platforms or databases that would allow me to provide a list of specific influencers who meet your criteria.” Even so, the more precise prompt still surfaced more relevant macro-influencers.

You don’t need to be a prompt engineer to explore.

Experts in various fields are finding frameworks that work best for their needs. Christopher S. Penn suggests the prompt framework PARE (prime, augment, refresh, evaluate). Prompt writing can also be more advanced to maximize efficiency. Prompt engineers are working on creating prompt libraries of common tasks.

But I believe your job is to best use AI with your subject matter expertise. Over time you’ll build knowledge of prompting AI for your discipline and which LLMs are better at which tasks. Penn suggests creating your own prompt library. You’ll gain marketable skills as you explore the jagged frontier of AI for tasks unique to your industry.

LLM providers are already introducing AI tools to improve prompts. Anthropic Console takes your goal and generates the Claude prompt for you. Microsoft is adding Copilot AI features to improve prompts as you write, promising to turn anyone into a prompt engineer. And Apple Intelligence is coming, promising efficient, task-specific AI agents integrated into Apple apps.

In the article “The Rise and Fall of Prompt Engineering,” tech writer Nahla Davies says, “Even the best prompt engineers aren’t really ‘engineers.’ But at the end of the day, they’re just that–single tasks that, in most cases, rely on previous expertise in a niche.”

We don’t need everyone to be prompt engineers. We need discipline experts who have AI skills. In my next post, I’ll explore the challenges of teaching students to be discipline experts with AI.

This Was Human Created Content!
