More Than Prompt Engineers: Careers with AI Require Subject Matter Expertise.

This graphic shows the stages of learning: attention, encoding, storage, and retrieval. Your brain needs to work through this process itself, not just hand the process to AI.

This is the fourth post in a series of five on AI. In my last post, I proposed a framework for AI prompt writing. But before you can follow a prompt framework, you need to know what to ask and how to evaluate the response. This is where subject matter expertise and critical thinking skills come in. It's a key reason we need to keep humans in the loop when working with large language models (LLMs) like ChatGPT (Copilot), Gemini, Claude, and Llama.

Photo by Shopify Partners from Burst

Will we all be prompt engineers?

Prompt engineering is promoted as the hot new high-paying career. Learning AI prompt techniques is important, but it doesn't replace being a subject matter expert. The key to a good prompt is more than format. As I described in my post on AI prompts, you must know how to describe the situation, perspective, audience, and what data to use. The way a marketer or manager will use AI is different from how an accountant or engineer will.

You also must know enough to judge AI output, whether it's information, analysis, writing, or a visual. If prompt engineers don't have subject knowledge, they won't know what AI got right, what it got wrong, and what is too generic. AI is not good at every task, mixing generic and wrong responses in with the right ones. Citing hallucination rates of 15% to 20% for ChatGPT, former marketing manager Maryna Bilan says AI integration is a significant challenge for professionals, one that risks a company's reputation.

AI expert Christopher S. Penn says, “Subject matter expertise and human review still matter a great deal. To the untrained eye, … responses might look fine, but for anyone in the field, they would recognize responses as deeply deficient.” Marc Watkins of the Mississippi AI Institute says AI is best with “trained subject matter experts using a tool to augment their existing skills.” And Marketing AI Institute’s Paul Roetzer says, “AI can’t shortcut becoming an expert at something.”

Prompt engineering skills are not enough.

For me as a college professor, this means my students still need to do the hard work of learning the subject and discipline on their own. But their social feeds are full of AI influencers promising learning shortcuts and easy A’s without listening to a lecture or writing an essay. Yet skipping the reading and having GPT take lecture notes, answer quiz questions, and write your report is not the way to get knowledge into your memory.

Some argue that ChatGPT is like a calculator. Yes and no. This author explains, “Calculators automate a . . . mundane task for people who understand the principle of how that task works. With Generative AI I don’t need to understand how it works, or even the subject I’m pretending to have studied, to create an impression of knowledge.”

My major assignments are applied business strategies. I tell students if they enter my assignment prompt into ChatGPT and it writes the report for them then they’ve written themselves out of a job. Why would a company hire them when they could enter the prompt themselves? That doesn’t mean AI has no place. I’ve written about outsourcing specific tasks to AI in a professional field, but you can’t outsource the base discipline knowledge learning.

AI can assist learning or get in the way.

I know how to keep humans in the loop in my discipline, but I can’t teach students if they outsource all their learning to AI. Old-fashioned reading, annotating, summarizing, writing, in-person discussion, and testing remain important. Once students have the base knowledge, we can explore ways to use generative AI to supplement and shortcut tasks, not skip learning altogether. We learn through memory, and scientists have studied how memory works. Used the wrong way, AI can skip all stages of learning.

Click the image for a downloadable PDF of this graphic.

I remember what it was like being a student. It’s very tempting to take the second path in the graphic above, the easiest path to an A and a degree. But that can lead to an over-reliance on technology, no real discipline knowledge, and a lack of critical thinking skills. The tool becomes a crutch for something I never learned how to do on my own. My performance depends on AI's performance, and I lack the discernment to know how well it performed.

Research skills in searching databases, evaluating information, citing sources, and avoiding plagiarism are needed to discern AI output. The AI search engine Perplexity promised reliable answers with complete sources and citations, but a recent article in WIRED found that it makes things up, and Forbes accused it of plagiarizing its content.

A pitch from OpenAI selling ChatGPT Edu says, “Undergraduates and MBA students in Professor Ethan Mollick’s courses at Wharton completed their final reflection assignments through discussions with a GPT trained on course materials, reporting that ChatGPT got them to think more deeply about what they’ve learned.” This only works if the students do the reading and reflection assignments themselves first.

Outsourcing an entire assignment to AI doesn’t work.

A skill I teach is situation analysis. It’s a foundation for any marketing strategy or marketing communications (traditional, digital, or social) plan. Effective marketing recommendations aren’t possible without understanding the business context and objective. The result of that situation analysis is writing a relevant marketing objective.

As a test, I asked ChatGPT (via Copilot) to write a marketing objective for Saucony that follows SMART (Specific, Measurable, Achievable, Relevant, Time-bound) guidelines. It recommended boosting online sales by targeting fitness enthusiasts with social media influencers. I asked again, and it suggested increasing online sales of trail running shoes among outdoor enthusiasts 18-35 using social media and email.

Then I asked it to write 20 more and it did. Options varied: focusing on eco-friendly running shoes for Millennials and Gen Z, increasing customer retention with a loyalty program, expanding into Europe, increasing retail locations, developing a new line of women’s running shoes, and increasing Saucony’s share of voice with a PR campaign highlighting the brand’s unique selling propositions (USP). It didn’t tell me what those USPs were.

Which one is the right answer? The human in the loop would know based on their expertise and knowledge of the specific situation. Generated with AI (Copilot) ∙ July 2, 2024 at 3:30 PM

I asked Copilot which is best. It said, “The best objectives would depend on Saucony’s specific business goals, resources, and market conditions. It’s always important to tailor the objectives to the specific context of the business. As an AI, I don’t have personal opinions. I recommend discussing these objectives with your team to determine which one is most suitable for your current needs.” If students outsource all their learning to LLMs, how could they have that conversation?

To get a more relevant objective, I could upload proprietary data like market reports and client data and have AI summarize it. But uploading Mintel reports into LLMs violates licensing agreements, and many companies restrict this as well. Even if I work for a company that has built an internal AI system on proprietary data, its output can't be fully trusted. Ethan Mollick has warned that companies building talk-to-your-documents RAG systems with AI need to test the final LLM output, as it can produce many errors.

I need to be an expert to test LLM output in open and closed systems. Even then, I’m not confident I could come up with truly unique solutions based on human insight if I didn’t engage with the information on my own. Could I answer client questions in an in-person meeting armed only with a brief review of AI-generated summaries and recommendations?

AI as an assistant to complete assignments can work.

For the situation analysis assignment, I want students to know the business context and form their own opinions. That’s the only way they’ll learn to become subject matter experts. Instead of outsourcing the entire assignment, AI can act as a tutor. Students often struggle with the concept of a SMART marketing objective. I get a lot of wrong formats no matter how I explain it.

I asked GPT whether statements were marketing objectives that followed SMART guidelines. I fed it right and wrong statements. It got them all correct. It also did an excellent job of explaining why each statement did or did not adhere to SMART guidelines. Penn suggests “explain it to me” prompts: tell the LLM it is an expert in a specific topic you don’t understand and ask it to explain that topic in terms of something you do understand. This is using AI to help you become an expert versus outsourcing your expertise to AI.

ChatGPT can talk but can it network?

Last spring I attended a professional business event. We have a new American Marketing Association chapter in our area, and they had a mixer. It was a great networking opportunity. Several students from our marketing club were there mingling with the professionals. Afterward, a couple of the professionals told me how impressed they were with our students.

These were seniors and juniors. They had a lot of learning under their belts before ChatGPT came along. I worry about the younger students. If they see AI as a way to outsource the hard work of learning, how would they do? Could they talk extemporaneously at a networking event, interview, or meeting?

Will students learn with the new AI tools that summarize reading, transcribe lectures, answer quiz questions, and write assignments? Or will they become subject matter experts who have used AI Task Frameworks and AI Prompt Frameworks to discern the beneficial uses of AI, making them assets to hire? In my next post, the final in this five-part AI series, I share a story that inspired this AI research and explore how AI can distract from opportunities for learning and human connection.

This Was Human Created Content!

AI Prompt Framework: Improve Results With This Framework And Your Expertise [Template].

AI Prompt Framework Template with 1. Task/Goal 2. AI Persona 3. AI Audience 4. AI Task 5. AI Data 6. Evaluate Results.

This is the third post in a series of five on AI. In my last post, I gave examples of tasks I’d outsource to AI. How do you outsource them? Through prompt writing, a skill some call prompt engineering. Because large language models (LLMs) like ChatGPT, Claude, and Gemini are based on conversational prompting, it’s easy for anyone to use them. You don’t need to learn a coding language like Python or HTML or a software interface like Excel or Photoshop. You just tell it what you want.

Generative AI can produce remarkable results.

In an experiment, researchers found that consultants at Boston Consulting Group produced 40% higher-quality work using GPT-4 (via Microsoft Bing) without specialized prompt training and without training the AI on any proprietary data. What mattered was the consultants’ expertise: knowing what to ask and how to evaluate the results.

AI expert Ethan Mollick describes working with large frontier LLMs as like working with a smart intern. Sometimes they’re brilliant. Sometimes they don’t know what they don’t know. AI will even make things up to give you an answer. Mollick and other researchers call this the jagged frontier of AI. In some tasks, AI output is as good as or better than humans’. In others, it can be worse or wrong.

Their research with Boston Consulting Group found AI can be good at some easy or difficult tasks while being worse at other easy or difficult tasks. Difficulty level or task type isn’t a predictor. One professor’s research found ChatGPT got difficult multiple-choice questions right but got easy questions wrong. Testing and learning based on expert knowledge is the only way to know. How do you explore this jagged AI frontier while improving results? With a prompt framework like the one I created below.

AI Prompt Framework Template. Click the image to download a PDF of this AI Prompt Framework Template.

First, have a clear understanding of what you want.

Begin with the task and goal. Are you summarizing to learn about a topic for a meeting, generating text or an image for content, looking for suggestions to improve your writing, performing a calculation to save time, or creating something to be published? Defining the task and objective sets the stage for a successful prompt and output.

Second, give AI a perspective or identity as a persona.

LLMs are trained on vast amounts of broad data, which makes them so powerful. This can also produce output that’s too generic or simply not what you want. It helps to give AI a perspective or identity like a persona. Personas are used in marketing to describe a target audience. Persona is also the character an author assumes in a written work.

Third, describe to AI the audience for your output.

Are you writing an email to your boss, creating copy for a social media post, preparing for a talk, or is the output just for you? You know how to adjust what you create based on what’s appropriate for an audience. AI can do a remarkable job at this if you give it the right direction.

Fourth, describe the specific task you want it to complete.

Err on the side of more detail than less. Consider things you know in your mind that you would use in completing the task. It’s like giving the smart intern directions. They’re smart but don’t have the experience and knowledge you do. More complicated tasks can require multiple steps. That’s fine, just tell AI what to do first, second, third, etc.

Fifth, add any additional data it may need.

Some tasks require data such as a spreadsheet of numbers you want to analyze, a document you want summarized, or a specific stat, fact, or measurement. But before uploading proprietary data into an LLM see my post considering legal and ethical AI use. Recent research, Systematic Survey of Prompting Techniques, also suggests adding positive and negative examples – “like this not like that.”

Sixth, evaluate output based on expectations and expertise.

Sometimes you get back what you want; other times you don’t. Then you need to clarify, ask again, or provide more details and data. Go back to earlier steps, tweaking the prompt. Other times you get back something wrong or made up. If clarifying doesn’t work, you may have discovered a task AI is not good at. And sometimes you just wanted a rough start that you’ll modify, considering copyright for legal and ethical AI use.
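The six framework steps can be sketched as simple string assembly. This is a minimal illustration of my own, not an official tool; the `build_prompt` helper and its example values are hypothetical:

```python
def build_prompt(goal, persona, audience, task, data=None):
    """Assemble a prompt from the framework's components.

    goal     -- 1. Task/Goal: what you want and why
    persona  -- 2. AI Persona: the perspective the AI should take
    audience -- 3. AI Audience: who the output is for
    task     -- 4. AI Task: the specific steps to complete
    data     -- 5. AI Data: optional facts, stats, or examples
    (Step 6, Evaluate Results, happens after you read the output.)
    """
    parts = [
        f"Goal: {goal}",
        f"You are {persona}.",
        f"The audience is {audience}.",
        f"Task: {task}",
    ]
    if data:
        parts.append(f"Use this data: {data}")
    return "\n".join(parts)

prompt = build_prompt(
    goal="draft an influencer shortlist for a campaign brief",
    persona="a social media manager for Saucony running shoes",
    audience="34-55-year-old males who run marathons",
    task="recommend influencers to appeal to and engage this audience",
    data="limit suggestions to macro-influencers (100,000 to 1 million followers)",
)
print(prompt)
```

The point isn't the code; it's that each component forces you to supply context the LLM would otherwise guess at.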

A prompt experiment with and without the framework.

I’ve been testing the framework and it has improved results. In one test I used GPT-4 via Copilot to see if it could recommend influencers for a specific brand – Saucony running shoes. First I didn’t use the framework and asked a simple question.

  • “Recommend influencers for 34-55-year-old males who like to run marathons.”

It recommended Cristiano Ronaldo, Leo Messi, and Stanley Tucci. Hopefully, you understand why these are not a good fit. I ran the same prompt again and it recommended Usain Bolt. Bolt is a runner, but known for track sprinting not marathons.

Generated with AI (Copilot) ∙ June 28, 2024 at 4:30 PM

I tried to be more direct, changing the prompt to “34-55-year-old males who run marathons.” For some reason, dropping the “like” started giving me older bodybuilders. I wouldn’t describe marathon runners as “shredded,” the way one influencer described himself.

I tried again with “34-54-year-old males known for their involvement in marathons.” This gave me a random list of people, including Alex Moe (@themacrobarista), a Starbucks barista. As far as I can tell, Moe doesn’t run marathons, and his Instagram feed is full of swirling creamer pours.

Finally, I tried the prompt framework.

  • “You are a social media manager for Saucony running shoes. (Persona) Your target audience is 34-55-year-old males who run marathons. (Audience) Which influencers would you recommend for Saucony to appeal to and engage this target audience? (Task)”

This prompt gave me better results including Dorothy Beal (@mileposts) who has run 46 marathons and created the I RUN THIS BODY movement. Her Instagram feed is full of images of running. Copilot still recommended Usain Bolt following the framework, but the other four recommendations were much better than a soccer star, bodybuilder, or barista.

Generated with AI (Copilot) ∙ June 28, 2024 at 4:35 PM

I tried to add data to the prompt with “Limit your suggestions to macro-influencers who have between 100,000 to 1 million followers.” (Data) The response didn’t give suggestions saying “as an AI, I don’t have access to social media platforms or databases that would allow me to provide a list of specific influencers who meet your criteria.” That’s okay because the more precise prompt gave me more relevant macro-influencers anyway.

Alternatively, I added positive and negative examples, trying again with “Don’t provide influencers like Cristiano Ronaldo or Usain Bolt, but more like Dorothy Beal or Dean Karnazes.” (Data) This time I received a list of eight influencers, all of whom had potential for this brand and audience.
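The “like this, not like that” suggestion from the Systematic Survey of Prompting Techniques amounts to appending positive and negative exemplars to your prompt. A minimal sketch, with a hypothetical `with_examples` helper of my own (the names come from the experiment above):

```python
def with_examples(prompt, like=None, not_like=None):
    """Append negative ('not like') and positive ('more like') examples
    to a prompt, per the 'like this, not like that' suggestion."""
    lines = [prompt]
    if not_like:
        lines.append("Don't provide influencers like " + " or ".join(not_like) + ",")
    if like:
        lines.append("but more like " + " or ".join(like) + ".")
    return "\n".join(lines)

refined = with_examples(
    "Which influencers would you recommend for Saucony?",
    like=["Dorothy Beal", "Dean Karnazes"],
    not_like=["Cristiano Ronaldo", "Usain Bolt"],
)
print(refined)
```

Note that choosing good exemplars is itself a subject-matter-expertise task: you have to know why Beal fits and Ronaldo doesn't.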

Generated with AI (Copilot) ∙ July 27, 2024 at 11:35 PM

You don’t need to be a prompt engineer to explore.

Experts in various fields are finding frameworks that work best for their needs. Christopher S. Penn suggests the prompt framework PARE (prime, augment, refresh, evaluate). Prompt writing can also be more advanced to maximize efficiency. Prompt engineers are working on creating prompt libraries of common tasks.

But for most people, your job will not switch to prompt engineer. We need discipline experts to test the best uses of AI in their specific roles. Over time, you’ll develop knowledge of how to prompt AI for your profession and which LLMs are better at each task. Penn suggests creating your own prompt library. You’ll gain marketable skills as you explore the jagged frontier of AI for tasks unique to your industry.

LLMs are already introducing AI tools to improve prompts. Anthropic Console takes your goal and generates the Claude prompt for you. Microsoft is adding Copilot AI features that improve prompts as you write, promising to turn anyone into a prompt engineer. And Apple Intelligence is coming, running efficient, task-focused AI agents integrated into Apple apps.

In the article The Rise and Fall of Prompt Engineering, tech writer Nahla Davies says, “Even the best prompt engineers aren’t really ‘engineers.’ But at the end of the day, they’re just that – single tasks that, in most cases, rely on previous expertise in a niche.” The Systematic Survey of Prompting Techniques also finds that prompt engineering must engage with domain experts who know in what ways they want the computer to behave and why.

Thus, we don’t need everyone to be prompt engineers. We need discipline experts who have AI skills. In my next post, I’ll explore the challenges of teaching students to be discipline experts with AI.

This Was Human Created Content!