Joy Interrupted: AI Can Distract From Opportunities For Learning And Human Connection.

An image of a poster promoting the Ross Gay event on a college campus.

This is the fifth post in a series of five on AI. My last post discussed why we need not just prompt engineers but also subject matter experts. This post discusses the danger of losing that learning when AI is not used in the best ways.

Last spring, I went to a campus reading by New York Times best-selling author and poet Ross Gay. It was well attended by faculty, administrators, employees, community members, and students. In front of me sat three students, two on laptops and one on a phone. My first thought was that a professor had required attendance and they were taking notes for an assignment.

The author’s reading and book signing was a great opportunity on campus.


Ross began drawing me in with engaging stories of happiness, sorrow, and the simple delights found in life if we pay attention. He’s a master observer of joy in everyday moments. His message and delivery were powerful, yet my attention was soon distracted by the busy screens in front of me.

Glancing down, it was obvious the students weren’t taking notes. They didn’t look up at the author at all. The student in the middle was watching a video on the phone. The two on laptops were jumping back and forth between websites, documents, emails, and social media.

Despite Ross’s dramatic reading, I had trouble focusing with three screens flitting around in front of me. I imagined what it’s like to be a student in the back of a lecture hall, or even a small classroom, with dozens of students’ multitasking screens in front of them.

AI promises to free us from busy work.

Between the author’s readings, I glanced down again, hoping to find evidence of something related to the event and this author. Instead, I noticed the ChatGPT screen. Maybe the student was using it to supplement an assignment or get help with a difficult task. Perhaps I’d see how a professor integrated AI into a class.

Instead, I saw quiz questions from the university learning management system. Each question and answer was quickly copied back and forth between ChatGPT and a course quiz. Twenty questions were answered in less than a minute. I saw no effort to answer questions first or even read them. Did the student not know this was wrong? Or were they so engrossed in the screen that they forgot their surroundings? Perhaps the student views quizzes as busy work, not a learning tool to ensure reading and internalizing information.

AI promises to free us from drudgery to explore human creativity and imagination. The article “How AI can save you time: 5 skills you no longer need to learn” tells us we can now skip learning skills like writing because AI will write reports and news articles for us. I wonder what creativity the journalist will explore when Euro News outsources articles to AI. I don’t want to be freed from writing. My creativity and imagination are explored through writing, and evidence tells us writing is how we learn.

If the student in front of me was using AI to save time to explore human creativity, they missed one of the best opportunities that semester. While they focused on their screen using AI, a poet expressed the joy of being human, moving some in the room to tears.

Sometimes there is no shortcut to learning.

Much of AI is being marketed to us and students as a shortcut – the easy way to complete a task, assignment, paper, or degree. In “AI’s Promise To Pay Attention For You,” Marc Watkins of the Mississippi AI Institute says, “Many third-party app developers are building off of OpenAI’s API to create apps that promise an end-to-end user experience where a machine listens for you, compiles information, and then creates bespoke summaries all so you don’t feel burdened by listening or thinking about the content.”

TikTok is full of student videos promoting these apps as the easy way to an easy grade. I’m all for removing friction to make banking, car buying, and hotel booking easy. But is easy the best way to learn? What if friction and struggle are how we learn? In an op-ed, Jane Rosenzweig of the Harvard College Writing Center says, “Our students are not products to be moved down a frictionless assembly line, and the hard work of reading, writing, and thinking is not a problem to be solved.”

If not used properly, AI can get in the way of learning. This summer I received an email marketing assignment in which a student “wrote” bland, generic email copy. Then a paragraph explained how the email “fosters a deep emotional connection with the audience” and “reflects a deep understanding of the target audience’s needs.” But it didn’t! It sounded like the correct but unfeeling, generic copy LLMs tend to generate.

The LLM knew what good email copy should do, but couldn’t write it. My student needed to be the human in the loop. I can teach how to write copy that forms an emotional connection with the audience based on human insight, but not if a student uses AI to write the entire assignment. Why would an employer hire them if AI could complete the entire project on its own?

Is liberal arts education the answer to AI job losses?

If AI takes away skills, many say the way to remain relevant is through liberal arts. Business Insider says AI startup founders hire liberal arts grads to get an edge. In Bloomberg, a Goldman Tech Guru says AI is spurring a “Revenge of the Liberal Arts.” WIRED proclaims, “To Own the Future, Read Shakespeare.”

IBM’s AI chief advises students who want a tech job to learn the language and creative thinking skills you get in the liberal arts. Because AI speaks our language, not computer code, we need prompt engineers “to train it up in human behavior and thinking.”

Marketing AI Institute’s Paul Roetzer believes the next generation entering the workforce will remain relevant with a broad-based liberal arts education. Literature, philosophy, history, and art are what make us human and teach critical thinking, analysis, creativity, communication, collaboration, integrity, understanding, and nuance – what AI can’t do.

But what if AI can also get in the way of learning liberal arts? Using AI to skip the reading and skip the writing skips the learning to save time for what? Doing the reading, processing the information, committing it to memory, and explaining it through writing is how you learn critical thinking and creativity. To have an imagination you need knowledge.

Is AI the answer to our loneliness epidemic?

In 2023, the Surgeon General released an advisory warning of a crisis of loneliness, isolation, and lack of connection. Nearly half of adults in the U.S. experience loneliness, which can increase the risk of premature death at a level comparable to smoking. Rates of anxiety and depression on college campuses have never been higher. More than 60% of college students report at least one mental health problem.

Some promise AI can solve this problem with artificial intelligence friends. I sat in a room full of people with an author talking about human connection while the students in front of me focused on their screens. Are AI friends really the best solution? What I often see in the classroom is students not talking to each other because they are focused on their screens.

After Ross finished there was a Q&A. A mental health professional behind me wanted to thank him. She gives her patients struggling with anxiety and depression Ross’s books as homework. Many report back that the books make the difference in being able to get out of bed some days.

I did not miss the irony. Many students struggle with anxiety and depression, and many feel screen time is part of it. Yet here I was, seated between a mental health professional and students, with one of her solutions in the room – a real human. I’m sad for the students who missed out on the joy of the evening, just past the screens holding their attention. Reading Jonathan Haidt’s The Anxious Generation has grown my understanding of and empathy for Gen Z.

The exception rather than the rule.

It’s important to note this was a couple of students. I’m grateful for the larger group of enthusiastically engaged students at the event. In my experience, most students are not looking for the easy way out and want to learn their disciplines by integrating AI in beneficial ways. But they need our guidance.

Of the more than 200 million writing assignments reviewed by Turnitin’s AI detection tool last year, some AI use was detected in just 1 out of 10 assignments. Only 3 out of every 100 were generated mostly by AI.

Education experts warn that focusing too much on AI cheating can cause distrust between instructors and students. We should frame the conversation around ways AI can both support and detract from learning. Our role is AI literacy, providing specific guidance on when and when not to use AI.

I hope to educate students on the role of AI in their lives: how to make intentional choices about what to outsource to AI, what to keep for themselves, and how to prepare for careers with AI that keep humans in the loop.

For a look at my next blog article on AI see “AI Turned My Academic Journal Article Into An Engaging Podcast For Social Media Pros In Minutes.”

This Was Human Created Content!

AI Prompt Framework: Improve Results With This Framework And Your Expertise [Template].

AI Prompt Framework Template with 1. Task/Goal 2. AI Persona 3. AI Audience 4. AI Task 5. AI Data 6. Evaluate Results.

This is the third post in a series of five on AI. In my last post, I gave examples of tasks I’d outsource to AI. How do you outsource them? Through prompt writing – a skill some call prompt engineering. Because large language models (LLMs) like ChatGPT, Claude, and Gemini are based on conversational prompting, it’s easy for anyone to use them. You don’t need to learn a coding language like Python or HTML or a software interface like Excel or Photoshop. You just tell it what you want.

Generative AI can produce remarkable results.

In an experiment, researchers found consultants at Boston Consulting Group produced 40% higher-quality work using GPT-4 (via Microsoft Bing) without specialized prompt training and without training the AI on any proprietary data. What mattered was the consultants’ expertise: knowing what to ask and how to evaluate the results.

AI expert Ethan Mollick likens working with large frontier LLMs to working with a smart intern. Sometimes they’re brilliant. Sometimes they don’t know what they don’t know. AI will even make things up to give you an answer. Mollick and other researchers call this the jagged frontier of AI. In some tasks, AI output is as good as or better than humans’. In others, it can be worse or wrong.

Their research with Boston Consulting Group found AI can be good at some easy or difficult tasks while being worse at other easy or difficult tasks. Difficulty alone isn’t a predictor. One professor’s research found ChatGPT got difficult multiple-choice questions right but got easy questions wrong. Testing and learning based on expert knowledge is the way to know. How do you explore this jagged AI frontier while improving results? With a prompt framework like the one I created below.

AI Prompt Framework Template. Click the image to download a PDF of this AI Prompt Framework Template.

First, have a clear understanding of what you want.

Begin with the task and goal. Are you summarizing to learn about a topic for a meeting, generating text or an image for content, looking for suggestions to improve your writing, performing a calculation to save time, or creating something to be published? Defining the task and objective sets the stage for a successful prompt and output.

Second, give AI a perspective or identity as a persona.

LLMs are trained on vast amounts of broad data, which makes them so powerful. This can also produce output that’s too generic or simply not what you want. It helps to give AI a perspective or identity like a persona. Personas are used in marketing to describe a target audience. Persona is also the character an author assumes in a written work.

Third, describe to AI the audience for your output.

Are you writing an email to your boss, creating copy for a social media post, preparing for a talk, or is the output just for you? You know how to adjust what you create based on what’s appropriate for an audience. AI can do a remarkable job at this if you give it the right direction.

Fourth, describe the specific task you want it to complete.

Err on the side of more detail than less. Consider things you know in your mind that you would use in completing the task. It’s like giving the smart intern directions. They’re smart but don’t have the experience and knowledge you do. More complicated tasks can require multiple steps. That’s fine, just tell AI what to do first, second, third, etc.

Fifth, add any additional data it may need.

Some tasks require data, such as a spreadsheet of numbers you want to analyze, a document you want summarized, or a specific stat, fact, or measurement. But before uploading proprietary data into an LLM, see my post considering legal and ethical AI use. Recent research, the “Systematic Survey of Prompting Techniques,” also suggests adding positive and negative examples – “like this, not like that.”

Sixth, evaluate output based on expectations and expertise.

Sometimes you get back what you want, and other times you don’t. Then you need to clarify, ask again, or provide more details and data. Go back to earlier steps, tweaking the prompt. Other times you get back something wrong or made up. If clarifying doesn’t work, you may have discovered a task AI is not good at. And sometimes you just wanted a rough start that you’ll modify, considering copyright for legal and ethical AI use.
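As a rough sketch, the first five steps of the framework can be assembled into a prompt programmatically. The helper below is illustrative only – the function and field names are my own, not part of the downloadable template.

```python
def build_prompt(goal, persona, audience, task, data=None, examples=None):
    """Assemble a prompt from the framework's components, in order."""
    parts = [
        f"Goal: {goal}",                 # 1. Task/Goal
        f"You are {persona}.",           # 2. AI Persona
        f"The audience is {audience}.",  # 3. AI Audience
        task,                            # 4. AI Task
    ]
    if data:                             # 5. AI Data (stats, facts, documents)
        parts.append(f"Use this data: {data}")
    if examples:                         # positive/negative examples
        parts.append(
            f"Your output should be like {examples[0]}, not like {examples[1]}."
        )
    return "\n".join(parts)
    # Step 6 (Evaluate Results) happens after you read the model's output.

prompt = build_prompt(
    goal="generate influencer recommendations for a campaign",
    persona="a social media manager for Saucony running shoes",
    audience="34-55-year-old males who run marathons",
    task="Recommend influencers to appeal to and engage this target audience.",
    examples=("Dorothy Beal or Dean Karnazes", "Cristiano Ronaldo or Usain Bolt"),
)
print(prompt)
```

Paste the resulting text into whichever LLM you use; the point is that writing the components down separately forces you to think through each one before prompting.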

A prompt experiment with and without the framework.

I’ve been testing the framework, and it has improved results. In one test I used GPT-4 via Copilot to see if it could recommend influencers for a specific brand – Saucony running shoes. First, I didn’t use the framework and asked a simple question.

  • “Recommend influencers for 34-55-year-old males who like to run marathons.”

It recommended Cristiano Ronaldo, Leo Messi, and Stanley Tucci. Hopefully, you understand why these are not a good fit. I ran the same prompt again, and it recommended Usain Bolt. Bolt is a runner, but he’s known for track sprinting, not marathons.

Generated with AI (Copilot) ∙ June 28, 2024 at 4:30 PM

I tried to be more direct, changing the prompt to “34-55-year-old males who run marathons.” For some reason, dropping the “like” started giving me older bodybuilders. I wouldn’t describe marathon runners as “shredded” the way one influencer described himself.

I tried again with “34-55-year-old males known for their involvement in marathons.” This gave me a random list of people including Alex Moe (@themacrobarista), a Starbucks barista. As far as I can tell, Moe doesn’t run marathons, and his Instagram feed is full of swirling creamer pours.

Finally, I tried the prompt framework.

  • “You are a social media manager for Saucony running shoes. (Persona) Your target audience is 34-55-year-old males who run marathons. (Audience) Which influencers would you recommend for Saucony to appeal to and engage this target audience? (Task)”

This prompt gave me better results, including Dorothy Beal (@mileposts), who has run 46 marathons and created the I RUN THIS BODY movement. Her Instagram feed is full of images of running. Copilot still recommended Usain Bolt with the framework, but the other four recommendations were much better than a soccer star, bodybuilder, or barista.

Generated with AI (Copilot) ∙ June 28, 2024 at 4:35 PM

I tried to add data to the prompt with “Limit your suggestions to macro-influencers who have between 100,000 to 1 million followers.” (Data) The response didn’t give suggestions, saying, “as an AI, I don’t have access to social media platforms or databases that would allow me to provide a list of specific influencers who meet your criteria.” That’s okay, because the more precise prompt gave me more relevant macro-influencers anyway.

Alternatively, I added positive and negative examples, trying again with “Don’t provide influencers like Cristiano Ronaldo or Usain Bolt, but more like Dorothy Beal or Dean Karnazes.” (Data) This time I received a list of 8 influencers, all of whom would have potential for this brand and audience. This can also be framed as adding success measures: “Your recommendations should include influencers like ___, but not like ___.”

Generated with AI (Copilot) ∙ July 27, 2024 at 11:35 PM

You don’t need to be a prompt engineer to explore.

Experts in various fields are finding frameworks that work best for their needs. Christopher S. Penn suggests the PARE prompt framework (prime, augment, refresh, evaluate). Prompt writing can also be more advanced to maximize efficiency. Prompt engineers are working on creating prompt libraries of common tasks.

But for most people, your job will not switch to prompt engineer. We need discipline experts to test the best uses of AI in their specific roles. Over time you’ll develop knowledge of how to prompt AI for your profession and which LLMs are better at each task. Penn suggests creating your own prompt library. You’ll gain marketable skills as you explore the jagged frontier of AI for tasks unique to your industry.
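A personal prompt library can start as simply as reusable templates keyed by task name. The sketch below is illustrative – the task names and template wording are my own examples, not Penn’s method.

```python
# A minimal personal prompt library: reusable templates with blanks
# you fill in per task. Task names and wording here are illustrative.
PROMPT_LIBRARY = {
    "influencer_recs": (
        "You are a social media manager for {brand}. "
        "Your target audience is {audience}. "
        "Which influencers would you recommend for {brand} "
        "to appeal to and engage this target audience?"
    ),
    "email_copy": (
        "You are an email marketer for {brand}. "
        "Write a short promotional email for {audience} about {offer}."
    ),
}

def render_prompt(task_name, **fields):
    """Look up a stored template and fill in the task-specific details."""
    return PROMPT_LIBRARY[task_name].format(**fields)

prompt = render_prompt(
    "influencer_recs",
    brand="Saucony running shoes",
    audience="34-55-year-old males who run marathons",
)
print(prompt)
```

Because the templates capture what you learned from earlier testing, each reuse starts from your best-known prompt instead of a blank page.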

AI companies are already introducing tools to improve prompts. Anthropic Console takes your goal and generates the Claude prompt for you. Microsoft is adding Copilot AI features that improve prompts as you write, promising to turn anyone into a prompt engineer. And Apple Intelligence is coming, running efficient, task-focused AI agents integrated into Apple apps.

In the article “The Rise and Fall of Prompt Engineering,” tech writer Nahla Davies says, “Even the best prompt engineers aren’t really ‘engineers.’ But at the end of the day, they’re just that – single tasks that, in most cases, rely on previous expertise in a niche.” The “Systematic Survey of Prompting Techniques” also finds prompt engineering must engage with domain experts who know how they want the computer to behave and why.

Thus, we don’t need everyone to be prompt engineers. We need discipline experts who have AI skills. In my next post, I’ll explore the challenges of teaching students to be discipline experts with AI.

This Was Human Created Content!