AI Turned My Academic Journal Article Into An Engaging Podcast For Social Media Pros In Minutes With Google’s NotebookLM.

I recently published academic research in the Quarterly Review of Business Disciplines with Michael Coolsen titled “Engagement on Twitter: Connecting Consumer Social Media Gratifications and Forms of Interactivity to Brand Goals as Model for Social Media Engagement.” Exciting, right?

If you’re a research geek or academic, maybe. A social media manager? No way. Yet I know the findings, specifically our Brand Consumer Goal Model for Social Media Engagement, are very exciting for social media pros! So I wanted to write this blog post.

But, as you can tell by the title, an academic audience and a professional audience are very different. Taking a complicated 25-page academic research article and translating it into a practical, concise professional blog post could take me hours.

I’ve been meaning to experiment with Google’s new AI tool NotebookLM, so I thought I would try it. Thus, this blog post is about our research on a social media engagement framework and how I used AI to streamline the process of creating it. As a bonus, I got a podcast out of it!

My co-author and I did the hard work of the research. I was okay with an AI assistant helping translate it into different media for different audiences. Click for an AI Task Framework.

Using NotebookLM.

Our study was on the types of content that generate engagement on Twitter, but the real value was a proposed model for engagement. So before uploading any of the research into the AI tool, I condensed it to just the theoretical and managerial implications sections. Then I added a title and the journal citation and saved it as a PDF.

NotebookLM uses Gemini 1.5 Pro. Google describes it as a virtual research assistant. Think of it as an AI tool to help you explore and take notes about a source or sources that you upload. Each project you work on is saved in a Notebook that you title. I titled mine “Brand Consumer Goal Model for Social Media Engagement.”

NotebookLM becomes an expert on whatever you upload. It uses your sources to answer your questions or complete your requests. It responds with citations, showing you original quotes from your sources. Google says that your data is not used to train NotebookLM, so sensitive information stays private (I would still double-check before uploading).

Accepted source files include Google Docs, Google Slides, PDFs, text files, web URLs, copied-and-pasted text, public YouTube video URLs, and audio files. Each source can contain up to 500,000 words, or up to 200MB for uploaded files, and each notebook can hold up to 50 sources. Add that up and NotebookLM’s capacity is huge compared to other models; ChatGPT 4o’s context window is roughly 96,000 words.
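To put that in rough perspective (my back-of-the-envelope math based on the limits above, not an official Google figure): 50 sources × 500,000 words each works out to as much as 25 million words of source material a single notebook can draw on.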

When you upload a source to NotebookLM, it instantly creates an overview that summarizes all sources, pulls out key topics, and suggests questions to ask. It also has a set of standard documents you can create such as an FAQ, Study Guide, Table of Contents, Timeline, or Briefing Doc.

You can also ask it to create something else. I asked it to write a blog post about the findings of our research. You will see that below. Yet, the most impressive feature is the Audio Overview. This generates an audio file of two podcast hosts explaining your source or sources in the Notebook.

The NotebookLM dashboard gives you a variety of options to interact with your sources.

Using Audio Overviews.

There are no options for the Audio Overview, so you get what it creates. But what it creates is amazing! My jaw literally dropped when I heard it. And it gives slightly different results each time you run it.

I noticed things missing in the first Audio Overview, such as the journal title, article title, and authors’ names. I figured out how to make adjustments by modifying my source document. Over five rounds of revisions, I was able to get that information in and more.

Sometimes overviews aren’t 100% accurate. Google warns, “NotebookLM may still sometimes give inaccurate responses, so you may want to confirm any facts independently.” In our research article, we give a hypothetical example of a running shoe brand following our model. It was not real. But in one version of the Audio Overview, the podcast hosts talked as if the company had actually done what we described and gotten real results that we measured.

I was impressed that other versions didn’t use our example and instead applied the model to new ones. One time it used an organic tea company, and another time a sustainable clothing brand. On the fifth attempt, it even built in a commercial break for the “podcast.” That last version gave my running shoe example and added its own about a sustainable activewear brand.

What’s really interesting about the last version is that it pulled in general knowledge about social media strategy and applied it to the new information from our study. At the end, the hosts bring up how our engagement model will help marketers know what to say, but that social media managers still need to customize the content to be appropriate for each social platform. That’s a social media best practice, but not something we mention in the article.

The Audio Overview Podcast NotebookLM Created.

 

It’s amazing that these podcast hosts discussed our research and explained it so well for social pros. What’s more amazing is that they are not real people! Yet NotebookLM did more. Below is the blog post it wrote. It included our diagram of the model but had trouble getting it right, so I replaced the image with one I created from our article.

Brand Consumer Goal Model for Social Media Engagement.

This post examines a model for social media engagement based on an October 2024 study in the Quarterly Review of Business Disciplines, “Engagement on Twitter: Connecting Consumer Social Media Gratifications and Forms of Interactivity to Brand Goals as Model for Social Media Engagement,” by Keith Quesenberry and Mike Coolsen.

The Brand Consumer Goal Model for Social Media Engagement is a framework to help social pros create more effective plans by aligning brand goals with consumer goals. It emphasizes understanding the motivations behind consumer engagement and tailoring content accordingly.

How the Model Works

The model outlines three key brand goals:

  • Building brand community (Reach): This goal focuses on expanding the brand’s audience and increasing awareness.
  • Building brand-consumer relationships (Response): This goal aims to foster brand interaction and engagement.
  • Building brand-consumer bonds (Recognition): This goal seeks to create emotional connections with the brand and build loyalty.

Each brand goal is associated with a corresponding consumer goal and form of social media engagement, as seen in the graphic below. Consumers manage impressions and persuade others by sharing certain brand posts. They gain information about a brand by deliberating over other types of brand content. And they bond with others and regulate their emotions by evaluating brand posts with fellow fans.

Brand Consumer Goal Model for Social Media Engagement Template
Click on the graphic to download this model for social media engagement.

Here is an example:

  • A sneaker brand launches a running shoe and aims to build brand community (reach) by creating content that appeals to runners’ desire for impression management and persuasion. This could be sharing inspirational stories about runners breaking records, encouraging retweets and brand visibility.
  • Once awareness is established, the brand could shift its focus to building relationships (response) by providing information about the shoe’s features and benefits, appealing to consumers’ information acquisition goals, and prompting replies and comments seeking further details.
  • Finally, the brand can foster brand bonds (recognition) by sharing content that resonates with runners’ social bonding and emotion regulation needs, such as posts about the challenges and rewards of training, which encourage likes and emotional connections.

Key Insights from the Study

The study found that simply using popular content types like videos or photos is not enough to guarantee success on social media. The message delivered with the content is crucial.

Marketers need to consider:

  • The target audience’s motivations for using social media
  • Buying cycle stage (awareness, consideration, purchase, loyalty)
  • Desired word-of-mouth function (sharing, deliberation, evaluation)

The Brand Consumer Goal Model for Social Media Engagement offers a strategic framework for developing effective social media campaigns. By understanding the motivations behind consumer behavior and aligning content with both brand and consumer goals, marketers can achieve better results and build stronger relationships with their target audience.

I hope you found this look at NotebookLM and the insights from our social media research helpful. In what ways do you think NotebookLM can help in your job? In what ways can the insights from the Brand Consumer Goal Model for Social Media Engagement improve your social media content strategy?

NotebookLM Could be a Great Study Tool for Students.

NotebookLM could be a great tool for student learning if used as a study guide, reinforcement, or tutor. It would have a negative impact if used to simply replace reading and listening in the first place. What’s missed when you use AI in the wrong way is depicted in the graphic below, which is from a previous post on the importance of subject matter expertise when using AI.

Personally, I was fine using this tool in this way. My co-author and I did the hard work of the research. This AI assistant simply helped us translate it into different media for different audiences.

This graphic shows that learning happens in stages: attention, encoding, storage, and retrieval. You need your brain to work through this process, not just hand the process to AI.
Click the image for a downloadable PDF of this graphic.

Half of This Content Was Human Created!

UPDATE: Google has released a new version of NotebookLM where you can customize the Audio Overview before processing. I was very impressed with this feature. For example, I had another academic article published about a new no-tech classroom policy I implemented after COVID restrictions were lifted. I uploaded the academic article and, before processing, customized the Audio Overview, telling NotebookLM that my target audience was college students distracted by technology in the classroom and to keep the overview shorter for their short attention spans. Here is the result:

More Than Prompt Engineers: Careers with AI Require Subject Matter Expertise.

This graphic shows that learning happens in stages: attention, encoding, storage, and retrieval. You need your brain to work through this process, not just hand the process to AI.

This is the fourth post in a series of five on AI. In my last post, I proposed a framework for AI prompt writing. But before you can follow a prompt framework, you need to know what to ask and how to evaluate the response. This is where subject matter expertise and critical thinking skills come in. It’s a key reason we need to keep humans in the loop when working with large language models (LLMs) like ChatGPT (Copilot), Gemini, Claude, and Llama.

Photo by Shopify Partners from Burst

Will we all be prompt engineers?

Prompt engineering is promoted as the hot new high-paying career. Learning AI prompt techniques is important, but it doesn’t replace being a subject matter expert. The key to a good prompt is more than format. As I described in my post on AI prompts, you must know how to describe the situation, perspective, audience, and what data to use. The way a marketer or manager will use AI is different from the way an accountant or engineer will.

You also must know enough to judge AI output, whether it’s information, analysis, writing, or a visual. If a prompt engineer doesn’t have subject knowledge, they won’t know what the AI got right, what it got wrong, and what is too generic. AI is not good at every task, mixing generic and wrong responses in with the right ones. With hallucination rates of 15% to 20% for ChatGPT, former marketing manager Maryna Bilan says AI integration is a significant challenge for professionals that risks a company’s reputation.

AI expert Christopher S. Penn says, “Subject matter expertise and human review still matter a great deal. To the untrained eye, … responses might look fine, but for anyone in the field, they would recognize responses as deeply deficient.” Marc Watkins of the Mississippi AI Institute says AI is best with “trained subject matter experts using a tool to augment their existing skills.” And Marketing AI Institute’s Paul Roetzer says, “AI can’t shortcut becoming an expert at something.”

Prompt engineering skills are not enough.

As a college professor, I know this means my students still need to do the hard work of learning the subject and discipline on their own. But their social feeds are full of AI influencers promising learning shortcuts and easy A’s without listening to a lecture or writing an essay. Yet skipping the reading and having GPT take lecture notes, answer quiz questions, and write your reports is not the way to get knowledge into your memory.

Some argue that ChatGPT is like a calculator. Yes and no. As one author explains, “Calculators automate a . . . mundane task for people who understand the principle of how that task works. With Generative AI I don’t need to understand how it works, or even the subject I’m pretending to have studied, to create an impression of knowledge.”

My major assignments are applied business strategies. I tell students that if they enter my assignment prompt into ChatGPT and it writes the report for them, they’ve written themselves out of a job. Why would a company hire them when it could enter the prompt itself? That doesn’t mean AI has no place. I’ve written about outsourcing specific tasks to AI in a professional field, but you can’t outsource learning the base discipline knowledge.

AI can assist learning or get in the way.

I know how to keep humans in the loop in my discipline, but I can’t teach students if they outsource all their learning to AI. Old-fashioned reading, annotating, summarizing, writing, in-person discussion, and testing remain important. Once students have the base knowledge, we can explore ways to use generative AI to supplement and shortcut tasks, not skip learning altogether. We learn through memory, and scientists have studied how memory works. Used the wrong way, AI can skip all stages of learning.

Click the image for a downloadable PDF of this graphic.

I remember what it was like being a student. It’s very tempting to take the second path in the graphic above – the easiest path to an A and a degree. But that can lead to an over-reliance on technology, no real discipline knowledge, and a lack of critical thinking skills. The tool becomes a crutch for something I never learned how to do on my own. My performance depends on AI’s performance, and I lack the discernment to know how well it performed.

Research skills in searching databases, evaluating information, citing sources, and avoiding plagiarism are needed to discern AI output. The AI search engine Perplexity promises reliable answers with complete sources and citations, but a recent article in WIRED found that it makes things up, and Forbes has accused it of plagiarizing its content.

A pitch from OpenAI selling ChatGPT Edu says, “Undergraduates and MBA students in Professor Ethan Mollick’s courses at Wharton completed their final reflection assignments through discussions with a GPT trained on course materials, reporting that ChatGPT got them to think more deeply about what they’ve learned.” This only works if the students do the reading and reflection assignments themselves first.

Outsourcing an entire assignment to AI doesn’t work.

A skill I teach is situation analysis. It’s a foundation for any marketing strategy or marketing communications (traditional, digital, or social) plan. Effective marketing recommendations aren’t possible without understanding the business context and objective. The result of that situation analysis is writing a relevant marketing objective.

As a test, I asked ChatGPT (via Copilot) to write a marketing objective for Saucony that follows SMART (Specific, Measurable, Achievable, Relevant, Time-bound) guidelines. It recommended boosting online sales by targeting fitness enthusiasts with social media influencers. I asked again, and it suggested increasing online sales of trail running shoes among outdoor enthusiasts 18-35 using social media and email.

Then I asked it to write 20 more and it did. Options varied: focusing on eco-friendly running shoes for Millennials and Gen Z, increasing customer retention with a loyalty program, expanding into Europe, increasing retail locations, developing a new line of women’s running shoes, and increasing Saucony’s share of voice with a PR campaign highlighting the brand’s unique selling propositions (USP). It didn’t tell me what those USPs were.

Which one is the right answer? The human in the loop would know based on their expertise and knowledge of the specific situation. Generated with AI (Copilot) ∙ July 2, 2024 at 3:30 PM

I asked Copilot which is best. It said, “The best objectives would depend on Saucony’s specific business goals, resources, and market conditions. It’s always important to tailor the objectives to the specific context of the business. As an AI, I don’t have personal opinions. I recommend discussing these objectives with your team to determine which one is most suitable for your current needs.” If students outsource all learning to LLMs, how could they have that conversation?

To get a more relevant objective, I could upload proprietary data like market reports and client data and then have AI summarize it. But uploading Mintel reports into LLMs violates their license terms, and many companies restrict this as well. Even if I work for a company that has built an internal AI system on proprietary data, its output can’t be blindly trusted. Ethan Mollick has warned that many companies building talk-to-your-documents RAG systems with AI need to test the final LLM output because it can produce many errors.

I need to be an expert to test LLM output in open and closed systems. Even then, I’m not confident I could come up with truly unique solutions based on human insight if I didn’t engage with the information on my own. Could I answer client questions in an in-person meeting after only a brief review of AI-generated summaries and recommendations?

AI as an assistant to complete assignments can work.

For the situation analysis assignment, I want students to know the business context and form their own opinions. That’s the only way they’ll learn to become subject matter experts. Instead of students outsourcing the entire assignment, AI can act as a tutor. Students often struggle with the concept of a SMART marketing objective. I get a lot of wrong formats no matter how I explain it.

I asked GPT whether statements were marketing objectives that followed SMART guidelines, feeding it both right and wrong examples. It got them all correct and did an excellent job of explaining why each statement did or did not adhere to SMART guidelines. Penn suggests “explain it to me” prompts: tell the LLM it is an expert in a specific topic you don’t understand and ask it to explain it to you in terms of something you do understand. This is using AI to help you become an expert versus outsourcing your expertise to AI.
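For example (my own hypothetical wording, not a prompt Penn provides), a student might try: “You are an expert marketing professor. I understand how personal fitness goals work but not SMART marketing objectives. Explain a SMART marketing objective to me in terms of setting a fitness goal.” The familiar topic becomes a bridge to the unfamiliar one.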

ChatGPT can talk but can it network?

Last spring I attended a professional business event. We have a new American Marketing Association chapter in our area, and they had a mixer. It was a great networking opportunity. Several students from our marketing club were there mingling with the professionals. Afterward, a couple of the professionals told me how impressed they were with our students.

These were seniors and juniors. They had a lot of learning under their belts before ChatGPT came along. I worry about the younger students. If they see AI as a way to outsource the hard work of learning, how would they do? Could they talk extemporaneously at a networking event, interview, or meeting?

Will students learn with the new AI tools that summarize readings, transcribe lectures, answer quiz questions, and write assignments? Or will they learn to be subject matter experts who have used AI Task Frameworks and AI Prompt Frameworks to discern the beneficial uses of AI, making them an asset to hire? In my next post, the final in this five-part AI series, I share a story that inspired this AI research and explore how AI can distract from opportunities for learning and human connection.

This Was Human Created Content!