I’ve been forcing myself to regularly read physical books again.
Not articles. Not threads. Not AI summaries. Actual books. Cover to cover. It’s my way of fighting back against the algorithmic feeds rewiring my attention span.
If AI can consume a library of data in seconds, maybe my competitive advantage is going slower and deeper.
Two books that have been sitting on my shelf are S.I. Hayakawa’s Language in Thought and Action and Angus Fletcher’s Primal Intelligence. The first was written in 1939 and the second in 2025. As I read them over several weeks, something clicked.
My brain, the neural synapses Fletcher writes about, made a connection no algorithm would have surfaced: Hayakawa’s framework for sane thinking during WWII and Fletcher’s research on how human brains “imagine” new paths and plans for the future.

Our Narrative Brain
This is what your Narrative Brain does. It makes imaginative leaps across disparate ideas. It asks, “What if these two things connect?” A semantics book and a neuroscience book written 86 years apart. No dataset, predictive analytics, or AI could have made this creative leap.
It’s a unique capability we risk losing if we don’t understand how to partner with AI correctly.
Many conversations about AI in business and marketing position it as an all-or-nothing proposition: either AI will (and should) replace employees, or, because of that threat, we should avoid using AI at all.
In AI lessons from 2025, I shared how I explored AI partnership versus replacement last year. But I still didn’t understand the core biological barriers and benefits.
Hayakawa and Fletcher gave me the answer. Fletcher explained the fundamental difference between how AI processes information and how our brain works. Hayakawa helped me understand the challenges in AI adoption. Both are key to staying sane (and essential) as a knowledge worker in the AI revolution.
Light Switch vs. Dimmer
Hayakawa described two ways of looking at the world. A Two-Value Orientation is like a light switch. It’s binary: people are all evil or all good. Knowledge work should be all human or all AI. When we approach business, marketing or communications this way, we ask “Should we use AI?” and expect a simple Yes or No.
A Multi-Value Orientation, however, is like a dimmer switch. It recognizes that reality exists on a scale. Instead of automatically labeling people as evil or good, we consider nuance. Instead of asking “If” we should use AI, we ask, “To what degree and in what context is AI appropriate for each task?”
Key Insight: Two-value thinking creates conflict. Multi-value thinking creates a roadmap for collaboration.

Your Biological Advantage
In his book Primal Intelligence, Angus Fletcher points out a biological truth that changes how we may view AI.
AI runs on transistors that perform Correlation. Its logic is A = B. It looks at massive datasets of the past to see what usually happens. Given A, there’s a 95% chance that B comes next.
If you ask AI for a business or marketing idea, it calculates the statistical probability of which words usually go together. It is, effectively, a high-speed rear-view mirror. It can tell you where the market has been.
Your brain, however, runs on neural synapses that perform Conjecture. Your logic is A → B. You don’t just see that two things are typically related. You can imagine a potential causal link. You can look at a set of facts and ask, “What if we did the opposite?” or “Why can’t these go together?”
You can see ways forward from missing, incomplete, or unexpected information, whereas AI is prone to hallucinations when faced with a lack of data.
For example, AI looks at the data and says: “90% of successful luxury brands use minimalist black-and-white logos.” That’s correlation. But a human looks at a crowded, monochrome market and asks: “What if we used neon yellow to signal a different kind of rebellion?” AI follows the trend to be safe. You break the trend to be noticed.
When correlation said people wanted better keyboards on their phones, Steve Jobs used conjecture to imagine a different story: a single piece of glass that could hold the internet. That strategy drove Apple to fill in the gaps to make that “improbable” narrative happen. AI could not have “imagined” that possibility based on previous data. It would make a better keyboard.
AI is a map of the past (Correlation). You are the driver of the future (Conjecture).
The Abstraction Ladder
Hayakawa also taught us about the Ladder of Abstraction. In business and marketing, the top rungs hold vague labels like “Customer Satisfaction.” At the bottom is the “Territory”: the actual, concrete facts and interactions with real people.
AI is great at the top of the ladder. It can summarize “General Trends” all day. But because it lacks a physical body and lived experience (what Fletcher calls “Embodied Intelligence”), it can’t feel the “Territory.”
Example: AI can tell you “Gen Z engagement is down 15%.” That’s the top of the ladder, an abstraction. You climb down by observing and talking to actual Gen Z customers and discovering they’re not disengaging. They’re just moving to a platform your data doesn’t track yet. That’s Territory AI can’t access without embodied experience.
A multi-value approach uses AI to handle the high-level abstractions, which frees up your human brain to climb down the ladder to the real lived experience. We use our Narrative Brain to find the specific, human story, the A → B sequence, that makes a brand feel real.
In a world where AI levels the data playing field, competitive advantage comes from the humans companies employ. Your edge won’t be guaranteed by data. You’ll need people who can look at a spreadsheet and see the human story waiting to be written.
Instead of reacting to the past, you’ll begin imagining new futures and designing marketing actions to make them happen.
5 Levels of AI Integration
To help us navigate this, I created a 5-level scale of AI Integration based on multi-value orientation and our biological advantage. Not every task deserves Level 5 automation. As a professional you’ll know when to turn the dimmer switch up or down based on the human value required.
5 levels of AI integration. Taking a multi-value orientation that leverages our brain’s primal intelligence advantage. Click image to download a PDF.
Now It’s Your Turn
If you’ve been avoiding AI, start at Level 1. This week, ask it to proofread an email you’ve already written. That’s it. You’re still the author. You’re still making all the decisions. Notice how it feels, what it catches and misses.
Then try Level 2. Or, if you’re already doing that, go higher. Try deep research, brainstorming, outlining, drafting, feedback, or variations with a reasoning model. Don’t know how? Ask AI.
The goal isn’t to become a better prompt engineer. It’s to become a better thinker.
Become someone who knows when to leverage speed and when to trust your human ability to imagine what doesn’t exist yet. Use AI to speed up low-value tasks and free up more time for your unique human contribution.
Remember those two books on my shelf? No AI would have recommended I read them together. No algorithm would have surfaced their connection. But my Narrative Brain, the same one you use every day in your work, made an imaginative leap that created this framework.
That’s what makes you irreplaceable: the ability to make connections that don’t exist in any dataset.
AI can tell you the most likely next word, but only you can imagine the most meaningful next chapter.
Moving from a two-value “Either/Or” mindset to a multi-value “Degrees-of” mindset enables you to start imagining, and creating, a better future with your narrative brain.
About This Post’s Creation
This post was developed in partnership with Google Gemini 3.0 and Claude Sonnet 4.5. Gemini and Claude helped organize the structure and refine the language. The connection of General Semantics and Narrative Science is my original contribution. One that came from the kind of deep, sustained reading and cross-pollination of ideas that only a human narrative brain can produce.

