The Token Trap: Why “Tokenmaxing” in AI is the New Klout Score

Around 2012, a professor at Florida State University made national headlines for including Klout scores as part of his students’ social media marketing course grades (just 10% of the total). It wasn’t an unreasonable experiment. The logic was straightforward: if the market cares about this number, students should understand how it works. And the market really did care. Klout scores shaped influencer partnerships, opened doors to brand deals, and helped establish who counted as a thought leader worth listening to. The number had real weight.

I remember first encountering Klout at an IMC conference where Mark Schaefer was keynoting. He had recently published Return on Influence, and immediately after that talk I checked my own score and started thinking more intentionally about my online presence. That led to this blog, to job opportunities, to contributing articles to respected publications, and eventually to my two books.

The number was motivating, but I never got obsessed with it to the point of doing things just to watch it climb.

What’s interesting in retrospect isn’t that the professor was wrong to take it seriously. It’s how quickly the controversy revealed the problem with legitimizing a metric that could be gamed. When a score carries institutional weight, a market grows up around engineering it. Purchased followers, inflated engagement, synthetic reach. The score becomes more important than what it was supposed to represent. Eventually it stopped representing much at all. Klout shut down in 2018.

I find myself thinking about that moment again as I watch a new metric start to carry similar weight in AI-driven marketing circles. My prediction: it won’t be long before we hear about a professor basing an AI course grade on token usage. Or a manager rewarding the employee who ran the most prompts last quarter. We may already be there.

“When a measure becomes a target, it ceases to be a good measure.” — Marilyn Strathern (popularizing Goodhart’s Law)

Amazon recently mandated that 80% of its engineers use its internal AI coding tool weekly, tracking adoption as a formal corporate goal. Within roughly 90 days, Amazon suffered at least four serious production incidents, including a six-hour outage with an estimated 6.3 million lost orders. Amazon attributed the failures to user error, not AI. That may be true, but when usage is both the mandate and the metric, the conditions for that kind of error become much easier to create. The 80% target became more important than the safeguards surrounding it.

We may be entering the era of what’s being called Tokenmaxing. It looks like Klout in a more expensive suit. You’ve probably already seen a version of it on LinkedIn. People sharing screenshots of a dozen or two dozen AI agent tabs open on their monitor as a signal of how AI-forward they are. It’s the digital equivalent of watching that little orange Klout score climb.

MiDE Studio Brainstorming With Post-it Notes
Should we feel guilty for thinking with our hands and allowing time for inspiration and the slow hunch? This was from a brainstorming session on Post-it notes that my students had in the Markets, Innovation & Design studio.

What is Tokenmaxing?

In AI, a token is the basic unit of text a model reads and writes, roughly 0.75 words of English prose, and it is the unit providers meter and bill by. For developers, token usage is a genuinely meaningful metric. It affects cost, context window management, and efficiency. Paying attention to it makes sense in that context.
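To make that concrete, here is a minimal sketch of counting tokens with OpenAI's open-source tiktoken tokenizer. The dollar rate in the comment is an illustrative placeholder I made up, not any provider's actual price.

```python
# pip install tiktoken
import tiktoken

# Load the tokenizer used by the GPT-4 model family.
enc = tiktoken.encoding_for_model("gpt-4")

text = "Tokenmaxing is what happens when the volume of AI interactions becomes the goal."
tokens = enc.encode(text)

# English prose averages roughly 0.75 words per token.
print(f"{len(text.split())} words -> {len(tokens)} tokens")

# Hypothetical rate for illustration only; real prices vary by model and provider.
PLACEHOLDER_RATE_PER_1K_TOKENS = 0.005  # dollars per 1,000 input tokens
print(f"Estimated cost: ${len(tokens) / 1000 * PLACEHOLDER_RATE_PER_1K_TOKENS:.6f}")
```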

The problem arises when that same logic migrates into marketing strategy as a proxy for effort or value. Tokenmaxing is what happens when the volume of AI interactions becomes the goal in itself. People burning through a monthly subscription budget, climbing a usage leaderboard, generating thousands of prompt variations to prove they’re being “AI-forward.”

If you aren’t maxing out your tokens, the thinking goes, you aren’t really using the tool.

It’s the Use It or Lose It fallacy updated for the AI era. And like Klout, the number starts to feel meaningful. Even if it might not be measuring anything that actually matters for marketers.

What Gets Lost Upstream

The concern here isn’t just budget waste. It’s what happens to the thinking process when output volume becomes the metric. Real creative work has an upstream phase. The slow, often uncomfortable part where you sit with a problem before you know what to do about it. Where you ask why before you ask what’s next.

That phase doesn’t generate tokens. It doesn’t show up in a usage dashboard. But it’s often where the most useful thinking happens, and it’s exactly the phase that gets compressed when we start measuring activity instead of impact.

Not all important thinking happens in the ones and zeros of the digital world.

Mark Schaefer has written about AI turning marketing into “a pandemic of dull.” Everyone converging toward the same outputs, faster. Tokenmaxing feeds that pattern. When we optimize for volume, we tend to get incrementally better at doing the same thing as everyone else, until the work becomes difficult to distinguish from anyone else’s.

Reclaiming the Tiny Experiment

I’ve been spending time with Anne-Laure Le Cunff’s book Tiny Experiments. It’s been a useful counterweight to this kind of thinking. In a design-thinking context, a tiny experiment isn’t a step toward a usage milestone. It’s more like a probe into uncertain territory. A way to follow a question you can’t fully answer yet, and learn something from the attempt.

The difference in practice is meaningful. A tokenmaxing mindset measures success by what it produces: content, variations, volume. A design-thinking mindset measures success by what it discovers. Sitting in a coffee shop and overhearing how someone actually describes a problem you thought you understood, or visiting a store and watching a customer navigate a decision in real time, then designing a small test around what you observed: that’s a tiny experiment in the spirit Le Cunff intends.

The output isn’t content. It’s a new understanding of a real human being that no prompt, and no amount of online data, could have surfaced on its own.

When we set goals around usage, we quietly change what we’re optimizing for, and the experiment stops being an experiment. A tool built for exploration becomes a treadmill for output.

Protecting the Thinking That Happens Before the Prompt

The answer to this isn’t less AI. It’s being more intentional about where the human thinking lives in the process. Klout scores didn’t make anyone a better marketer, but they did shape how influence was perceived and rewarded, until the market for gaming them undermined the whole thing.

Token counts carry a similar risk. They can start to feel like a proxy for strategic thinking without doing any of the work that strategic thinking actually requires.

The most valuable part of the design process is the part that doesn’t cost a cent in API fees. It’s the conjecture, the what-if, the question you sit with before you ever write a prompt. Le Cunff’s framing is most useful when it’s pointed at expanding your thinking rather than refining the path you’re already on. Experiment to learn, not just to optimize. We didn’t fully absorb the lesson Klout offered. It’s worth keeping that in mind as token usage starts to feel like a measure of something meaningful.

I’ll admit I feel it too. There’s a low-grade guilt that creeps in when I’m not consuming the latest AI articles, listening to another podcast about where the technology is heading, or running a deep research report to stay current. AI is moving quickly, and falling behind feels like a real risk. Then I open LinkedIn and see someone’s screenshot of a dozen AI agent tabs and the pressure compounds. Am I the only one who feels more anxious than inspired by those posts?

It’s just one more reason to feel like offline, in-person time is something I need to justify rather than protect.

But if I’m sitting in that coffee shop with my laptop open, catching up on AI news, or counting other people’s agent tabs, I’m probably missing the thing that actually makes the work better. The overheard conversation, the moment of watching someone struggle with a decision, the small human observation that no model thought to collect because nobody knew it mattered yet.

That upstream time isn’t wasted time. It might be the most productive thing a marketer can do right now. It’s just the hardest to justify in a dashboard.

About This Post’s Creation

This post was developed in partnership with Claude. The initial ideas came from my own observation, articles, and podcasts, plus reading Le Cunff’s Tiny Experiments. Claude helped organize, research further, and refine.

How Innovation Disappears Without Anyone Noticing: AI quietly becomes the judge of what’s worth trying

Most of what I hear about AI in business and education falls into two camps. One wants to go all in. Adopt everything, automate everything, let the model drive. The other wants to ban it. Block it, police it, treat it like a threat to learning, work, and trust.

I understand both instincts. But both are incomplete. The biggest risk isn’t solved by pretending AI doesn’t exist, and it isn’t solved by letting AI become the default operating system for creativity. The real danger is quieter and doesn’t require advanced AI use to feel it.

We’ve been here before. Social media promised connection and creativity. For a while, it delivered. What it quietly took in return (depth of attention, tolerance for uncertainty, ability to sit with an idea before seeking validation) didn’t disappear in a single moment.

It eroded in a million rational decisions until dependency on external validation became walls we stopped seeing.

We watched it happen to our teenagers first (likes, follower counts, compulsive checking, anxiety when it didn’t come). A generation learned to measure the worth of a thought, a face, a body, a life by how many strangers approved of it. Not just one generation. How much of my thinking is already about whether people will “like” this when I post it on LinkedIn?

Some of what was lost with social media (certain habits of mind, expectations of privacy, unmediated experience) feels like it’s not recoverable. What about confidence, patience, trust in instinct, feeling comfortable in your own skin? We didn’t notice them leaving. A pattern worth considering as AI moves into the center of how we work, create, and decide.

The real danger isn’t takeover. It’s permission.

Innovation doesn’t die because AI becomes “too smart.” It dies when AI becomes the permission structure.

When AI is the first draft and the final judge, teams begin to internalize a new rule. “If the model didn’t suggest it, it’s probably not worth trying.” That sounds dramatic, but in practice it shows up as premature convergence. Ideas get smoothed down into something plausible, defensible, and safe. AI makes the walls feel like the edge of the possible.

Creative leaps disappear, not because people lack imagination, but because they stop trusting imagination that can’t be justified by the machine.

There’s a second, less visible danger that compounds this one. It’s about what happens to the ideas that do get conceived, once they enter a world where AI drives gatekeeping. Not just on publishing platforms and in funding algorithms, but inside organizations.

It’s a manager who runs a proposal through an AI tool before greenlighting it. A client who asks for validation before approving an original direction. A colleague who reports back that an idea “scored low on feasibility.” A hiring process that screens for a pattern of past success rather than the shape of future potential.

At every layer, the filter is the same: does this match what has worked before? Radical ideas may still be born. The problem is they have a harder time surviving the rooms they have to pass through, thanks to the invisible walls created by AI. Thus, the threat to innovation is psychological and infrastructural.

The third way: humans in the loop, with a method

This is why I’m convinced the answer isn’t all-out AI or zero AI. The answer is AI + human judgment, guided by a method that protects what AI can’t replace.

Design thinking can be that method. “Design” is most associated with how things look. But “design thinking” is a human-centered problem-solving process. It’s a disciplined way of framing questions, testing assumptions, and learning from reality that applies to anything: a product, a strategy, a curriculum, a client relationship, a hiring decision. This includes problems where no data exists. Where the market hasn’t formed, the behavior hasn’t been measured, the future hasn’t happened.

AI requires data to recommend. Humans can imagine a world that doesn’t exist and work backwards to it.

That capacity, to conceive of something genuinely new and then build toward it, is what the permission structure quietly erodes. Here is why design thinking works. It forces a loop that AI can’t complete on its own:

Frame → Prototype → Test → Learn

A prototype can be a napkin sketch – just enough to get a reaction to a concept or idea.

Crucially, this process keeps validation anchored in human reality rather than model confidence. It doesn’t just keep humans in the loop. It gives humans a reason to trust their own judgment over the machine’s. It helps you see the walls, which is a prerequisite to getting past them.

Where AI flattens innovation

AI systems trained on existing data are fundamentally prediction engines, built to find the most plausible path between what already exists. That makes them ill-suited for the leap to the adjacent possible: the space beyond the walls of the known, close enough to reach but too original for any model to have predicted.

Anyone who’s studied business recognizes the pattern. Organizations built on the early innovations of visionary founders stagnate into small improvements and eked-out efficiencies, until they’re upset by a startup that sees past the corporate walls.

AI, deployed the wrong way, can accelerate that stagnation. LLM output gets substituted for human contact: personas of people never met, best practices in place of lived tensions, staid concepts that fit in instead of stand out, strategies that sound smart yet feel interchangeable.

AI itself is novel and appears to be fast and cheap. But by definition it converges toward the middle, toward the expected, toward what has already been validated by the aggregate of human output. Genuine originality, grounded in human reality and shaped by real judgment, will become rarer than ever.

Why design thinking helps (in practical terms)

Design thinking mitigates the risk of losing innovation because it changes what you optimize for and where you look for proof. And it works whether you’re new to AI or already deep in it. It’s less about correcting a habit than building the right one from the start.

It prioritizes problem framing before solution picking

AI will happily answer any question you ask. Design thinking asks whether you’re asking the right question. Reframing is where originality often lives. If you only optimize inside an inherited frame, you’ll get better and better at solving the wrong problem. Reframing is also how you become aware of the walls. The assumptions so embedded in how you’re thinking that you stopped noticing they were assumptions at all.

It replaces “AI says” with “reality says”

If AI becomes the validator, teams outsource judgment. Design thinking moves validation back into the world of prototypes and user behavior. You don’t need permission from a model when you can say, “We built something small and tested it. Here’s what people actually did.” This is also the antidote to the infrastructure problem. When you can demonstrate real-world proof, you don’t need the algorithm’s blessing to proceed.

It trains the habit AI can quietly erode: human agency

One of the subtler effects of AI is the habit of letting the tool do the thinking you used to do yourself. You reach for it before you’ve sat with the problem, before you’ve let the uncomfortable question breathe.

Design thinking builds the opposite habit of mind: observe, synthesize, imagine, test, learn, repeat. These aren’t soft skills. They’re capabilities that keep unexpected ideas alive long enough to find an audience. Design thinking keeps you oriented toward the adjacent possible rather than the algorithmic average.

Two paths (a quick mental model)

If you want a simple operating principle: AI is a powerful teammate. It shouldn’t be the judge. Whether you’re just beginning to use AI or have been using it for a while, it helps to see the two patterns it tends to create and to decide deliberately which one you want.

Two paths to innovation with AI. Click on the graphic to download a PDF.

The first path is faster and cleaner. It’s the one that forms without noticing. Ask, get an answer, justify it, move on. It’s not wrong exactly, but it keeps you inside the walls, optimizing the known and mistaking the plausible for the possible. The second path feels slower at the start. But it protects originality because it stays anchored in human reality where the proof of concept is the world, not just the algorithm.

Innovation isn’t dead. But it’s no longer the default.

In an AI world, “good enough” will be cheap. Polished content, plausible strategies, and decent ideas generated fast. All of it living comfortably within the walls of what’s already been tried because no one pointed it elsewhere.

The advantage shifts to teams who can see past those walls and make leaps toward the adjacent possible. But that capacity doesn’t happen automatically. It has to be trained, and the humanities are where that training has always lived.

Bookshelf of diverse titles.
Innovation comes from studying humans in all their capacities, not just business books.

Shakespeare doesn’t just teach you plot. He teaches you to sit with ambiguity, hold contradictions, and understand humans who don’t behave rationally. Storytelling doesn’t just teach you to communicate. It teaches you to imagine a future that doesn’t exist yet and make others believe in it before evidence arrives. Those aren’t decorative skills. They’re exactly what AI can’t replicate.

That’s why I believe the Freeman College of Management’s Markets, Innovation & Design (MiDE) approach matters now more than ever. Business education rooted in the liberal arts. The skills often dismissed by data-driven business curriculums turn out to be the ones that matter most in a data-driven AI world.

MiDE trains students to work at the intersection of markets, innovation, and human-centered design, so AI becomes a tool inside a disciplined thinking process rather than the thinking process itself.

There is one final irony worth considering. AI itself was born from the human capacity this piece is arguing to protect. Alan Turing imagined a world that didn’t exist yet, with no data to confirm it was possible, and leaped. The question now is whether what human AI visionaries built will illuminate that capacity in us or quietly dim it.

Your turn

As AI moves further into your work: what role do you want it playing in how you think and create? The walls we stopped seeing with social media took years to become invisible. With AI, most of us are still early enough to decide what we let form and what we don’t.

A confession.

I wrote this post with AI assistance for research, drafts, and pushing my thinking further than I could alone. That’s not the issue. At one point, I asked Claude whether a thought was good enough and then caught myself wondering whether it would perform well on LinkedIn. Two different tools. The same instinct. Both optimizing against past data. Neither capable of telling me whether an idea is genuinely new or worth pursuing. I caught myself doing exactly what this post argues against. I’m not immune to this. Neither, I suspect, are you.