The Token Trap: Why “Tokenmaxing” in AI is the New Klout Score


Around 2012, a professor at Florida State University made national headlines for including Klout scores as part of his students’ social media marketing course grades (just 10% of the total). It wasn’t an unreasonable experiment. The logic was straightforward: if the market cares about this number, students should understand how it works. And the market really did care. Klout scores shaped influencer partnerships, opened doors to brand deals, and helped establish who counted as a thought leader worth listening to. The number had real weight.

I remember first encountering Klout at an IMC conference where Mark Schaefer was keynoting. He had recently published Return on Influence, and immediately after that talk I checked my own score and started thinking more intentionally about my online presence. That led to this blog, to job opportunities, to contributing articles to respected publications and eventually my two books.

The number was motivating, but I never got obsessed with it to the point of doing things just to watch it climb.

What’s interesting in retrospect isn’t that the professor was wrong to take it seriously. It’s how quickly the controversy revealed the problem with legitimizing a metric that could be gamed. When a score carries institutional weight, a market grows up around engineering it. Purchased followers, inflated engagement, synthetic reach. The score becomes more important than what it was supposed to represent. Eventually it stopped representing much at all. Klout shut down in 2018.

I find myself thinking about that moment again as I watch a new metric start to carry similar weight in AI-driven marketing circles. My prediction: it won’t be long before we hear about a professor basing an AI course grade on token usage. Or a manager rewarding the employee who ran the most prompts last quarter. We may already be there.

“When a measure becomes a target, it ceases to be a good measure.” — Marilyn Strathern (popularizing Goodhart’s Law)

Amazon recently mandated that 80% of its engineers use its internal AI coding tool weekly. Within 90 days, at least four production incidents occurred, including a 6-hour outage with an estimated 6.3 million lost orders. Amazon attributed the failures to user error. That may be true, but when usage is both the mandate and the metric, the conditions for error multiply. The 80% target became more important than the safeguards surrounding it.

We may be entering the era of what’s being called Tokenmaxing. It looks like Klout in a more expensive suit. You’ve probably already seen a version of it on LinkedIn. People sharing screenshots of a dozen or two dozen AI agent tabs open on their monitor as a signal of how AI-forward they are. It’s the digital equivalent of watching that little orange Klout score climb.

MIDE Studio Brainstorming With Post-its
Should we feel guilty for thinking with our hands and allowing time for inspiration and the slow hunch? This was from a brainstorming session on Post-it notes that my students had in the Markets, Innovation & Design studio.

What is Tokenmaxing?

In AI, a token is a small chunk of text (roughly 0.75 words, or about four characters, on average), and it is the unit by which model usage and cost are metered. For developers, token usage is a genuinely meaningful metric. It affects cost, context window management, and efficiency. Paying attention to it makes sense in that context.
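For a sense of scale, the rough words-to-tokens heuristic can be sketched in a few lines. This is a back-of-envelope estimate only; real tokenizers vary by model, and the per-1,000-token price below is a made-up placeholder, not an actual rate.

```python
# Ballpark token and cost estimates using the ~0.75 words-per-token heuristic.
# Actual tokenizers (BPE and similar) split text differently per model.

def estimate_tokens(text: str) -> int:
    """Approximate token count: about 0.75 words per token on average."""
    words = len(text.split())
    return round(words / 0.75)

def estimate_cost(text: str, price_per_1k_tokens: float) -> float:
    """Back-of-envelope spend; the price argument is a placeholder."""
    return estimate_tokens(text) / 1000 * price_per_1k_tokens
```

A 750-word blog draft, by this heuristic, comes out to roughly 1,000 tokens, which is why token counts climb so quickly once volume becomes the goal.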

The problem arises when that same logic migrates into marketing strategy as a proxy for effort or value. Tokenmaxing is what happens when the volume of AI interactions becomes the goal in itself. People burning through a monthly subscription budget, climbing a usage leaderboard, generating thousands of prompt variations to prove they're being "AI-forward."

If you aren’t maxing out your tokens, the thinking goes, you aren’t really using the tool.

It’s the Use It or Lose It fallacy updated for the 21st century. And like Klout, the number starts to feel meaningful. Even if it might not be measuring anything that actually matters for marketers.

What Gets Lost Upstream

The concern here isn’t just budget waste. It’s what happens to the thinking process when output volume becomes the metric. Real creative work has an upstream phase. The slow, often uncomfortable part where you sit with a problem before you know what to do about it. Where you ask why before you ask what’s next.

That phase doesn’t generate tokens. It doesn’t show up in a usage dashboard. But it’s often where the most useful thinking happens, and it’s exactly the phase that gets compressed when we start measuring activity instead of impact.

Not all important thinking happens in the ones and zeros of the digital world.

Mark Schaefer has written about AI turning marketing into “a pandemic of dull.” Everyone converging toward the same outputs, faster. Tokenmaxing feeds that pattern. When we optimize for volume, we tend to get incrementally better at doing the same thing as everyone else, until the work becomes difficult to distinguish from anyone else’s.

Reclaiming the Tiny Experiment

I’ve been spending time with Anne-Laure Le Cunff’s book Tiny Experiments. It’s been a useful counterweight to this kind of thinking. In a design-thinking context, a tiny experiment isn’t a step toward a usage milestone. It’s more like a probe into uncertain territory. A way to follow a question you can’t fully answer yet, and learn something from the attempt.

The difference in practice is meaningful. A tokenmaxing mindset measures success by what it produces: content, variations, volume. A design-thinking mindset measures success by what it discovers.

Sitting in a coffee shop and overhearing how someone actually describes a problem you thought you understood, or visiting a store and watching how a customer navigates a decision in real time, and then designing a small test around what you observed, that’s a tiny experiment in the spirit Le Cunff intends.

The output isn’t content. It’s a new understanding of a real human being that no prompt, and no amount of online data, could have surfaced on its own.

When we set goals around usage, we quietly change what we’re optimizing for, and the experiment stops being an experiment. A tool built for exploration becomes a treadmill for output.

Protecting the Thinking That Happens Before the Prompt

The answer to this isn’t less AI. It’s being more intentional about where the human thinking lives in the process. Klout scores didn’t make anyone a better marketer, but they did shape how influence was perceived and rewarded, until the market for gaming them undermined the whole thing.

Token counts carry a similar risk. They can start to feel like a proxy for strategic thinking without doing any of the work that strategic thinking actually requires.

The most valuable part of the design process is the part that doesn’t cost a cent in API fees. It’s the conjecture, the what-if, the question you sit with before you ever write a prompt. Le Cunff’s framing is most useful when it’s pointed at expanding your thinking rather than refining the path you’re already on. Experiment to learn, not just to optimize. We didn’t fully absorb the lesson Klout offered. It’s worth keeping that in mind as token usage starts to feel like a measure of something meaningful.

I’ll admit I feel it too. There’s a low-grade guilt that creeps in when I’m not consuming the latest AI articles, listening to another podcast about where the technology is heading, or running a deep research report to stay current. AI is moving quickly, and falling behind feels like a real risk. Then I open LinkedIn and see someone’s screenshot of a dozen AI agent tabs and the pressure compounds.

It’s just one more reason to feel like offline, in-person time is something I need to justify rather than protect.

But if I’m sitting in that coffee shop with my laptop open, catching up on AI news, or counting other people’s agent tabs, I’m probably missing the thing that actually makes the work better. The overheard conversation, the moment of watching someone struggle with a decision, the small human observation that no model thought to collect because nobody knew it mattered yet.

That upstream time isn’t wasted time. It might be the most productive thing a marketer can do right now. It’s just the hardest to justify in a dashboard.

About This Post’s Creation

This post was developed in partnership with Claude. I had initial ideas from observation, articles, and podcasts, plus reading Le Cunff's Tiny Experiments. Claude helped organize, research further, and refine.