Around 2012, an FSU professor made headlines for including Klout scores as part of his students’ course grades (just 10% of the total). It wasn’t an unreasonable experiment. If the market cares about this number, students should understand how it works. Klout scores shaped influencer partnerships, opened doors to brand deals, and established who counted as a thought leader. The number had real weight.
I remember first encountering Klout at an IMC conference in Mark Schaefer’s keynote. He had recently published Return on Influence. Right after that talk I checked my score and started thinking more intentionally about my online presence. That led to this blog, job opportunities, contributions to respected publications, and eventually my two books.
The number was motivating, but I never got obsessed to the point of doing things just to watch it climb.
What’s interesting in retrospect isn’t that the professor was wrong to take it seriously. It’s how quickly the controversy revealed the problem with legitimizing a metric that could be gamed. When a score carries institutional weight, a market grows around engineering it with purchased followers, inflated engagement, and synthetic reach. The score becomes more important than what it was supposed to represent until it stops representing much at all. Klout shut down in 2018.
I find myself thinking about that moment again as I watch a new metric start to carry similar weight in AI-driven marketing circles. My prediction: it won’t be long before we hear about a professor basing an AI course grade on token usage. Or a manager rewarding the employee who ran the most prompts last quarter. We may already be there.
“When a measure becomes a target, it ceases to be a good measure.” — Marilyn Strathern (popularizing Goodhart’s Law)
Amazon recently mandated that 80% of its engineers use its internal AI coding tool weekly. Within 90 days, at least four production incidents followed, including a 6-hour outage with an estimated 6.3 million lost orders. Amazon attributed the failures to user error. That may be true, but when usage is both the mandate and the metric, the conditions for error multiply. The 80% target became more important than the safeguards surrounding it.
We may be entering the era of what’s called Tokenmaxing. It’s like Klout in a more expensive suit. You’ve probably already seen a version of it on LinkedIn. People sharing screenshots of a dozen AI agent tabs open on their monitor as a signal of how AI-forward they are. It’s the digital equivalent of watching that little orange Klout score climb.

What is Tokenmaxing?
In AI, a token is the basic unit of text a model reads and writes, equal to roughly 0.75 English words on average. For developers, token usage is a meaningful metric. It affects cost, context-window management, and efficiency. Paying attention to it makes sense in that context.
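To make that concrete, here is a minimal back-of-the-envelope sketch. The ~0.75 words-per-token ratio is only a rule of thumb (real tokenizers vary by model), and the function names and the per-million-token price are illustrative, not from any particular API.

```python
# Rough token and cost estimates for English text.
# Assumption: ~0.75 words per token, a common heuristic.
# Exact counts depend on the model's tokenizer.

def estimate_tokens(text: str) -> int:
    """Estimate token count from word count using the ~0.75 words/token heuristic."""
    words = len(text.split())
    return round(words / 0.75)

def estimate_cost(text: str, price_per_million: float) -> float:
    """Estimate spend in dollars at a hypothetical per-million-token price."""
    return estimate_tokens(text) / 1_000_000 * price_per_million

prompt = "Draft five subject lines for our spring campaign."
print(estimate_tokens(prompt))  # 8 words -> about 11 tokens
```

This is the sense in which tokens matter to developers: they map directly to dollars and to how much context fits in a request. Nothing about that arithmetic says anything about the quality of the thinking behind the prompt.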
The problem arises when that same logic migrates into marketing strategy as a proxy for effort or value. Tokenmaxing is what happens when the volume of AI interactions becomes the goal in itself. People burning through a monthly subscription budget, climbing a usage leaderboard, generating thousands of prompt variations to prove they’re being “AI-forward.”
If you aren’t maxing out your tokens, you aren’t really using the tool.
It’s the Use It or Lose It fallacy updated for today. And like Klout, the number starts to feel meaningful. Even if it might not be measuring anything that actually matters for marketers.
What Gets Lost Upstream
The concern isn’t just wasted budget. It’s what happens to the thinking process when output volume becomes the metric. Real creative work has an upstream phase. The slow, often uncomfortable part where you sit with a problem before you know what to do about it. Where you ask why before you ask what’s next.
That phase doesn’t generate tokens. It doesn’t show up in a usage dashboard. But it’s often where the most useful thinking happens, and it’s the phase that gets compressed when we start measuring activity instead of impact.
Not all important thinking happens in the ones and zeros of the digital world.
Mark Schaefer has written about AI turning marketing into “a pandemic of dull.” Everyone converging toward the same outputs, faster. Tokenmaxing feeds that pattern. When we optimize for volume, we tend to get incrementally better at doing the same thing as everyone else, until the work becomes difficult to distinguish from anyone else’s.
Reclaiming the Tiny Experiment
I’ve been spending time with Anne-Laure Le Cunff’s book Tiny Experiments. It’s been a useful counterweight to this kind of thinking. In a design-thinking context, a tiny experiment isn’t a step toward a usage milestone. It’s more like a probe into uncertain territory. A way to follow a question you can’t fully answer yet, and learn something from the attempt.
The difference in practice is meaningful. A tokenmaxing mindset measures success by what it produces: content, variations, volume. A design-thinking mindset measures success by what it discovers.
Sitting in a coffee shop and overhearing how someone actually describes a problem you thought you understood, or visiting a store and watching how a customer navigates a decision in real time, and then designing a small test around what you observed, that’s a tiny experiment in the spirit Le Cunff intends.
The output isn’t content. It’s a new understanding of a real human being that no prompt, and no amount of online data, could have surfaced on its own.
When we set goals around usage, we quietly change what we’re optimizing for, and it stops being an experiment. A tool built for exploration becomes a treadmill for output.
Protecting the Thinking That Happens Before the Prompt
The answer to this isn’t less AI. It’s being more intentional about where the human thinking lives in the process. Klout scores didn’t make anyone a better marketer, but they did shape how influence was perceived and rewarded, until the market for gaming them undermined the whole thing.
Token counts carry a similar risk. They can start to feel like a proxy for strategic thinking without doing any of the work that strategic thinking actually requires.
The most valuable part of the design process is the part that doesn’t cost a cent in API fees. It’s the conjecture, the what-if, the question you sit with before you ever write a prompt. Le Cunff’s framing is most useful when it’s pointed at expanding your thinking rather than refining the path you’re already on. Experiment to learn, not just to optimize.
We didn’t fully absorb the lesson from Klout. It’s worth keeping that in mind as token usage starts to feel like a measure of something meaningful. I feel it too.
There’s a guilt that creeps in when I’m not consuming AI articles, listening to podcasts, or running a deep research report to stay current. AI is moving quickly, and falling behind feels like a real risk. Then I open LinkedIn and see someone’s screenshot of a dozen AI agent tabs and the pressure compounds.
It’s just one more reason to feel like offline, in-person time is something I need to justify rather than protect.
But if I’m sitting in that coffee shop with my laptop open, catching up on AI news, or counting other people’s agent tabs, I’m missing the thing that makes the work better. The overheard conversation, the moment of watching someone struggle with a decision, the small human observation that no model thought to collect.
That upstream time isn’t wasted time. It may be the most innovative thing a marketer can do. It’s just the hardest to justify in a dashboard.
About This Post’s Creation
This post was developed in partnership with Claude. I had initial ideas from observation, articles, podcasts, and reading Le Cunff. Claude helped organize, research further, and refine.

