When AI Creates Margin, Who Gets It?


AI is being sold to businesses as a way to improve margin. Many employees are embracing it for the same reason. The problem is that businesses usually mean profit margin, while employees mean margin in their lives.

Same word. Different dream.

That may become one of the biggest workplace tensions of the next few years.

I learned this long before AI.

Back when I worked in the high-pressure world of advertising as a copywriter and creative director, my art director and I would sometimes leave the building and go to Starbucks. Not to waste time. To create margin.

We knew that if we stayed in the office, with people constantly checking on us, asking for things, and wanting updates, we would not have the mental room to come up with the big ideas everyone wanted from us.

That coffee shop time was not a break from productivity. It was productivity.

Some of our most creative moments were not spent typing at a computer. They were spent leaning back in a chair, getting some distance, and talking and sketching our way toward a better idea. That’s something I think many organizations still miss, especially now in the rush to adopt AI.

Empty Beach
Where this post started. Room to pause. Room to reflect. Room to dream.

When AI creates efficiency, who keeps it?

For businesses, margin means more output from the same people. Faster turnaround. Lower labor costs. Less slack in the system.

For employees, margin means less drudgery. Fewer late nights. More breathing room to think, recover, and have some life left at the end of the day.

Once AI creates efficiency, somebody decides where it goes. Back to the human being? Or right back into the machine of work?

Recently, I was able to get away. I had time to enjoy nature, spend time with family, and read something not related to work. I got caught up in the characters and story of a novel. When I came back, I felt refreshed and inspired.

And sadly, I also felt the need to justify that time by telling myself it gave me some really good ideas for work.

Somehow even breathing room can start to feel like something you have to justify.

Efficiency can quietly consume margin

If AI removes low-value tasks and gives people room for better judgment and deeper focus, that’s progress. But that margin can disappear in two ways. Management can fill the gap with more tasks, tighter deadlines, and leaner staffing. Employees can fill it themselves, because many of us have been conditioned to treat freed time as space for more work.

That may look like progress on paper, but in practice it can become just another way work expands to fill every available space. Parkinson’s Law applies to AI. When tools create margin, the instinct is to fill it.

Evidence is already mounting. A UC Berkeley study tracking AI adoption inside a real company found that even without management pressure, workers filled every hour AI freed up with more work. Deep-focus time fell and cognitive fatigue rose. There’s even a name for it: “AI brain fry.” Whatever you call it, it’s another example of a tool promising margin but quietly consuming it instead.

We’ve seen this before. Email was supposed to make communication easier. Smartphones were supposed to make work more flexible. They did both, but they also made work more constant and harder to leave behind. AI could easily follow the same path.

The real opportunity of AI isn’t just to use it to do work faster. It’s to decide what kind of margin is worth protecting.

Human margin is not waste

In my classes, we use Steven Johnson’s Where Good Ideas Come From. His study of innovators throughout history points to conditions that produce breakthrough ideas: liquid networks, the adjacent possible, error, serendipity, and the slow hunch. None of those happen easily when every minute is scheduled, measured, and filled. They need margin. Time for reflection.

I’ve seen this in my own life too. Some of my best ideas have come in places that don’t always look productive: the break room, between conference sessions, at a social hour, in the casual conversation between one thing and the next. That’s often where ideas connect.

Organizations say they want creativity, insight, and innovation. Then they build systems that leave no room for the very conditions that make those things possible.

You can’t squeeze people into breakthrough thinking.

Margin is not always waste. Sometimes it’s the condition that makes better work possible.

A better question for leaders

Some push back on this. They say pressure is the point. Constraint forces creativity. Urgency eliminates mediocrity. There’s evidence for it. Companies built on relentless intensity have produced breakthroughs that more relaxed organizations never did.

While AI is being sold as a tool to give time back, some of the very companies building AI post job descriptions glamorizing 70-plus-hour weeks or a 996 schedule. The technology that promises margin is arriving with a culture that demands you surrender it.

But that model tends to work in specific conditions: mission-driven people who opted in, often early in their careers, working on outsized problems they personally find worth the sacrifice. It also has real costs: attrition, burnout, and the quiet departure of experienced people who have other options.

Importantly, it misses what AI actually changes. A high-pressure model squeezes harder to get more. AI removes the need to squeeze people just to get routine work done.

The question isn’t whether to demand high performance. It is whether human qualities AI cannot replicate, such as judgment, creativity, and strategic thinking, flourish under constant pressure or require something different.

The companies that benefit most from AI over time may not be chasing maximum short-term output. I’d bet on the ones that use part of the gain to create better conditions for human performance: more focus, less drudgery, better decisions, more sustainable energy.

A healthier AI model could look more like this: define the work clearly, define what done well looks like, and let people keep some of the margin they create – for better work, and for more life.

Who gets the margin

The deeper issue isn’t that employers and employees want opposite things. Often, they both want better results and a sense that work is making a meaningful difference. Tension comes from a misunderstanding about how those outcomes are produced. Work culture often treats margin as waste to eliminate rather than the space needed to think, care, recover, and do meaningful work well.

That misunderstanding leads to lost motivation. Worker morale matters: when people lose heart, productivity erodes, and eventually the best people leave. AI didn’t create that misunderstanding.

The real negotiation happening around AI at work isn’t just about efficiency or adoption.

It’s about margin. Who captures it. Who benefits from it. Who gets the breathing room.

At the agency, I used to run during my lunch hour. It relieved stress, helped keep me healthy, and didn’t take away from family time. Anyone familiar with the creative process knows downtime matters. My subconscious mind kept working on client problems and projects. More often than not, I came back from those runs with new ideas for the work I was doing.

That doesn’t mean I never worked long hours. Big pitches and tight deadlines sometimes meant late nights, work after the kids were in bed, and Saturdays in the office. That came with the business. But there’s a difference between working hard when the work truly calls for it and treating constant overwork as proof of commitment.

After several years, my boss called me into his office. He said that my art director and I had the best work in the agency. Our work won creative awards, produced profit for our clients, and we always met deadlines while handling more clients and projects than the other teams.

Then he said, “But… you run at lunch and go home at night.”

He didn’t understand that the margin was part of what produced the results he was getting.

Shortly after that meeting, my art director and I both left for other opportunities.

The future of AI at work may not come down to the technology itself. It may come down to who gets the margin.

This post was drafted with the assistance of ChatGPT and Claude. The ideas, experiences, and opinions are my own.

How Innovation Disappears Without Anyone Noticing. AI quietly becomes the judge of what’s worth trying.


Most of what I hear about AI in business and education falls into two camps. One wants to go all in. Adopt everything, automate everything, let the model drive. The other wants to ban it. Block it, police it, treat it like a threat to learning, work, and trust.

I understand both instincts. But both are incomplete. The biggest risk isn’t solved by pretending AI doesn’t exist, and it isn’t solved by letting AI become the default operating system for creativity. The real danger is quieter and doesn’t require advanced AI use to feel it.

We’ve been here before. Social media promised connection and creativity. For a while, it delivered. What it quietly took in return (depth of attention, tolerance for uncertainty, ability to sit with an idea before seeking validation) didn’t disappear in a single moment.

It eroded in a million rational decisions until dependency on external validation became walls we stopped seeing.

We watched it happen to our teenagers first (likes, follower counts, compulsive checking, anxiety when it didn’t come). A generation learned to measure the worth of a thought, a face, a body, a life by how many strangers approved of it. And not just one generation. How often do I wonder whether people will “like” this when I post it on LinkedIn?

Some of what was lost with social media (certain habits of mind, expectations of privacy, unmediated experience) feels unrecoverable. What about confidence, patience, trust in instinct, feeling comfortable in your own skin? We didn’t notice them leaving. It’s a pattern worth considering as AI moves into the center of how we work, create, and decide.

The real danger isn’t takeover. It’s permission.

Innovation doesn’t die because AI becomes “too smart.” It dies when AI becomes the permission structure.

When AI is the first draft and the final judge, teams begin to internalize a new rule. “If the model didn’t suggest it, it’s probably not worth trying.” That sounds dramatic, but in practice it shows up as premature convergence. Ideas get smoothed down into something plausible, defensible, and safe. AI makes the walls feel like the edge of possible.

Creative leaps disappear, not because people lack imagination, but because they stop trusting imagination that can’t be justified by the machine.

There’s a second, less visible danger that compounds this one. It’s about what happens to the ideas that do get conceived once they enter a world where AI drives gatekeeping. Not just on publishing platforms and in funding algorithms, but inside organizations.

It’s a manager who runs a proposal through an AI tool before greenlighting it. A client who asks for validation before approving an original direction. A colleague who reports back that an idea “scored low on feasibility.” A hiring process that screens for a pattern of past success rather than the shape of future potential.

At every layer, the filter is the same: does this match what has worked before? Radical ideas may still get born. The problem is that they have a harder time surviving the rooms they have to pass through, thanks to the invisible walls AI creates. The threat to innovation is both psychological and infrastructural.

The third way: humans in the loop, with a method

This is why I’m convinced the answer isn’t all-out AI or zero AI. The answer is AI + human judgment, guided by a method that protects what AI can’t replace.

Design thinking can be that method. “Design” is most associated with how things look. But “design thinking” is a human-centered problem-solving process. It’s a disciplined way of framing questions, testing assumptions, and learning from reality that applies to anything: a product, a strategy, a curriculum, a client relationship, a hiring decision. This includes problems where no data exists. Where the market hasn’t formed, the behavior hasn’t been measured, the future hasn’t happened.

AI requires data to recommend. Humans can imagine a world that doesn’t exist and work backwards to it.

That capacity, to conceive of something genuinely new and then build toward it, is what the permission structure quietly erodes. Here is why design thinking works. It forces a loop that AI can’t complete on its own:

Frame → Prototype → Test → Learn

A prototype can be a napkin sketch – just enough to get a reaction to a concept or idea.

Crucially, this process keeps validation anchored in human reality rather than model confidence. It doesn’t just keep humans in the loop. It gives humans a reason to trust their own judgment over the machine’s. And it helps you see the walls, which is a prerequisite to getting past them.

Where AI flattens innovation

AI systems trained on existing data are fundamentally prediction engines, built to find the most plausible path through what already exists. That makes them ill-suited for the leap to the adjacent possible: the space beyond the walls of the known, close enough to reach but too original for any model to have predicted.

Anyone who’s studied business recognizes the pattern. Organizations built on the early innovations of visionary founders settle into small improvements and eked-out efficiencies, only to be upset by a startup that sees past the corporate walls.

AI, deployed the wrong way, can accelerate that stagnation. LLM output gets substituted for human contact: personas stand in for people never met, best practices paper over lived tensions, staid concepts fit in instead of standing out, and strategies sound smart yet feel interchangeable.

AI itself is novel and appears to be fast and cheap. But by definition it converges toward the middle, toward the expected, toward what has already been validated by the aggregate of human output. Genuine originality, grounded in human reality and shaped by real judgment, will become rarer than ever.

Why design thinking helps (in practical terms)

Design thinking mitigates the risk of lost innovation because it changes what you optimize for and where you look for proof. And it works whether you’re new to AI or already deep in it. It’s less about correcting a habit than building the right one from the start.

It prioritizes problem framing before solution picking

AI will happily answer any question you ask. Design thinking asks whether you’re asking the right question. Reframing is where originality often lives. If you only optimize inside an inherited frame, you’ll get better and better at solving the wrong problem. Reframing is also how you become aware of walls: the assumptions so embedded in how you’re thinking that you stopped noticing they were assumptions at all.

It replaces “AI says” with “reality says”

If AI becomes the validator, teams outsource judgment. Design thinking moves validation back into the world of prototypes and user behavior. You don’t need permission from a model when you can say, “We built something small and tested it. Here’s what people actually did.” This is also the antidote to the infrastructure problem. When you can demonstrate real-world proof, you don’t need the algorithm’s blessing to proceed.

It trains the habit AI can quietly erode: human agency

One of the subtler effects of AI is the habit of letting the tool do the thinking you used to do yourself. You reach for it before you’ve sat with the problem, before you’ve let the uncomfortable question breathe.

Design thinking builds the opposite habit of mind: observe, synthesize, imagine, test, learn, repeat. These aren’t soft skills. They’re capabilities that keep unexpected ideas alive long enough to find an audience. Design thinking keeps you oriented toward the adjacent possible rather than the algorithmic average.

Two paths (a quick mental model)

If you want a simple operating principle: AI is a powerful teammate. It shouldn’t be the judge. Whether you’re just beginning to use AI or have been using it for a while, it helps to see the two patterns it tends to create and to decide deliberately which one you want.

Two paths to innovation with AI.

The first path is faster and cleaner. It’s the one that forms without noticing. Ask, get an answer, justify it, move on. It’s not wrong exactly, but it keeps you inside the walls, optimizing the known and mistaking the plausible for the possible. The second path feels slower at the start. But it protects originality because it stays anchored in human reality where the proof of concept is the world, not just the algorithm.

Innovation isn’t dead. But it’s no longer default.

In an AI world, “good enough” will be cheap. Polished content, plausible strategies, and decent ideas generated fast. All of it living comfortably within the walls of what’s already been tried because no one pointed it elsewhere.

The advantage shifts to teams who can see past those walls and make leaps toward the adjacent possible. But that capacity doesn’t develop automatically. It has to be trained, and the humanities are where that training has always lived.

Bookshelf of diverse titles.
Innovation comes from studying humans in all their capacities, not just from business books.

Shakespeare doesn’t just teach you plot. He teaches you to sit with ambiguity, hold contradictions, and understand humans who don’t behave rationally. Storytelling doesn’t just teach you to communicate. It teaches you to imagine a future that doesn’t exist yet and make others believe in it before evidence arrives. Those aren’t decorative skills. They’re exactly what AI can’t replicate.

That’s why I believe the Freeman College of Management’s Markets, Innovation & Design (MiDE) approach matters now more than ever. It’s business education rooted in the liberal arts. The skills often dismissed by data-driven business curricula turn out to be the ones that matter most in a data-driven AI world.

MiDE trains students to work at the intersection of markets, innovation, and human-centered design, so AI becomes a tool inside a disciplined thinking process rather than the thinking process itself.

There is one final irony worth considering. AI itself was born from the very human capacity this piece argues we should protect. Alan Turing imagined a world that didn’t exist yet, with no data to confirm it was possible, and leaped. The question now is whether what AI’s human visionaries built will illuminate that capacity in us or quietly dim it.

Your turn

As AI moves further into your work: what role do you want it playing in how you think and create? The walls we stopped seeing with social media took years to become invisible. With AI, most of us are still early enough to decide what we let form and what we don’t.

A confession.

I wrote this post with AI assistance for research, drafts, and pushing my thinking further than I could alone. That’s not the issue. At one point, I asked Claude whether a thought was good enough and then caught myself wondering whether it would perform well on LinkedIn. Two different tools. The same instinct. Both optimizing against past data. Neither capable of telling me whether an idea is genuinely new or worth pursuing. I caught myself doing exactly what this post argues against. I’m not immune to this. Neither, I suspect, are you.