Empathy Engine

Human Premium for Solopreneurs (Part B)

🔒 Leader’s Dispatch: Volume 42 (Hybrid Solopreneur, Part 5b of 6 Part Series)

Mark S. Carroll
May 11, 2026

Episode 05b: The Human Premium

The tool that turns invisible judgment into language you can defend

👋 Welcome to my paid subscriber-only edition of Empathy Engine (Leader’s Dispatch). Each week I build evidence-informed tools for serious solo operators, leaders, and team leads who have moved past the hype and are now wrestling with the real operating cost of hybrid AI stacks in contemporary organizations.


Research Binder: the receipts (citations + source notes) are compiled in a PDF at the bottom of this post.


From naming the premium to defending it

Last week I argued that the human premium is not the output. It is the judgment that makes the output worth using. Useful evidence changes the decision. Impressive noise changes the mood in the room. That distinction landed. The question that followed was harder: what do you actually say when a client tests it?

This week is the tool.

Human review is not magic

A sloppy version of last week’s argument would say: AI makes mistakes. Humans catch them. Therefore, humans are the premium. That sounds comforting. Comfort is not clarity.

Research on automation bias suggests that uncritical deference to AI output causes a systematic drop in vigilance, making the human verification layer a critical business function, not a redundant one. But human review only helps when the human is actually reviewing with critical distance. A person who rubber-stamps an AI draft is not adding judgment. They are adding a warm signature to a machine’s confidence. That is not a premium. That is clerical theater.

The better question is not, “Did a human touch this?” The better question is, “What did the human actually change?” Did they verify weak claims? Did they challenge the framing? Did they adapt the answer to the client’s specific situation? Did they own the final call? That is the difference between symbolic review and meaningful review.

The hard part is not claiming you added judgment. The hard part is being honest about whether your judgment changed the output.


When the client asks about AI

At some point, a client may ask whether AI was used. The wrong answer is a panic monologue. The other wrong answer is a slippery dodge wrapped in eight syllables of “augmentation.” The better answer starts with preparation before the call, so you can explain the role of AI and the role of your judgment without sounding like a hostage video with better lighting.

I once had a client who wanted a clean answer on a roadmap decision that was politically loaded before the meeting even started. The room held the same kind of evidence pile I described last week: every function had data, every function could defend its slice, and nobody had agreed on which evidence should govern the decision.

Before I gave a recommendation, I walked through how I was weighting the evidence. The sales escalation was urgent, but not necessarily representative. The support pattern was less dramatic, but more recurring. The executive commitment mattered politically, but it should not be disguised as customer evidence.

That is where a bad consultant can look more attractive than a good one. A bad consultant sells certainty. A better consultant sells calibrated judgment. Serious product decisions carry variance, tradeoffs, timing issues, and risk. The useful answer was not, “Do the feature” or “do not do the feature.” The useful answer was, “There are three viable paths, and the right one depends on which risk you are most willing to carry.” Certainty feels like leadership until reality starts charging interest.

The breakthrough came when they stopped asking, “Which answer is correct?” and started asking, “Which tradeoff are we choosing, and can we defend it?” That was the real work.

A prompt can generate a draft and simulate confidence. What the client still needs is the convergence of judgment, trust, and taste inside a real context. Judgment decides what deserves weight. Trust comes from knowing what was checked and owned. Taste filters the output into something usable for this client, in this moment, with these consequences.

In practice, that convergence looks different depending on the operator. A fractional CMO curating AI strategy options to protect a retainer is doing judgment work. A solo consultant synthesizing market research into actionable product features is doing judgment work. The task is visible. The judgment layer is what makes the task worth paying for.

That is why the worksheet belongs in the system. Not as a prop to wave at the client. Not as proof that the client should pay more. Use it to clarify your thinking before pressure enters the room.

© 2026 Mark S. Carroll