The Thing You Know
Here's what's actually happening.
AI didn't suddenly make humans obsolete. What it did was destroy the market value of one specific thing: producing passable first-draft output. Writing a decent email. Summarizing a report. Generating code that mostly works. Building a slide deck. Drafting a plan.
But here's what AI cannot do. It cannot know what should exist in the first place. It can produce anything you describe, but it cannot decide what's worth producing. That decision requires context, values, and stakes — things that come from living, not computing.
It can write a thousand emails. It cannot know which email needs to be sent, to whom, and why this one matters. It can generate a business plan. It cannot know whether the business should exist. Generation is cheap now; judgment about what to generate is the scarce resource. That's what you carry.
That capacity — knowing what should exist — is yours. It comes from living in the world, dealing with real situations, and building the kind of understanding that only comes from experience, context, and giving a damn about the outcome.
A calculator didn't destroy math. It destroyed the market value of arithmetic. The skill didn't become less real. It became less rare. That's the pattern every time: the tool commoditizes the execution, and the value migrates to whoever knows what to execute and why. The people who understood when to multiply and why were more valuable than ever. They just needed to learn to use the calculator.
That's where we are now. Except it's not math. It's everything.
of knowledge workers say they use AI weekly — but fewer than 15% report trusting the output without editing.
What decisions at your work or school could not be made by someone who just arrived?
You already know something AI doesn't.
Everybody carries knowledge that didn't come from a textbook. Maybe you know how to read a room. Maybe you know how to keep three kids fed on nothing. Maybe you know how to navigate a bureaucracy that was designed to make you give up. Maybe you can look at a dataset and feel that something's off before you can explain why.
Whatever it is — you know it in your bones. You built that knowledge by living.
Now we're going to test it. Right here.
Philosopher Michael Polanyi called this "tacit knowledge" — the things we know but cannot fully explain.
Here’s the first thing you need to know: your instinct right now will be to think about what you know as a topic. “I know about nursing.” “I know about cars.” “I know about managing people.”
That’s the layer AI handles just fine. Ask it about nursing or cars or management and it’ll give you something decent. Maybe even impressive. The gap doesn’t live there.
Go deeper. Think about a specific moment — a time you made a call that someone without your experience would have gotten wrong.
Maybe you’re a nurse who looked at a patient and knew something was off before the vitals confirmed it. Maybe you’re a parent who met a daycare worker and felt, in your gut, that something wasn’t right. Maybe you’re a mechanic who heard a noise nobody else heard. A manager who sensed a team was about to fall apart. A teacher who knew a student was in crisis, not lazy. Someone who navigated a system that was designed to make you give up — and found the one path that actually worked.
Whatever it is — that’s your thing. Not the topic. The moment. The judgment call.
Below is a live AI. Describe that situation — the circumstances, the stakes, what you were reading that others wouldn’t have seen. Then ask it what it would have advised. Watch what comes back.
That gap — between what the AI produced and what you know from living — is not a flaw in you. That's the thing about you that has value.
What you just found is judgment. Contextual knowledge. The kind of understanding that doesn't exist in any dataset because it comes from being a person in a specific place with specific experience.
The AI can produce output on any topic. It cannot know whether that output actually works in the real world. You can. That's not a small thing. That's the whole thing.
Harvard Business School found that consultants using AI were 40% more accurate on tasks within its range — but 23% less accurate when the task required judgment AI couldn't provide.
Now push back. Tell the AI what it got wrong. Give it your context, your constraints, the reality it's missing. Make it try again.
Watch what happens. Does it get closer to your reality? Or does it just generate a more confident version of the same shallow take? Your ability to tell the difference is the skill.
Two things, and both matter.
First: your ability to tell the difference between a genuine improvement and a polished version of the same miss — that’s one of the most economically valuable skills you can develop right now. Someone in every organization, every school, every government office is going to trust AI output that shouldn’t be trusted. The person who can catch that is worth more than the person who generated it.
Second: to push back, you had to explain what you know and why it matters. You had to put language around expertise that usually just lives in your head. That’s not a side effect of this exercise. It’s the point.
How many people in your workplace or school would accept AI output at face value on this same topic? What would they miss?
Now a different kind of question. This one isn’t about what you know — it’s about how you came to know it.
How did you actually learn the thing you know? Not “I studied it.” The real way. The failures, the mentors, the moments where something clicked because you’d been in the situation enough times to finally see what was happening.
Most of the knowledge that actually matters was built through exposure, practice, consequence, and time. There’s no shortcut.
Tell the AI your story. How did the knowledge get into you?
Psychologist K. Anders Ericsson spent decades studying expertise. His key finding: it takes structured, deliberate practice, not just time.
Exposure → Practice → Failure → Judgment
If AI removes the first three steps, the last one never forms.
You just traced the pipeline that built your judgment. Exposure, practice, failure, time. There’s no shortcut and no dataset that contains it.
Here’s why this matters: that pipeline — the process by which people develop judgment through experience — is exactly what’s being disrupted. When AI handles the entry-level work, the junior tasks, the first drafts, it also removes the experience that used to build expertise. If junior work is how people become seniors, and AI absorbs the junior work — where do the next seniors come from? This isn’t hypothetical. It’s already happening in writing, design, and code.
Is the path that built your knowledge still available to the next person? Or is it being closed off?
The world is being reorganized. You can either let that happen around you or you can see it for what it is and move.
Time to flip it. Everything up to now, you’ve been evaluating what the AI produces. Now you’re going to direct it.
Think of something you actually need. Not a test — something real. A plan, a document, a solution to a real problem.
Don’t ask. Direct. Tell the AI what you need, give it the context and constraints, and set the standard. If it misses, tell it what’s wrong and make it try again. Iterate until it meets your standard. You’re the one who knows what should exist. Make it build what you see.
Notice the difference. In Level 1, you tested. In Level 2, you pushed back. Just now, you led. You had a vision of what should exist, and you used a powerful tool to build it.
That’s the skill. Not prompting. Not “using AI.” Directing. Bringing your knowledge, your judgment, your sense of what’s needed, and using a powerful tool to execute on it faster than you could alone.
The person who can do this is not being replaced by AI. They are the reason AI produces anything worth a damn.
This is what researchers call “directive capacity” — the ability to specify what should exist and hold the output to a standard. It’s the skill that scales with AI, not against it.
One more thing, and it’s the most important thing.
When a new technology makes work faster, someone captures that value. It might be you — same pay, less effort. It might be your employer — more output, same payroll. It might be an investor. It might be a customer.
This is not a technology question. This is the oldest question in economics, and the answer has always depended on who has power and who doesn’t. Every productivity revolution — the loom, the assembly line, the computer — created enormous value. The fight was always about who captured it. AI is the same fight, moving faster.
The knowledge you carry — the thing you’ve been exploring through this whole experience — is real and it has value. But value and compensation are not the same thing. A nurse’s judgment at 3am is invaluable; their paycheck doesn’t reflect that. AI doesn’t close this gap — it widens it, unless the people with the knowledge also have the leverage.
Let’s think this through together. In your world — who captures the value when things get more efficient? And what does that mean for you?
McKinsey’s estimate of annual value AI could add to the global economy. The question isn’t whether value is being created — it’s who receives it.
If your employer adopted AI tomorrow and your output doubled — would your pay double? Who would capture the difference?
That conversation? That’s what AI is actually for. Not replacing your judgment — extending it. Helping you think through real problems that matter to your life.
You just used AI as a thinking partner to work through a systemic question about your own situation. That’s not a parlor trick. That’s the skill.
So here’s where you are.
You know something AI doesn’t. You’ve tested that — not as a theory, but right here, live, with the tool in front of you. You’ve felt the gap between what it generates and what you know.
You’ve pushed back on it, traced the path that built your knowledge, directed it to make something real, and used it to think through a question that matters to your life.
And you’ve looked at the economic reality: that the value of what you know and the compensation you receive for it are two different things, connected by systems you can either understand or be subject to.
What you just did wasn’t an AI tutorial.
You found the foundation for every AI skill that actually matters: knowing what should exist, recognizing when something’s off, directing a powerful tool toward something real, and thinking through systemic questions with genuine honesty.
The difference between using AI and directing AI is the difference between asking and knowing what to ask for. You just practiced the second one.
Door 2 goes deeper — into what happens when the answer isn’t simple.
Before you go
What do you see now that you didn’t see before?
Not about AI. About yourself — what you carry, what it’s worth, and how the world around you is or isn’t recognizing it.
And one more thing
What do you need now?
Not what should you learn. What do you actually need? More access to tools? Help with a specific situation? A community? Something we haven’t thought of?
Stay connected
Want to hear when the next door opens?
We send about one email per month — when a new door is ready. That’s it.