ELIMINATE OR ELEVATE? What Are You Asking AI to Do?
Apr 07, 2026
The question your organization is asking about AI is determining whether you gain a real advantage — or just faster mediocrity.
A few weeks ago, I was leading an executive briefing when a senior leader asked a question I hear in almost every room I walk into.
"What work can we eliminate with AI?"
It's a reasonable question. It's also the wrong one.
My response: "A better question is — what thinking can you elevate with AI?"
The room went quiet for a moment — the kind of quiet that happens when a reframe lands. Because the distinction between those two questions isn't semantic. It's strategic. And which question your organization is asking right now is likely determining whether your AI investment is producing real advantage or just faster mediocrity.
Artificial intelligence is no longer a competitive differentiator.
Access is widespread. Capabilities are advancing rapidly. Tools that once felt cutting-edge are now embedded into everyday workflows. BCG's 2024 global survey of 1,000 senior executives across 59 countries confirmed the scale of commitment: organizations are all-in.
And yet, 74% of those same companies have yet to generate tangible, scalable value from their AI investments.
If everyone has access to the same technology, why are the outcomes so different?
The answer is not technical. It is cognitive.
THE TRAP MOST ORGANIZATIONS ARE WALKING STRAIGHT INTO
Most organizations still frame AI adoption as a technology transformation. They invest in platforms, governance frameworks, and deployment roadmaps. When results stall — and they do, for most — they look for better models, newer tools, or more training hours.
What they are missing is where the actual problem lives.
BCG's research on what separates AI leaders from AI laggards reveals a pattern that challenges how most companies think about this investment. The organizations achieving 1.5× higher revenue growth and 1.6× greater shareholder returns aren't winning because they have superior technology. They're winning because they've made a fundamentally different investment: roughly 70% of their AI investment goes into people and processes (culture, capability, and change), about 20% into technology and data, and just 10% into the algorithms themselves. The laggards invert that ratio.
This is not an implementation problem. It is a thinking problem.
And there is a name for the cognitive skill that separates the leaders who get this right from those who don't. It's called metacognition. And most executives have never heard of it.
WHAT METACOGNITION ACTUALLY MEANS — AND WHY IT MATTERS NOW
Metacognition, first defined by psychologist John Flavell in the 1970s, refers to the ability to observe, evaluate, and regulate your own thinking. In practical terms: thinking about your thinking.
It sounds academic. The implications are anything but.
A Microsoft Research-led team published one of the most important papers on AI and human cognition at the CHI 2024 conference — the premier gathering of human-computer interaction researchers. Their conclusion was direct: the core usability challenges of generative AI — crafting effective prompts, evaluating outputs critically, iterating toward better results — are fundamentally metacognitive demands. They require users to accurately know what they know, recognize the limits of their own judgment, and adapt their thinking strategy when something isn't working.
This is not a skill most organizations are building. And the consequences of that gap are now measurable.
Researchers at Aalto University and LMU Munich recently published a peer-reviewed study examining what happens when professionals use AI to complete cognitively demanding tasks. AI improved task performance, but participants simultaneously overestimated how well they had performed; the gap between perceived and actual performance was larger than the performance gain itself. More striking: higher AI literacy correlated with lower metacognitive accuracy. People who knew more about AI tools were more confident in their AI-assisted work and less accurate in assessing the quality of that work.
Read that again. The more technically proficient with AI your team becomes, the more likely they are to overestimate the quality of their outputs — unless they've deliberately built the metacognitive discipline to counter it.
THE TWO MODES OF AI ENGAGEMENT
In observing how leaders and knowledge workers interact with AI, a pattern emerges with remarkable consistency.
Most people operate in linear mode: Ask. Receive. Move on.
This approach produces incremental efficiency. It saves time. For the large share of work that is transactional and repeatable, it works fine. But it does not improve thinking. It does not build capability. And at its worst, it creates a dangerous illusion: the sense of sophisticated output without the judgment to evaluate whether that output is actually right.
A smaller group of leaders operates differently. Their interaction looks more like this: frame the question, evaluate the response, identify what's missing or misaligned, refine the approach, pressure-test the conclusion. They use AI not as an answer engine but as a thinking partner. They bring their context, their judgment, and their expertise to interrogate what the AI produces — not just receive it.
This is the difference between elimination and elevation. Between automation and amplification.
Automation asks: What tasks can we eliminate? It treats human expertise as a bottleneck to be bypassed. Amplification asks: What thinking can we elevate? It invites AI into the thinking process and treats human judgment as an asset to be multiplied.
Weak thinking becomes faster. Strong thinking becomes exponential.
That is the dividing line.
THE HIDDEN RISK NO ONE IS TALKING ABOUT
A provocative research paper from MIT Media Lab introduced the concept of "cognitive debt" — the accumulation of cognitive cost when professionals use AI passively rather than actively. In their study, participants who relied on AI to generate work rather than think through it showed markedly weaker neural engagement during the task. More striking: 83% of users could not recall a single sentence from the essay they had just "written" with AI assistance.
This finding should prompt serious reflection in any organization deploying AI at scale. It is still preliminary research, and the full long-term implications remain to be studied. But the directional signal matters: passive AI consumption does not build capability. It may actively erode it.
The antidote isn't less AI. It's more intentional AI. What researchers call "active engagement" — questioning outputs, challenging assumptions, iterating with purpose — is what builds lasting professional capability rather than dependency.
That active engagement is metacognition in practice.
WHY THIS IS A LEADERSHIP RESPONSIBILITY, NOT AN INDIVIDUAL ONE
Here is where the stakes escalate for executives specifically.
How leaders engage with AI shapes how their entire organization engages with it. This is not metaphor — it is organizational behavior. Research from MIT Sloan and Deloitte found that companies where leaders personally model AI adoption are three times more likely to achieve measurable ROI from their AI investments. Teams do not adopt AI based on policy. They adopt it based on what they observe their leaders doing.
If leaders use AI superficially — accepting the first output, delegating thinking to the tool, presenting AI-generated work as their own judgment — their teams will mirror exactly that behavior. If leaders demonstrate visible, iterative, accountable engagement with AI, that behavior becomes the norm.
This is what makes metacognition a leadership multiplier. It doesn't just improve what you produce. It shapes the thinking culture of everyone around you.
When 94% of CEOs identify AI as their top strategic priority, but only 6% of employees feel genuinely confident using AI in their roles, the gap is not a training problem. It is a modeling problem. And it starts at the top.
FIVE DISCIPLINES THAT BUILD METACOGNITIVE CAPABILITY
Unlike technical skills, metacognition can be deliberately developed at the individual and organizational level. These five disciplines are where to start.
- Make Thinking Visible. Stop evaluating only outputs. Evaluate the thinking behind them. When you share a deliverable produced with AI assistance, narrate your process: how you approached the problem, what you questioned in the output, how you refined it. When thinking becomes visible, it becomes teachable — and it becomes the organizational standard.
- Normalize Iteration. High-quality outcomes from AI emerge through multiple cycles of engagement, not a single prompt. Move your culture away from "ask once, accept once" toward "ask, assess, adjust." The first output is a starting point. Treating it as the final answer is where quality dies.
- Shift from Prompting to Dialogue. Teaching better prompting reinforces a transactional mindset. The more durable skill is metacognitive flexibility — the ability to recognize when a strategy isn't working and adapt it in real time. Transform AI from a tool into a collaborator by exploring multiple perspectives, challenging the model's assumptions, and building on prior exchanges.
- Build Evaluation Discipline. Speed is only an advantage if quality follows. Before any AI-assisted output becomes a deliverable, evaluate it against three criteria: Accuracy — is it verifiable? Alignment — does it fit your specific strategic context and objectives? Authenticity — does it reflect your professional standard and voice? Human accountability must be the final filter. Always.
- Close the Loop. After every working cycle, harvest the insights. What worked? What needed refinement? What context should carry forward? This is the mechanism that separates professionals who compound their capability over time from those who start from scratch with every new engagement. It is also the mechanism that makes your AI workspace measurably smarter with every project.
THE LEADERSHIP IMPERATIVE
We are entering a phase where AI capability will continue to expand — and quickly. As it does, the differentiator will shift further from technology and toward the human capabilities that technology cannot replicate: contextual judgment, ethical reasoning, the ability to synthesize ambiguity into clear decisions, and the discipline to know when AI output should be trusted and when it should be challenged.
BCG's 10-20-70 rule isn't just a resource allocation framework. It's a thesis about where competitive advantage actually lives: 70% of the work of capturing AI's value is human work. The leaders who recognize that, and build that capability intentionally, systematically, and visibly, will define the next generation of high-performing organizations.
AI will not reward those who use it most. It will reward those who think with it best.
Your expertise and your judgment are not legacy assets from a pre-AI world. In an era of abundant intelligence, the scarce resource is not access to information. It is the human capacity to interpret it, challenge it, and apply it to problems that actually matter.
So the next time someone in your organization asks what work AI can eliminate — redirect the question.
Ask what thinking it can elevate.
That's where the advantage lives.
Scott Wise brings 30 years of transformation consulting experience to the most important leadership challenge of our time. Author of AI4Leaders: Amplify Your Impact and certified in AI by MIT and Oxford, he helps executive teams and organizations move from AI-Curious to AI-Capable. Explore his work at ScottWise.ai.