by Robyn Bolton | Feb 2, 2026 | AI, Leadership, Strategic Foresight, Strategy
It was a race. And the whole world was watching.
In 1911, Captain Robert Scott set out to reach the South Pole. He’d been to Antarctica before, and because of his past success he had more funding, more expertise, and more experience. He had all the equipment he needed.
Racing him to fame, fortune and glory was Norwegian Roald Amundsen. Originally heading to the North Pole, he turned around when he learned that Robert Peary had beaten him there. He had dogs and skis, equipment perfect for the Arctic but unproven in Antarctica.
Amundsen won the race, by over a month.
Scott and his crew reached the Pole five weeks too late, then died on the return journey, just 11 miles from the supply depot that could have saved them.
When the Playbook Stops Working
Scott wasn’t guessing. He’d tested motor sledges in the Alps. He’d seen ponies work on a previous Antarctic expedition. He built a plan around the best available equipment and the general playbook that had served British expeditions for decades: horses and motors move heavy loads, so use horses and motors.
It just wasn’t right for Antarctica. The motors broke down in the cold. The ponies sank through the ice. The plan that looked solid on paper fell apart the moment it met the actual environment it had to operate in.
The same thing is happening today with AI.
For decades, whenever a new technology emerged, executives followed the same familiar playbook: assess the opportunity, build a business case, plan the rollout, execute.
And for decades it worked. Cloud migrations and ERP implementations were architectural changes to known processes with predictable outcomes. As time went on, information grew more reliable, timelines became better understood, and the playbook hardened.
AI is different. Executives are so focused on picking the right AI tools and building the right infrastructure that they aren’t thinking about what happens when they hit the ice. Even if the technology works as designed, you have no idea whether it will deliver the intended results or create a ripple of unintended consequences that paralyze your business and put egg on your face.
Diagnose Before You Prescribe
The circumstances of AI are different too, and that requires a new playbook. Make that playbooks. Picking the right playbook requires something my clients and I call Calibrated Decision Design.
We start by asking how long it will take to realize the ultimate goals of the investment. Do we need to break even this year, or is this a multi-year bet where results slowly roll in? Most teams have a sense of this, so we can move quickly to the next, much harder question.
What do we know and what do we believe? This is where most teams and AI implementations fail. To seem confident and indispensable, people present hypotheses as if they were facts, resulting in decisions based on single data points or best guesses. The result is a confident decision destined to crumble.
Where you land on these two axes determines your playbook. Apply the wrong one and you’ll either waste money on over-analysis or burn through budget on premature action.
Pick from the Four Playbooks
Go NOW!: You have the facts and need results now. Stop deliberating. Execute.
Predictable Planning: You have confidence in the outcome, but the payoff takes patience. Build a flexible strategy and operational plan to stay responsive as things progress.
Discovery Planning: You need results fast, but you don’t have proof your plan will work. Run small, fast experiments before scaling anything.
Resilient Strategy: The time horizon is long and you’re short on facts. The worst thing you can do is go all in. Instead, envision multiple futures, identify early warning signs, find commonalities and prepare a strategy that can pivot.
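For readers who think in code, the two axes reduce to a simple lookup. This is my own toy sketch of the matrix described above, not the author’s tooling; the axis labels are illustrative shorthand:

```python
# Toy sketch of the Calibrated Decision Design matrix (illustrative only):
# two axes -- time horizon and evidence quality -- map to one of four playbooks.

def pick_playbook(horizon: str, evidence: str) -> str:
    """horizon: 'short' (need results now) or 'long' (multi-year bet).
    evidence: 'facts' (proven) or 'beliefs' (unproven hypotheses)."""
    matrix = {
        ("short", "facts"): "Go NOW!",
        ("long", "facts"): "Predictable Planning",
        ("short", "beliefs"): "Discovery Planning",
        ("long", "beliefs"): "Resilient Strategy",
    }
    return matrix[(horizon, evidence)]

print(pick_playbook("long", "beliefs"))  # Resilient Strategy
```

The point of the sketch is the diagnosis, not the code: you must honestly place yourself on both axes before the lookup gives a useful answer.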
Apply It
Which playbook are you using and which one is best for your circumstance?
by Robyn Bolton | Jan 25, 2026 | AI, Leadership, Leading Through Uncertainty
Spain, 1896
At the tender age of 14, Pablo Ruiz Picasso painted a portrait of his Aunt Pepa, a work of brilliant academic realism that would go on to be hailed as “without a doubt one of the greatest in the whole history of Spanish painting.”
In 1901, he abandoned his mastery of realism, painting only in shades of blue and blue-green.
There’s debate over why Picasso’s Blue Period began. Some argue that it’s a reflection of the poverty and desperation he experienced as a starving artist in Paris. Others claim it was a response to the suicide of his friend, Carles Casagemas. But Bill Gurley, a longtime venture capitalist, has a different theory.
Picasso abandoned realism because of the Kodak Brownie.
Introduced on February 1, 1900, the Kodak Brownie made photography widely available, fulfilling George Eastman’s promise that “you press the button, we do the rest.”
An ocean away, Gurley argues, Picasso’s “move toward abstraction wasn’t a rejection of skill; it was a recognition that realism had stopped being the frontier….So Picasso moved on, not because realism was wrong, but because it was finished.”
Washington DC, 2004
Three years before Drive took the world by storm, Daniel Pink published his third book, A Whole New Mind: Why Right-Brainers Will Rule the Future.
In it, he argues that a combination of technological advancements, higher standards of living, and access to cheaper labor are pushing us from a world that values left brain skills like linear thought, analysis, and optimization towards one that requires right brain skills like artistry, empathy, and big picture thinking.
As a result, those who succeed in the future will be able to think like designers, tell stories with context and emotional impact, and combine disparate pieces into a whole greater than the sum of its parts. Leaders will need to be empathetic, able to create “a pathway to more intense creativity and inspiration,” and guide others in the pursuit of meaning and significance.
California, 2026
Barry O’Reilly, author of Unlearn, published his monthly blog post, “Six Counterintuitive Trends to Think about for 2026,” in which he outlines what he believes will be the human reactions to a world in which AI is everywhere.
Leadership, he asserts, will cease to be measured by the resources we control (and how well we control them to extract maximum value) but by judgment. Specifically, a leader’s ability to:
- Ask better questions
- Frame decisions clearly
- Hold ambiguity without freezing
- Know when not to use AI
The Price of Safety vs the Promise of Greatness
Picasso walked away from a thriving and lucrative market where he was an emerging star to suffer the poverty, uncertainty, and desperation of finding what was next. It would take more than a decade for him to find international acclaim. He would spend the rest of his life as the most famous and financially successful artist in the world.
Are you willing to take that same risk?
You can cling to the safety of what you know: the markets, industries, business models, structures, and incentives that have always worked. You can continue to demand immediate efficiency, obedience, and profit while experimenting with new tech and playing with creative ideas.
Or you can start to build what’s next. You don’t have to abandon what works, just as Picasso didn’t abandon paint. But you do have to start using your resources in new ways. You must build the characteristics and capabilities that Daniel Pink outlines. You must become the “counterintuitive” leader that embraces ambiguity, role models critical thinking, and rewards creativity and risk-taking.
Do you have the courage to be counterintuitive?
Are you willing to embrace your inner Picasso?
by Robyn Bolton | Jan 17, 2026 | AI, Leadership, Leading Through Uncertainty, Stories & Examples
You’ve clarified the vision and strategy. Laid out the priorities and simplified the message. Held town halls, answered questions, and addressed concerns. Yet the AI initiative is stalled in ‘pilot mode,’ your team is focused solely on this quarter’s numbers, and real change feels impossible. You’re starting to suspect this isn’t a “change management” problem.
You’re right. It’s not.
The Data You’re Not Seeing
You’ve been doing what the research tells you to do: communicate clearly and frequently, clarify decision rights, and reduce change overload. And these things worked, until the number of planned change initiatives employees grappled with in a single year jumped from two to ten. As that number climbed, willingness to support organizational change crashed, falling from 74% of employees in 2016 to 43% in 2022.
But here’s what the research isn’t telling you: despite your organizational fixes, your people are terrified. 77% of workers fear they’ll lose their jobs to AI in the next year. 70% fear they’ll be exposed as incompetent. And 66% of consumers, the highest level in a decade, expect unemployment to continue to rise.
Why doesn’t the research focus on fear? Because it’s uncomfortable. Messy. It’s a people (Behavior) problem, not a process (Architecture) problem and, as a result, you can’t fix it with a new org chart or better meeting cadence.
The organizational fixes are necessary. They’re just not sufficient to give people the psychological reassurance, resilience, and tools required to navigate an environment in which change is exponential, existential, and constant.
What Actually Works
In 2014, Microsoft was toxic and employees were afraid. Stack ranking meant every conversation was a competition, every mistake was career-limiting, and every decision was a chance to lose status. The company was dying not from bad strategy, but from fear.
CEO Satya Nadella didn’t follow the old change management playbook. He did more:
First, he eliminated the structures that created fear, including the stack ranking system, the zero-sum performance reviews, the incentives that punished mistakes. These were Architecture fixes, and they mattered.
And he addressed the messy, uncomfortable emotions that drove Behavior and Culture. He role modeled the Behaviors required to make it psychologically safe to be wrong. He introduced the “growth mindset” not as a poster on the wall, but as explicit permission to not have all the answers. When he made a public gaffe about gender equality, he immediately emailed all 200,000 employees: “My answer was very bad.” No spin. No excuses. Just modeling the vulnerability that he expected from everyone.
Ten years later, Microsoft is worth $2.5 trillion. Employee engagement and morale are dramatically improved because Nadella addressed the structures that fed fear AND the fear itself.
What This Means for You
You don’t need to be Satya Nadella. But you do need to stop pretending fear doesn’t exist in your organization.
Name it early and often. Not just in the all-hands meeting, but in the team meetings and lunch-and-learns. Be honest: “Some roles will change with this AI implementation. Here’s what we know and don’t know.” Make the implicit explicit.
Eliminate the structures that create fear. If your performance system pits people against each other, change it. If people get punished for taking smart risks, stop. If people ask questions or make suggestions, listen and act.
Be vulnerable. Share what you’re uncertain about. Admit when you don’t know. Show that it’s safe to be learning. Demonstrate that learning is better than (pretending to) know.
The stakes aren’t abstract: That AI pilot stuck in testing. The strategic initiative that gets compliance but not commitment. The team so focused on surviving today they can’t prepare for tomorrow. These aren’t communication failures. They’re misaligned ABCs that allow fear to masquerade as pragmatism.
And the masquerade only stops when you align the ABCs all at once. Because fixing Architecture without changing your Behavior simply gives fear a new place to hide.