Hey friends and Happy New Year!
And welcome to 2026! This is our very first episode of the year, and I wanted to start it with something I keep seeing in boardrooms, leadership off-sites, and “AI rollout” meetings everywhere.
Most AI doesn’t fail because the tool is bad. It fails because the organisation never did the human work first.
This week I sat down with Tiffany Gray, someone I heard about through a big utility team here in Geelong. When I asked about their “AI readiness,” they didn’t talk about licenses or vendor decks. They said, “We had Tiffany come in and help us understand the human side of AI.”
That line stuck with me, because it’s the part most leaders skip. They roll out Copilot. They buy 100 licenses. They flick the switch.
And then they act surprised when people resist, ignore it, or quietly hate it.
Tiffany’s work lives in the messy middle: culture, identity, group dynamics, and the invisible work people do every day without ever writing it down.
The trap: “Just turn it on”
You’ve seen this. A leadership team gets excited. Someone says, “We’re on Office 365 anyway… let’s just turn Copilot on.”
Then it escalates. “Now there’s a requirement you use AI.” Sometimes it’s even added into performance plans. And suddenly everyone is asking a very reasonable question:
What does “use AI” even mean?
This is where a lot of organisations create fear without realising it. Not because AI is scary — but because leadership hasn’t created a clear story, a safe space, or a shared way of learning it together.
People don’t resist the tool. They resist the feeling of being forced into incompetence.
Two kinds of learning - and why AI demands both
Tiffany offered a framework I wish every executive team had on the wall:
Technical learning: acquiring a new skill or knowledge (how to use the tool)
Transformational learning: making conscious, deliberate changes to behaviour (how we work, decide, communicate, collaborate)
Most organisations try to do the first one with a training session. AI needs both.
Because it doesn’t just change what we do. It changes how work moves through the organisation and how people see their value. Which leads to a surprising problem…
The invisible work problem
Tiffany said something that explains why so many “AI agent” projects stall.
If you want to build workflows or agents, you need to map the work. To map the work, people need to be able to describe what they actually do. And most people can’t.
Not because they’re bad at their jobs, but because they’ve become unconsciously competent. They’ve been doing the role so long it’s automatic. Like driving a car. You don’t think about each step… you just do it.
So when someone says, “Explain your role as if you were handing it over to me,” the room goes quiet. And when the room goes quiet, fear fills the gap:
“Will this make me replaceable?”
“If I share what I know, will I lose my leverage?”
“Am I about to automate myself out of a job?”
This is where cultural integration becomes the real work. Because the truth is: you can’t build good AI systems on top of work nobody can explain, and trust nobody wants to give.
Psychological safety isn’t a slogan. It’s a system.
One of the sharpest moments in our conversation was when Tiffany described why “culture surveys” can be misleading. Some companies have glowing survey results… and still have deep dysfunction.
Why?
Because in many organisations, people answer surveys in a way that says, “Everything is fine. Don’t rock the boat.” Leaders think they’re getting reality. They’re often getting what people think is safe to say.
Tiffany’s point was simple: executive teams don’t always receive the real messages of what’s happening inside the organisation — especially when people have learned, over time, to adapt their behaviour just to belong.
Sometimes the only way to see it is to bring in someone external, because the organisation has normalised its own blind spots.
The “dance floor vs balcony” way to learn AI
This metaphor landed hard for me. When you first start using AI, you’re on the dance floor. You can only see what’s right in front of you: your task, your workflow, your little world. And AI can make you feel invincible for a while.
But real maturity comes when you step up onto the balcony and watch the whole dance:
How am I using AI?
What is the impact of this on the team?
Is the output actually valuable?
What am I missing because I’m working alone?
That balcony moment is why Tiffany recommends something deceptively simple:
15–30 minutes a day experimenting with AI (build your confidence)
45–60 minutes a week with a group, reflecting together (build shared learning and stop siloing)
Because here’s the hidden risk: AI can make siloing worse.
If one person can run an end-to-end workflow alone, they might. But “end-to-end” doesn’t mean “best.” Most people aren’t subject-matter experts across an entire process; they’re just fast.
Organisations don’t need more isolated speed. They need better collective judgment.
Shaping AI vs using AI
Another distinction Tiffany made is worth repeating:
Most people are either:
talking about AI,
using AI like an advanced Google,
or shaping AI, consciously designing how it fits the organisation’s purpose.
Shaping isn’t about coding. It’s about intent. Where does AI belong? Where should it not belong? What work do we want to keep purely human? What do we automate, and why?
That’s leadership work. Not IT work.
Treat AI like an intern (and you’ll lead better)
One of my favourite parts of the conversation was when Tiffany compared delegating to AI with delegating to a new grad.
You don’t throw a brand-new graduate into the deep end and say, “Give me three strategies to increase profit.”
You give context. You set expectations. You define the outcome. You coach. AI is no different.
The leaders who struggle with AI adoption often have the same weakness in human leadership: unclear delegation, missing context, vague expectations, poor feedback loops.
AI doesn’t just expose process gaps. It exposes leadership gaps.
The four-question habit that changes teams
Tiffany shared a simple model she built years ago for executive teams, and it fits AI adoption perfectly because it builds connection and clarity.
Try it in a 1:1 or a team meeting (weekly or fortnightly):
What were the highs and lows since we last met? (“I see you.” Patterns start to emerge.)
Where have you been focusing your attention? (Spot wheel-spinning, blockers, hidden overload.)
How is that focus helping us achieve our goal? (Lift eyes to purpose. Connect work to outcomes.)
What are the actions from here? (Forward motion. No stagnant meetings.)
If you run this consistently for six weeks, you’ll feel the difference, not because it’s magic, but because it creates a rhythm of reflection and accountability.
And it quietly builds the thing AI rollouts need most: trust.
If you’re worried about your job
We didn’t sugarcoat this.
Yes, a lot of tasks will disappear. Some roles will end. Some people will lose jobs. There will be disruption.
But Tiffany made a point I agree with, even if it’s uncomfortable: You can’t wait for the organisation to “train you” into safety.
The only move is to reconnect with your own agency:
Be clear on the purpose of your role (not just the tasks)
Understand who you serve inside the system
Build relationships across functions (stop living in your silo)
Learn AI by doing, not by reading think pieces
Find a small cohort and learn together
And one more thing that deserves to be said out loud: Stop listening to the noise.
A lot of what’s posted online is fear dressed up as certainty. Find a trusted group, experiment, reflect, and make informed choices based on your lived experience, not someone else’s hot take.
See you next week.
— Aamir
📲 Resources & Links
🎧 Listen to the Podcast Episode 1 on: Spotify | Apple Podcasts | YouTube
📘 Book: The CEO Who Mocked AI (Until It Made Him Millions) by Aamir Qutub