Hey friends,

I asked a room full of board members a few years ago, “How many of you are playing with AI?”

Not one hand went up.

A year ago, I asked the same question in a similar room. This time, everyone knew about it. Most had tried it.

And yet, inside most organisations, AI is still sitting in the strangest place.

No one owns it.

The CTO says it’s not their job. The CEO says they’re too busy. Security says it’s risky. Everyone agrees it matters, and then it floats in the background.

That’s what this conversation with Mark Edmonds was really about. AI isn’t failing because the tools aren’t good enough. It’s failing because businesses haven’t decided who’s responsible for making it real.

The Gap Between Knowing and Using

A lot of leaders have used AI, just not for anything meaningful. They’ve generated an image. They’ve helped their kid with a cover letter. They’ve tested a prompt once and moved on.

But when it comes to work, many still don’t know what to do with it. Or they know exactly what they could do, but the organisation has no structure to support it. So adoption becomes accidental.

One person in ops gets obsessed. Someone in marketing experiments. A finance person tries it once, gets a weird answer, and decides it’s useless forever.

The business never crosses the line from dabbling to capability.

AI is a Tool, But Tools Still Need Rules

Mark sits on boards. He deals in risk. And he’s unusually enthusiastic about AI, which made the conversation interesting, because he isn’t waving away the risks.

He made a point I keep coming back to. AI isn’t risky only because it hallucinates. It’s risky because humans get lazy.

Governance doesn’t usually fail because it’s missing. It fails because people stop following it when they’re busy, rushed, or assume someone else checked the work.

We’ve seen that story plenty of times, with or without AI. The answer isn’t banning the tool. The answer is building verification into the workflow. If you’re going to let people create work faster, you need a way to review work faster too.

That can be as simple as teaching everyone that AI output is a draft, not truth. Requiring sources for anything factual. Spot-checking before something goes out. Even running one model’s output through another as a sanity check.

Not because AI is evil. Because humans do silly things.

Education is the Missing Owner

We kept circling back to education for a reason. It’s the missing piece nobody wants to own.

Boards need to educate themselves enough to ask the right questions. Executives need to understand where AI can help and where it can hurt. Teams need practical training that turns AI from interesting into useful.

Right now, AI learning is optional, so it gets deprioritised. That’s why I loved the idea we landed on. An AI licence inside organisations.

A short mandatory module. A simple test. Clear road rules. Not to slow people down, but to prevent the dumb mistakes that create reputational damage.

Policy is Not Enough, Culture Matters

One of the strongest moments wasn’t about tools or risk registers. It was about culture.

Mark shared an example of someone emailing the wrong person by accident, and immediately putting their hand up, fixing it, and following the policy. No blame. No punishment. Just transparency. That’s the environment you want for AI too.

Mistakes will happen. Someone will paste the wrong thing. Someone will upload something they shouldn’t. Someone will let a note-taking bot into a meeting without thinking.

If the culture punishes honesty, people hide mistakes and the damage grows. If the culture rewards early transparency, the organisation becomes safer.

Champions Beat Committees

Here’s the uncomfortable truth. AI doesn’t get adopted through strategy decks. It gets adopted through champions.

People who prove value, earn trust, and show others what’s possible in a practical way. Find those people. Give them time. Let them produce quick wins. Then scale what works.

Because if AI is everyone’s job, it becomes nobody’s job.

A Calm Takeaway

If you’re leading a business right now, here’s the real question.

Who owns AI in your organisation?

Not who is interested. Not who plays with it. Who is responsible for turning it into capability, with education, guardrails, verification, and culture. Because the organisations that win won’t be the ones with the fanciest tools.

They’ll be the ones who stop leaving AI in no man’s land.

-Aamir

🎧 Listen to Episode 1 of the podcast on: Spotify | Apple Podcasts | YouTube

📱 Dumb Monkey AI Academy App: Apple | Android
