Hey friends,

Over the past few months, I have had a lot of conversations with leaders who all sound slightly different, but are really asking the same question.

How do we keep up with AI without losing control of the business?

It is a fair question. AI is moving fast. New tools are appearing inside software your team already uses. New features are being switched on without much warning. And while leadership teams are still drafting policies, people on the ground are already experimenting, already prompting, already trying to work faster.

That is the part many organisations are missing. AI is not waiting for governance to catch up. And that is exactly why governance is failing in so many businesses right now.

Not because leaders are careless. Not because teams are reckless. But because most governance structures were built for a world where humans used software as a tool. AI has changed that. Now the tool can also generate, recommend, decide, summarise, and influence action. It is not just supporting the work anymore. In some cases, it is shaping the work itself.

That changes everything.

The Problem No One Wants To Admit

A lot of organisations think they have AI under control because they have written a policy. But when you look closer, the policy often says one of two things.

Either, “Don’t use AI.”

Or, “Use AI, but don’t put any confidential or sensitive information into it.”

Neither of those is real governance.

The first one ignores reality. People are using AI anyway. Not because they are trying to be difficult, but because they are trying to be productive.

The second one sounds responsible, but usually collapses the moment someone tries to do actual work. If a lawyer cannot use client-related information, what exactly are they meant to do with the tool? If someone writing a report cannot reference real business context, what is the point of giving them access in the first place?

This is where shadow AI starts.

Not in some dramatic spy-thriller way. Just in the very ordinary way that people behave when the official rules do not make practical sense. They find a workaround. They use the tool quietly. They make judgment calls on their own. And suddenly your governance team is sitting in one room while the real AI adoption is happening somewhere else entirely.

That gap is where the risk lives.

Governance Is Not A Box To Tick

One of the biggest mistakes I see is treating governance like compliance paperwork. Something to draft, approve, file away, and revisit next year. That approach was shaky even before AI. With AI, it breaks immediately.

Governance should be the framework for how you operate. It should tell your people who you are, how you work, what you value, what tools are allowed, where the boundaries are, and how decisions get made when something new appears.

Because something new will appear. Constantly.

A tool you approved last month may release a new model this month. A software platform your team has used for years may quietly add AI generation, summarisation, or image creation next week. Canva, Salesforce, HR systems, CRMs, browsers, phones, productivity suites, almost everything is becoming AI-enabled.

So if your governance relies on a static assumption that tools stay the same, it is already outdated.

Why Traditional Governance Is Struggling

I used to think traditional governance structures would be enough. You have risk committees, cybersecurity officers, legal teams, executives, directors. Surely that should cover it.

But the problem is that most of these structures were designed for a more stable environment. A world where systems changed slowly, responsibilities were clearer, and the behaviour of tools was more predictable.

AI does not behave like that.

It can hallucinate. It can produce biased outputs. It can create results that look polished but are factually wrong. It can generate different responses to similar prompts. It can be embedded inside products you already approved without anyone reassessing the risk. And it can introduce issues that touch not just IT and security, but hiring, inclusion, copyright, privacy, workplace culture, and decision-making.

So no, this is not just a technology issue. It is a leadership issue. It is a governance issue. And it is a business model issue.

The Real Risk Is Not Just The Model, It Is The Mismatch

A lot of businesses are trying to govern AI at the policy level without first deciding what they actually want AI to do. That sequence is backwards.

Before you write the policy, you need to ask:

  • What are we trying to achieve with AI?

  • Which tools are appropriate for those goals?

  • What are the risks of those tools?

  • What use cases are we comfortable approving?

  • Where are the boundaries?

Only then can you build policy that means anything.

For example, if you have assessed Microsoft Copilot inside your environment and you are satisfied that the data stays within that environment, then a blanket rule that says, “Do not use sensitive information” may not make sense. But if there are other tools where data leaves your environment or contributes to external systems, then the restrictions need to be different.

The issue is not whether AI is allowed or not allowed. The issue is whether the guidance matches the reality of the tool.

Policy Without Training Is Theatre

Even a good policy is not enough if no one understands how to apply it. That is another major failure point.

Policies are often too broad, too legal, too abstract, or too disconnected from real work. People cannot map them to the actual decisions they need to make during the day. So they either avoid the tool entirely, or use it badly.

Neither outcome is useful. What works is translating policy into practical use cases. Show your teams what they can do, what they cannot do, and why. Give examples. Create clarity around scenarios.

Help them understand what counts as acceptable use in marketing, sales, HR, operations, legal, and customer service. Because AI use does not look the same across departments, and pretending one paragraph will cover everything is usually how confusion starts.

If you want accountability from staff, you need to give them training first. Otherwise you are handing someone a powerful tool, offering vague rules, and then blaming them when things go wrong. That is not governance. That is outsourcing confusion.

The Bias Problem Is Bigger Than Most People Think

Another reason AI governance is failing is because many organisations still treat outputs as neutral. They are not.

AI models are trained on data, and data carries bias. That bias can show up in subtle ways or obvious ones. Ask an image model for a receptionist and it may lean female. Ask for a construction worker and it may lean male. Ask for certain professions, appearances, or social roles, and stereotypes start leaking into the output.

That becomes dangerous the moment AI is used in decision-making. Especially in hiring, performance reviews, shortlisting, recommendations, or customer segmentation.

A lot of modern HR platforms already include AI features. If an organisation adopts those systems without understanding the underlying model, the training data, or the potential bias, then the organisation carries the risk. Not the vendor. Not the model provider. The organisation.

That is why AI governance cannot sit separately from your other policies.

You need to test it against everything else you say you care about, inclusion, fairness, harassment prevention, workplace safety, ethics, privacy, traceability. A business can spend years building good internal policy and then quietly undermine all of it with one unchecked AI workflow.

The Security Conversation Has Changed Too

For years, businesses worried about cyber risk in a fairly familiar way. Breaches, phishing, weak passwords, exposed systems. Now the picture is wider.

If AI tools are connected to email, files, calendars, browsers, or internal documents, the potential impact of misuse or compromise becomes much greater. A bad prompt, a malicious prompt injection, an exposed browser session, a compromised login, these things can create risks that many leaders are still underestimating.

And in Australia, directors also need to understand that data security is not a soft issue anymore. Personal liability is becoming a more serious consideration.

So one of the questions leaders should now be asking is not just, “Do we have AI?” but;

  • Have we tested what happens when this breaks?

  • Have we done proper security reviews?

  • Have we done penetration testing?

  • Have we explored the worst-case scenario?

  • If this tool is compromised, what data is exposed? What actions could be triggered? What systems become vulnerable?

These are not technical questions only for the IT team. These are governance questions for leadership.

What Better Governance Actually Looks Like

The good news is that fixing this does not require perfection. It requires structure, honesty, and momentum.

The first step is to accept reality. Your people are probably already using AI in some form, whether leadership sees it or not.

The second is to build a proper AI governance committee. Not one made up only of traditional governance people, but one that includes actual AI users, practical operators, and people who genuinely understand generative AI.

This matters because people doing the day-to-day work can tell you what is really happening. They can show you the workflows, the temptations, the inefficiencies, the shortcuts, and the opportunities. Without that input, policy will always be disconnected from practice.

It also helps to bring in external expertise. Not because your CIO or risk officer is not capable, but because being strong in technology or governance does not automatically make someone an AI specialist. This space is moving too quickly for assumptions.

From there, you need three living layers.

  1. A clear AI policy.

  2. An approved tools register.

  3. And a process for requesting and approving new AI use cases.

That last one is critical. If staff have no pathway to ask for approval, they will keep experimenting in the shadows. But if they can raise a use case, get it assessed, and receive fast guidance, you start bringing AI into the open.

And the committee cannot meet once a quarter and expect to stay relevant. AI governance has to move faster than that. Monthly is a much more realistic rhythm if you want any chance of keeping up.

The Simplest Starting Point

If I had to reduce this whole conversation to three actions, it would be this.

  1. Set up an AI governance structure with the right people in the room, including actual users and people who understand generative AI.

  2. Make sure your AI policy is practical, clear, and aligned to real use cases, not vague legal language.

  3. Train your people properly, not just on the rules, but on how to use AI well, how to assess outputs, and where accountability sits.

That alone would solve a surprising amount. And then keep talking.

The organisations doing this well are not the ones pretending they have all the answers. They are the ones running monthly conversations, sharing lessons, learning from others, asking dumb questions early, and refusing to hide behind “we didn’t know.”

Because not knowing is no longer good enough.

If AI recommended something, you should still be able to explain how the decision was made. If you cannot trace it, challenge it, or justify it, that is your signal to stop and look closer.

A Calm Takeaway

I do not think businesses need to panic. But I do think they need to wake up.

AI governance is failing not because the problem is impossible, but because too many organisations are treating it like old governance with new labels. It is not.

This requires a more practical mindset, a more dynamic structure, and a much more human conversation across the business.

The goal is not to shut AI down. The goal is to create an environment where people can use it well, safely, and in a way that reflects the standards your organisation claims to stand for.

That is what real governance looks like. And yes, we are all still learning. That is fine. Just do not let that become an excuse to stay still.

— Aamir

🎧 Listen to the Podcast Episode on: Spotify | Apple Podcasts | YouTube

📱 Dumb Monkey AI Academy App: Apple | Android

Keep Reading