
Over the last few months, we’ve heard a consistent concern from risk, security and compliance leaders:
“Our staff are uploading sensitive documents into their personal ChatGPT accounts, and we can’t fully stop it.”
In most cases, public AI tools are already blocked on corporate networks. Microsoft Copilot is enabled internally. Data Loss Prevention (DLP) policies are in place.
And yet — the behaviour persists. This is not a failure of technology. It’s a signal that the problem has been framed incorrectly.
The idea that organisations can “decide” whether employees use AI is no longer realistic.
People are using AI because it saves them time and genuinely helps them do their jobs. Trying to eliminate that behaviour entirely doesn't stop it; it drives it into shadow usage through personal accounts and personal devices, outside the organisation's visibility.
From a regulator or board perspective, that posture is hard to defend.
Most organisations respond to AI risk by focusing on tool restriction: block the public tools, tighten the network, add DLP rules. That approach assumes compliance can be achieved through prevention.
But regulators, including those shaping APRA‑aligned and privacy‑driven controls, increasingly recognise a harder truth:
you cannot eliminate human intent — you can only reduce risk and prove control.
This is precisely the scenario Microsoft and regulators are now designing for.
Microsoft refers to this emerging control model as AI Access Governance, aligned to its Responsible AI and Secure AI Adoption frameworks.
We see it working in practice as a layered governance model, not a single control.
At a high level, it has four parts:
1. Protect the data itself, so it can't easily leave managed environments.
2. Detect how AI is actually being used, and by whom.
3. Enable a sanctioned, well-supported path for legitimate use.
4. Enforce policy, and keep a record of doing so.
The first part, protecting the data, is where most organisations already have foundations: public AI tools blocked at the network edge, DLP policies in place, and an approved alternative in Copilot.
The goal here isn’t perfection — it’s risk reduction at scale.
If sensitive data can’t easily leave managed environments, opportunistic misuse drops sharply.
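To make the "data can't easily leave" idea concrete, here is a minimal sketch of the kind of egress check this layer performs. It is illustrative only: the domains, patterns and function names are assumptions, and real environments would implement this with platform tooling such as Microsoft Purview DLP rather than custom code.

```python
import re

# Illustrative sketch only: inspect outbound requests to known public
# generative-AI endpoints and block those that appear to carry sensitive
# content. Domains and patterns here are assumptions, not a real policy.

BLOCKED_AI_DOMAINS = {"chat.openai.com", "chatgpt.com"}
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-style identifier
    re.compile(r"\b4\d{3}(?:[ -]?\d{4}){3}\b"),  # card-number-like string
    re.compile(r"(?i)\bconfidential\b"),         # classification marker
]

def evaluate_request(host: str, body: str) -> str:
    """Return 'allow', 'block', or 'block_and_log' for an outbound request."""
    if host not in BLOCKED_AI_DOMAINS:
        return "allow"
    if any(p.search(body) for p in SENSITIVE_PATTERNS):
        return "block_and_log"  # sensitive content headed to an unmanaged tool
    return "block"              # unmanaged AI endpoint, nothing sensitive matched

print(evaluate_request("chatgpt.com", "Q3 board pack - CONFIDENTIAL"))  # block_and_log
```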
The second part, detection, is where many strategies fall down. Tools such as Microsoft Purview and Defender for Cloud Apps allow organisations to see which AI services staff are actually using, what data is flowing to them, and whether risky behaviour is a one-off or a pattern.
This matters because regulators care less about isolated incidents — and far more about patterns and response.
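To illustrate what "patterns, not incidents" looks like in data terms, here is a minimal sketch that rolls individual blocked events into per-user patterns. The event shape and threshold are assumptions for illustration, not any product's schema.

```python
from collections import Counter
from datetime import datetime, timedelta

# Illustrative sketch: aggregate individual blocked-upload events into
# per-user patterns, so review effort targets repeated behaviour rather
# than one-off incidents.

REVIEW_THRESHOLD = 3          # repeated blocked attempts worth a conversation
WINDOW = timedelta(days=30)   # look-back period for pattern detection

def users_to_review(events: list[dict], now: datetime) -> list[str]:
    """events: [{'user': str, 'ts': datetime, 'action': 'block' | 'allow'}, ...]"""
    recent_blocks = [
        e for e in events
        if e["action"] == "block" and now - e["ts"] <= WINDOW
    ]
    counts = Counter(e["user"] for e in recent_blocks)
    return sorted(user for user, n in counts.items() if n >= REVIEW_THRESHOLD)
```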
The third part is enablement. Blocking without enablement pushes behaviour underground. Clear guidance matters: people need to know which tools are approved, what data they can use with them, and where to go when the answer isn't obvious.
A “Copilot‑first” policy, backed by real training, is often one of the highest‑return controls available.
People generally want to do the right thing — if the path is obvious and supported.
The fourth part is enforcement. Governance without enforcement is just advice. Effective programs detect violations, respond consistently and proportionately, and keep an auditable record of both the incident and the response.
This closes the loop — and critically, creates a defensible position if scrutiny arrives.
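As a sketch of what "closing the loop" can mean operationally, consider a graduated-response ladder where every step leaves an auditable record. The tiers and structure here are hypothetical, not any specific product's workflow.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch of a graduated-response ladder with a built-in
# audit trail. Each repeat violation escalates one rung, and every
# response is recorded alongside its timestamp.

RESPONSE_LADDER = ["notify_user", "require_training", "manager_escalation", "access_review"]

@dataclass
class CaseFile:
    user: str
    history: list[tuple[datetime, str]] = field(default_factory=list)  # audit trail

    def respond_to_violation(self, now: datetime) -> str:
        """Escalate one rung per repeat violation; every response is recorded."""
        rung = min(len(self.history), len(RESPONSE_LADDER) - 1)
        action = RESPONSE_LADDER[rung]
        self.history.append((now, action))
        return action
```

The value is less the automation than the trail it produces: when scrutiny arrives, the organisation can show not only that a violation was detected, but exactly how it responded.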
What this layered model provides is not just alerts after the fact, but prevention where it is cheap, visibility where prevention fails, and a documented response when visibility finds something.
It acknowledges reality: people will use AI, intent cannot be eliminated, and the job is to reduce risk and prove control.
From a board, regulator or insurer perspective, this is a far stronger position than “we blocked the tools and hoped”.
The technology largely exists.
The challenge we see repeatedly is connection: controls, policies and training exist, but in isolation, owned by different teams and never joined into a single operating model.
AI governance fails when it's treated as a blocking exercise, a single control, or a problem for the security team alone.
It succeeds when people, process and technology are designed together — with AI treated as a permanent operating shift, not a temporary threat.