
How to Get Your Team Using AI Openly, Consistently, and Without the Risk
Here is something most leaders already sense but rarely say out loud: their team is probably using AI at work right now, in ways the organisation has not formally approved, without anyone quite knowing the extent of it.
EY research from 2025 found that 68% of Australian workers are already using AI at work, yet only 35% have received any formal AI training from their employer. More striking still: 72% are actively worried about breaching data or regulatory rules when they do. That is not a workforce resisting AI. That is a workforce using AI without the confidence or clarity to do it well.
That gap between what is happening on the ground and what leadership can see is not a technology problem. It is an organisational readiness problem. And it has a practical solution.
This post is for leaders who want to move from fragmented, invisible AI use to something more intentional: a team that uses AI openly, applies it consistently, and does so in ways the organisation can stand behind.
Why Unmanaged AI Use Is Not a Discipline Problem
The instinct for many leaders when they discover their team is using unapproved AI tools is to tighten the rules. Write a stronger policy. Restrict access. Make the expectations clearer.
That instinct is understandable, but it tends to address the symptom rather than the cause. The federal government’s Jobs and Skills Australia report found that between 21% and 27% of Australian white-collar workers are using AI behind their managers’ backs. The report notes these employees are often the most motivated and innovative people in an organisation. They are not trying to create problems. They are trying to do their jobs better, and AI is helping them do it.
The reason this behaviour stays hidden is rarely defiance. It is usually a combination of three things: no clear guidance on what is appropriate, no psychological safety to experiment openly, and no shared capability foundation that makes everyone feel confident enough to bring their AI use into the light.
Restrict without addressing those three things and the behaviour does not stop. It just becomes harder to see.
What Open, Consistent AI Use Actually Looks Like
The goal is not to eliminate AI experimentation. The goal is to bring it into the open so it can be shared, refined, and governed without slowing people down.
In organisations where this is working well, a few things are visibly true.
People talk about AI use openly in meetings without it feeling like a confession. There is a shared sense of what good AI output looks like and how to sense-check it. Leaders model their own use rather than delegating AI to the people below them. The question is not “am I allowed to use this” but “how do I use this well.” And when someone finds a better way, they share it rather than keep it to themselves.
That culture does not happen by accident. It is built deliberately, and it requires three conditions to be in place at the same time.
The Three Conditions That Make It Possible
At The Square Wave, every engagement we run is built around three questions. When one is missing, AI use stays fragmented. When all three are in place, it becomes a genuine organisational capability.
Clarity: does your leadership team agree on how AI should be used?
Not a compliance-heavy policy document, but a set of clear, usable principles that give people a framework for decisions in situations a policy could never anticipate. Things like: AI supports thinking, humans own the final call. Transparency about AI use is the default. Client trust takes precedence over efficiency. When leaders are aligned on these principles and communicate them clearly, the ambiguity that drives hidden AI use starts to dissolve.
Climate: does your team feel safe enough to experiment openly?
Psychological safety is the single most underestimated factor in AI adoption. When people fear being judged for not knowing enough, for producing an imperfect output, or for questioning whether an AI result is trustworthy, they do not stop using AI. They stop being honest about it. Leaders who model their own curiosity openly, including their mistakes and uncertainties, create the conditions where others feel safe to do the same.
Competence: do your people actually know how to use AI well?
Uneven AI capability is one of the most common drivers of hidden use. The people who are confident use the tools your organisation provides. The people who are not confident find tools that feel more forgiving or intuitive, and they use them quietly. Building genuine, consistent capability across the whole team, not just among the early adopters, closes this gap. When everyone has a shared foundation, the need to go outside the sanctioned environment drops significantly.
Where to Start If You Are Not There Yet
Most organisations are somewhere in the middle. AI is present but uneven. Some people are moving quickly and getting real results. Others are hesitant, disengaged, or doing their own thing quietly. Leaders sense something is off but are not sure exactly what or where to start.
The most important first step is replacing assumptions with evidence. Before you can address the problem, you need to know what is actually happening: where confidence is high, where it is low, where AI use is already embedded in workflows, and where people are avoiding it entirely. Most leadership teams are surprised by what they find when they look properly. If you are not yet clear on why AI adoption stalls in the first place, this post on the Integration Gap is a useful starting point.
From there, the work is sequential. Establish shared standards before building capability. Build capability before trying to scale. Move in that order and the culture shifts. Skip steps and you end up running training programs into an environment that is not ready to absorb them.
The organisations getting this right are not necessarily the ones that moved fastest. They are the ones that built the foundation properly before they tried to scale. That foundation is the difference between a team that uses AI because they have been told to and a team that uses it because it genuinely makes their work better.
“Governance without behavioural norms is just compliance theatre. The goal is not a team that follows AI rules. It is a team that uses AI well because they understand why it matters.”
The Signs Your Organisation Is Ready to Make This Shift
You do not need to have solved this to start. But there are a few signals that suggest your organisation is ready to move from fragmented AI use to something more intentional.
•Your leadership team acknowledges that AI use is uneven and wants a clearer picture of what is actually happening
•You have invested in AI tools or training and are not yet seeing consistent behaviour change
•You want clear standards without creating a compliance culture that slows people down
•You recognise that this is a people and culture challenge as much as a technology one
If that sounds like where you are, the work is straightforward. It is not quick, and it is not a single training session. But it is well understood, and it is very doable.
The Square Wave’s People Advisory helps leadership teams move from fragmented AI use to a culture of open, consistent, governed adoption. We start with a structured diagnostic that replaces assumptions with evidence, then work through clarity, climate, and competence in sequence.
If your team is already using AI but you are not yet confident about how, a short conversation is enough to work out where to start. Find out more at thesquarewave.com or reach out to Kate directly on LinkedIn.

