Fill in your details below and one of our AI experts will be in contact to find out how we can support you.
According to Datacom’s 2025 Cybersecurity Index, security was ranked as the biggest concern around artificial intelligence (AI) adoption. While awareness is growing, many organisations are still moving ahead without the proper guardrails in place, exposing themselves to significant and avoidable risk.
As AI adoption surges — particularly large language models (LLMs), generative tools and agentic systems — secure implementation has never been more important. But too often, organisations race ahead without setting the foundations and running the same checks they’d apply to any other critical system.
You wouldn’t let a stranger into your business and give them access to everything, but that’s essentially what some organisations are doing with untested AI systems.
These new tools are being introduced without clear boundaries or a deep understanding of what they’re accessing, how they’re behaving or what guardrails are in place to stop them from going rogue.
Unchecked, a new AI tool can expose sensitive data, introduce vulnerabilities and make decisions that no one’s reviewed and no one can trace.
As Adam Kirkpatrick, Director of Automation, AI and Project Services at Datacom says: “AI might look helpful on the surface. But if you’re not testing what it’s doing, or where it’s going, you’re essentially giving a stranger the keys to your most valuable systems and hoping for the best.”
It’s something Datacom is seeing more of across Australia and New Zealand. As organisations embrace LLMs to boost productivity or automation tools to simplify workflows, the challenge isn’t just how to adopt them — it’s how to do it safely.
That starts with testing the right way, before anything hits production.
The reality is that most organisations aren’t ready to adopt AI safely — technically, culturally or strategically. There’s a misconception that AI is plug and play, but implementing it in a secure and useful way still requires strong engineering, data, security and change foundations.
The most common challenge? Businesses aren’t starting with the right question.
Rather than asking “What can this model do?”, Datacom helps customers flip the focus to “What do we need this AI to do for the business, and how can we make sure it does that safely?”
That shift sets the direction for everything that follows, from model selection and infrastructure to how outputs are tested and governed.
For many businesses, AI readiness doesn’t begin with tools. It begins with awareness, alignment and accountability.
Datacom sees four recurring roadblocks that slow — or derail — adoption:
According to Angela Millward, Practice Manager of Artificial Intelligence at Datacom, “It’s not a tech problem. It’s a business problem … the tech can do what you need it to do, but only if you understand your business use case and build around that.”
Millward says that Datacom’s approach always starts with a use case.
“We use our AI Ignite framework to identify where AI can drive value, then translate that into a proof of concept,” she explains. “That gives you the evidence you need to build a business case and know what infrastructure, data and governance need to follow.”
From there, testing begins.
“You can’t just hope it works. You need to test it like you would any major system,” says Millward.
“That means defining success metrics, using real subject matter experts to validate outputs and refining everything from your business context layer to your prompt injections.”
Before any model is rolled out, it’s essential to stress test it, just like you would any other critical system.
That includes penetration testing and testing against potential misuse or failure points. The goal isn’t just performance, it’s resilience.
"AI security isn’t a one-time fix. AI systems need constant testing and monitoring to stay reliable. Successful adoption means putting strong guardrails and governance in place to manage risks and ensure AI remains secure, accurate and trustworthy over time," Kirkpatrick says.
Common gaps Datacom helps uncover include:
Getting this right is about more than compliance — it’s about confidence. If you don’t test your system’s limits, someone else eventually will.
Organisations should never take AI public without first testing it internally.
“Use a trusted team. Act like the assassin. Try to break it before anyone else does,” Millward says. “You’d be amazed how often sensitive data becomes exposed when those guardrails aren’t in place.”
It’s not enough to run the model and hope for the best. Every access point, prompt and outcome needs to be verified, contained and monitored. And that process has to happen long before real customers or employees get their hands on it.
Hope isn’t a strategy. Testing is.
Datacom’s AI framework helps businesses move beyond experimentation to validating use cases, building strong business cases and rolling out AI with governance, performance and risk built in.
As AI becomes more embedded in the way we work, the strongest businesses won’t necessarily be those that rushed ahead. They’ll be the ones who asked the right questions, ran the right tests and treated AI with the respect any critical system deserves.
Talk to Datacom about a security posture review or AI framework tailored to your maturity, risk profile and industry and take your next step with confidence.