A paper from the Office of the Minister of Science, Innovation and Technology released late last month points out that New Zealand businesses have been slow to adopt AI, “due in part to uncertainty about the future regulatory environment.”

The paper proposes taking steps as a nation to unlock the potential of AI to “deliver better outcomes for New Zealanders through greater innovation, productivity, and export opportunities.” 

It suggests adopting “a light-touch, proportionate and risk-based approach” to regulating AI, with preference given to leveraging existing laws where possible rather than implementing an AI Act, as the European Union has done.  

The approach makes sense, though as a nation we need to accelerate our efforts to spur responsible use of AI if we are to tap its potential to transform virtually every area of the economy.

The current landscape: Uncertainty and hesitation 

Kiwi businesses are eager to harness AI's potential but are largely in experimentation mode when it comes to the latest wave of generative AI tools, which are being applied to everything from office productivity to customer service bots.

Our own survey of 200 senior business leaders conducted late last year found that 49% had yet to take up AI in their organisation or didn’t plan to do so at all. While we have a lot of work to do to build the capability of organisations to govern and implement AI, a lingering question mark over regulation has created a risk-averse environment in which business leaders often take a wait-and-see approach to AI deployment. This is leading to missed opportunities for growth and innovation.

At the same time, public wariness around AI is a very real thing. An Ipsos study published in June found that while 43% of Kiwis are excited about products and services that use AI, 66% say AI makes them nervous. That ranks New Zealand second only to Ireland (67%) internationally for nervousness about AI.

The fact we haven’t rushed to regulate AI gives us an opportunity to lay the right foundations, build trust, and learn from other countries, says Datacom MD, Justin Gray.

Drawing on global examples 

The European Union's AI Act and the Biden Administration's executive order on AI provide instructive frameworks for balancing innovation with risk management.

The EU AI Act 

The European Union has taken a proactive approach with its AI Act, which categorises AI applications by risk level. Low-risk uses of AI, such as generating non-player characters in games, require little regulatory action, while uses deemed to pose unacceptable risk, such as social credit scoring, are prohibited outright.

This risk-based approach ensures that high-risk AI systems, such as those used in critical infrastructure or healthcare, are subject to stringent requirements. By focusing on the potential impact of AI systems, the EU AI Act promotes innovation while safeguarding public interests. 

Biden’s Executive Order on AI 

The US approach is lighter-touch than the EU's comprehensive AI Act. It aims to balance innovation with risk management, but relies on guidelines and best practices rather than strict rules. Existing federal laws can, in many cases, be applied to the use of AI.

There is currently no comprehensive federal legislation or regulation in the US governing the development and use of AI, though many individual states have passed or are drafting AI-related legislation.

The White House Executive Order on AI requires developers of powerful AI systems to share safety test results with the US government, and a number of AI leaders, OpenAI, Google, Meta, and Microsoft among them, have committed to internal and external security testing of AI systems before release. 

Other nations are settling on their own approaches. The Bletchley Declaration, in which New Zealand, the EU and around 30 countries are involved, is a global effort to pursue AI development that is safe, human-centric, trustworthy and responsible.

Finding our own path 

New Zealand is well-positioned to develop a balanced approach to AI regulation, leveraging insights from global examples while tailoring policies to our unique cultural context.  

An over-reliance on large language models trained by US AI giants may not adequately serve our needs. Use of AI here needs to take into account the principles of the Treaty of Waitangi and Māori data sovereignty. A lot of great work is underway from the likes of Te Mana Raraunga (the Māori Data Sovereignty Network) to build capacity and awareness in this area.

Existing legislation such as the Privacy Act 2020 is applicable to the use of AI and the Privacy Commissioner has released a comprehensive guide to how the Information Privacy Principles apply in the world of AI. 

There remain regulatory grey areas, including the vexed issue of AI-generated content and how copyright law applies to it. Case law will increasingly stress-test existing legislation to determine how fit for purpose it is for AI’s myriad uses.

Time to develop your own strategy

Clarity around the government’s approach to regulating AI is what is needed, rather than a hasty effort to draft new AI-focused regulations. The government paper on AI, which notes that an “integrated cross-portfolio work programme on AI” should be delivered by September, represents the first step in setting that much-needed clear strategic direction.

Initiatives such as the AI Forum’s AI Blueprint for Aotearoa, which aims to establish New Zealand as a world-leading hub for responsible AI innovation by 2030, can inform the Government’s efforts.

Businesses shouldn’t sit on their hands in the meantime. It's imperative that they build the internal capability and capacity in their workforce to responsibly develop and deploy AI, with appropriate guardrails in place. 

A very useful guide to AI governance issued recently by the New Zealand Institute of Directors illustrates how oversight of AI can be incorporated into existing governance frameworks that board directors apply to other uses of technology.  

As New Zealand’s largest technology solutions provider, Datacom has extensive experience in building and deploying AI systems, and helping our customers use AI responsibly. We are excited about the potential of this technology and the fact that a plan is taking shape to make the most of it.   
