The legal and regulatory framework of AI

One of the more frequent questions we received during our recent Future of Work series was about the legal and regulatory status of artificial intelligence and what it means for business.

As we have seen in the past, the rapid evolution of new technology is often accompanied by a lag in developing appropriate regulations as governments consider the best course of action to take.

Unsurprisingly, regulation of artificial intelligence (AI) is a growing topic of conversation around the world, with governments, experts and commentators becoming increasingly concerned about the potential negative impacts of this evolving technology.

And there are some good reasons to be concerned. While AI delivers enormous benefits, it also presents risks and challenges, including privacy concerns, bias and discrimination, security risks and the risk of job displacement. We also know that the malicious use of AI tools makes it easier for bad actors to create deep fakes, spread misinformation and compromise security.

As a result, governments around the world are actively looking at different options to regulate AI. While most agree that there is a need for AI to be governed by a regulatory framework, there is less agreement as to what form this regulation should take.

The challenge with designing regulations is creating an enabling framework that supports innovation and amplifies the benefits of AI, both now and in the future, while also ensuring it has adequate safeguards to minimise its negative impacts.

Recently, the UK hosted the global AI Safety Summit, which brought together global leaders, leading AI companies, civil society groups and experts to consider the risks of AI and how they can be mitigated through internationally coordinated action.

The outcome was the Bletchley Declaration, signed by 28 countries and the European Union (EU). The signatories agreed on the need to understand and collectively manage potential risks through a new joint global effort, ensuring AI is developed and deployed in a safe, responsible way for the benefit of the global community.

More recently, the EU has agreed to the AI Act, the world’s first comprehensive law to regulate AI. While the exact shape of these regulations is yet to be worked through, the new law is expected to come into force in 2026 and will include restrictions on the use of AI and fines and penalties for non-compliance.

What does this mean for New Zealand businesses?

New Zealand does not currently have any laws relating specifically to the use of AI, and it is unlikely that we will be a leader when it comes to regulating it.

Instead, we are more likely to be a fast follower, taking our lead from larger countries that have similar legal frameworks to us, such as the UK, Canada, Australia and the EU nations.

In saying that, it is important to note that the EU's AI Act is extraterritorial, which means it applies to New Zealand businesses offering AI systems or services within the EU. If you are offering AI services in the EU, you will need to make sure your product is compliant when the law comes into force, or you could face significant penalties.

And while New Zealand doesn’t yet have any regulation relating specifically to AI, that doesn’t mean it is the Wild West when it comes to how you use it. You must abide by several other laws and you are legally accountable for any decisions or outcomes that arise from your use of AI.

For example, when using an AI tool, you need to be mindful of your obligations under the Privacy Act. It is also your responsibility to ensure any outputs from an AI tool, such as generated financial advice, are legally compliant and meet the standards of New Zealand legislation.

You also need to comply with employment legislation when introducing AI into your business. That means if introducing AI changes someone's job, requires them to be redeployed or reduces staff numbers, you will need to go through the standard change process.

And using AI does not exempt you from copyright or human rights legislation, so you need to be mindful about the information you are sourcing and how you are using it, so that you are not infringing on someone else’s copyright or reinforcing inherent biases.

That is why it is important, when integrating AI into your business, that you take the time to undertake a legal risk assessment to help identify, analyse and manage the potential legal risks you might face.

Once you have completed this assessment, you will be in a much better position to develop a policy that outlines how you will use AI in your business activities. This policy should set out acceptable and unacceptable uses of AI, and ensure you have human input at critical decision points so that you maintain appropriate oversight of any outputs or outcomes you are legally accountable for. Blaming AI is not a legal defence if something goes wrong.

While this might seem overwhelming, the benefits of introducing AI far outweigh the risks, provided you take care to identify and manage those risks.

Our legal team at the EMA can support you through this process, ensuring you are integrating AI into your business safely and with confidence.

To help Kiwi businesses introduce AI, we have developed an Artificial Intelligence Legal Bundle that will give you everything you need to introduce it safely. Included in the bundle is a draft AI policy statement, a checklist for employers and a one-hour complimentary consultation with an employment law expert to discuss how to build an AI framework and discuss the legal implications for your business.

We are here to help New Zealand businesses take advantage of the many benefits that AI can bring. To learn more and see how we can help, simply visit www.futureofwork.ema.co.nz/legal-bundle
