Artificial intelligence (AI) has swiftly transitioned from an aspirational concept to an everyday reality. There’s an increasingly broad array of tools, like ChatGPT and MidJourney, that enable you to generate quality written content or amazing images in a fraction of the time it would take a mere mortal. As well as writing content, these tools can change its tone, summarise it, and translate it. You can give ChatGPT examples of your own writing and ask it to model your style, ask it to write code, have it write you a sonnet, or get it to come up with a recipe for vegan banana muffins. All in lightning-fast time.

These advanced AI tools, underpinned by complex algorithms, natural language processing, and large language models, promise unprecedented efficiencies in generating content and analysing data. However, quite apart from the apocalyptic fears that some people hold, their use raises important issues relating to intellectual property rights, the protection of personal and confidential information, and factual accuracy.

For example:

  • given the vast array of potentially copyright-protected material that AI tools like ChatGPT have consumed as part of their training, there is a risk that their generative output could infringe third-party copyright;
  • there are untested questions in New Zealand as to whether new copyright arises in AI-generated output (despite New Zealand law recognising copyright in computer-generated works);
  • a service provider may think it owns copyright in AI-generated content when, in fact, there may be no copyright in it at all, either because New Zealand law may recognise no relevant author (an open issue) or because the output isn’t original (in the case of content generated by ChatGPT, a service provider may assume ownership because the OpenAI terms state: “Subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output”);
  • to generate output, a service provider might enter personal, confidential, or classified information into an AI tool like ChatGPT, raising privacy, confidentiality, and security issues; and
  • there is a risk of an AI tool’s output being incorrect or biased.

Given these risks, there could be circumstances where an agency wishes to prohibit the use of AI tools in the performance of services without the agency’s prior consent. This could be the case where, for example, a service provider is contracted to provide written, artistic, design, or code-oriented deliverables. The agency may want an “AI Use Restriction” clause in its contract with the service provider.

Drafting such a clause can be a little tricky, as we need to come up with a meaningful definition of something like AI Tool, or Artificial Intelligence Tool. I’ve had a crack at drafting a full clause (including a definition of Artificial Intelligence Tool). To have a look, sign up to the Contract Foundry today. If you have a .govt.nz or parliament.nz email address, you can sign up to the knowledge base for free, and you’ll get limited-time access to the clause library, where you’ll find the “AI Use Restriction” clause. If you don’t have one of these email addresses but are still in the public sector, let me know and I’ll add your agency’s domain as a qualifying domain. Everyone else can access everything (including nine contract builders) by signing up for a subscription.
