Human in the loop: How to navigate the legal complexities of generative AI

It’s confusing and ever-changing, but with clear-sighted, common-sense policies and guidance, you can steer clear of the pitfalls and make AI work for you

The pace of change in the world of AI is eye-wateringly swift – ChatGPT was only launched in late November 2022 – and it’s increasingly important for marketers and advertisers to understand the legal nuances of generative AI. Despite the uncertainties surrounding the specifics of regulations in this domain, the advertising industry must prepare for what lies ahead.

“Unlike other technological innovations, such as social media, where regulation lagged quite a long way behind the implementation of the technology, in AI regulation will come early and be driven by governments as well as regulatory bodies,” explained Rebecca Sykes, a partner at the marketing technology group The Brandtech Group.

Sykes was speaking at the recent Power of AI Summit, an event jointly hosted by Campaign and Performance Marketing World.

The regulatory landscape

Broadly speaking, the European Union – with its proposal for overarching AI law – is expected to be more stringent in regulatory terms than the UK, which itself will be marginally tougher than the US.

The EU proposals are rules-based and risk-driven, broken down into four categories of risk: unacceptable, high, limited and minimal. The ‘limited’ category is likely to be the most relevant for day-to-day use, requiring disclosure to users and customers when they are, for example, engaging with chatbots or viewing content generated by AI.

In the UK, an AI white paper proposes a principles-based approach with sector-specific regulation rather than overarching legislation. In the US, President Biden’s executive order seeks to regulate the federal government’s use of AI and to use the purchasing power of the US government to drive market behaviours.

The three levels of AI

There are essentially three layers of GenAI usage. The foundation level is the large language model (LLM), such as OpenAI’s GPT, Google’s PaLM 2 or Meta’s LLaMA. A key consideration here is the legal terms governing use of the LLM and whether the data you input will be used to retrain the model.

The application layer (such as ChatGPT, Midjourney or Stable Diffusion) is where the actively creative, generative operations take place, and it’s important that the usage terms of the applications align with those of the underlying LLM.

The passive layer is where AI is already integrated into everyday production tools and suites, with Adobe, Microsoft and Google between them representing 80% of the world’s commercial content. “You may not even realise AI is woven into the tools that you're using,” said Sykes. “When you come to write a policy, it’s important to think about both the times when we’re actively choosing an AI tool for a particular job, as well as when we're passively using it in the way that we do business every day.”

Proper prompts and the importance of data input

When it comes to potential infringement of rights or licences, everything starts with the prompt used to generate AI output. If your prompt itself infringes (eg asking for an image of ‘Wonder Woman on a horse’), the output will also infringe.
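
One way a team might operationalise this, sketched here purely for illustration (the article doesn’t prescribe any particular mechanism, and the blocklist below is hypothetical), is a pre-submission screen that flags prompts referencing known protected characters or marks for human review:

```python
# Hypothetical blocklist for illustration only; a real one would come from
# legal review and be far more comprehensive.
PROTECTED_TERMS = {"wonder woman", "darth vader", "mickey mouse"}

def flag_prompt(prompt: str) -> list[str]:
    """Return any protected terms found in the prompt, so a human can review it."""
    lowered = prompt.lower()
    return [term for term in PROTECTED_TERMS if term in lowered]

hits = flag_prompt("Wonder Woman on a horse")
if hits:
    print(f"Prompt needs legal review before use: {hits}")
```

A screen like this only catches known names, of course; it supplements, rather than replaces, human judgment about what a prompt is actually asking for.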

Fine-tuning data to refine the particular operation of the system is key. If you want to take contextual or other scene-setting data from within your organisation and use it to train and inform the behaviour of the model you deploy, you need to be clear about what rights you have to that data.

Anything non-proprietary in the public domain that you have access or rights to is safe to input, but real business data and commercially sensitive information carry risk, while the use of personal data would be a red line.

Risk can be mitigated by keeping a prompt log, so that the lifecycle of the generative AI process can be tracked.
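
In practice, a prompt log can be as simple as an append-only record of who ran which prompt, against which tool and model, and what came out. The schema below is a minimal sketch under assumed field names, not a prescribed format:

```python
import hashlib
import json
import time
from pathlib import Path

LOG_PATH = Path("prompt_log.jsonl")  # hypothetical location; one JSON record per line

def log_prompt(user: str, tool: str, model: str, prompt: str, output: str) -> None:
    """Append one auditable record covering the prompt-to-output lifecycle."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "tool": tool,    # application layer, eg an image generator
        "model": model,  # foundation layer, eg the underlying LLM
        "prompt": prompt,
        # Store a hash rather than the asset itself, so the log stays small
        # but each output can still be matched back to the prompt that made it.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Because each record ties an output back to its prompt, tool and model, the log provides exactly the lifecycle trail described above if a rights question surfaces later.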

The challenges of ownership and copyright

So, you’ve created an amazing, innovative piece of content but who does it belong to? These are grey areas. 

In terms of protecting the asset you’ve created, the more human involvement in the process, the more likely it is that copyright will subsist in the work. UK copyright law does cover computer-generated works, stating that the owner is ‘the person who made the arrangements necessary for the creation of the work’. Hence the importance of the ‘human in the loop’, a significant phrase in the GenAI process.

Sykes offered the reassurance that much of the existing content and assets in common usage – such as stock images licensed on a non-exclusive basis, or low-risk generic assets – is not owned by the user either. “So GenAI is a relevant, viable alternative to those models, because we don’t have copyright in those assets that we use either,” she explained.

Strategy and policy

The advice is not to wait for regulation to be formalised. Businesses need a policy now, and it should consider five elements: transparency and accountability; IP and ownership; human in the loop; inputs and first-party training data; and legal showstoppers.

The next stage is to address bias mitigation in the AI model being used, as well as the model’s carbon footprint, because LLMs consume vast amounts of computing power.

The ultimate aim is self-governance. Businesses should develop their own set of principles around what AI should do, why and how, so that its usage adheres to their own standards.

Greenlisting

The Brandtech Group has greenlisted dozens of AI tools using a traffic light system (a sketch of how such a register might be kept in code follows the list):

  • Client-facing and publishable (green) 
  • Moodboarding and ideation that can be shared with clients but only as part of an internal process (amber)
  • Not safe for use (red), where there are issues around ownership of the output, a loss of control, or a platform licence that is too broad.
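
As a minimal sketch of how such a register might be encoded so a policy check can run before a tool is used (the tool names and default below are illustrative assumptions, not The Brandtech Group’s actual list):

```python
from enum import Enum

class Rating(Enum):
    GREEN = "client-facing and publishable"
    AMBER = "internal moodboarding and ideation only"
    RED = "not safe for use"

# Hypothetical entries for illustration; a real register would be maintained
# through legal review, not hard-coded.
TOOL_REGISTER = {
    "approved-image-tool": Rating.GREEN,
    "ideation-assistant": Rating.AMBER,
    "unvetted-generator": Rating.RED,
}

def check_tool(name: str) -> Rating:
    """Unlisted tools default to red until they have been reviewed."""
    return TOOL_REGISTER.get(name, Rating.RED)

print(check_tool("ideation-assistant").value)  # internal moodboarding and ideation only
```

Defaulting unknown tools to red mirrors the cautious posture described above: a tool earns its way onto the green or amber list rather than being assumed safe.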
