Future of AI

AI poses a unique data protection risk: how to avoid falling foul of ICO fines

By James Fox, Partner in Gardner Leader's Commercial and Life Sciences team

It’s no secret that AI is the biggest emerging technology globally. The market is estimated to be worth around $244 billion today and could rise to over $800 billion by 2030. For many organisations, it is a race to keep up amid ever-growing pressure to automate, cut costs, increase efficiency, and deliver results.

Unlike many other technology implementations, AI can seem relatively inexpensive, with entry-level tools that require little upfront investment to get going.

Small monthly subscription fees make the technology easy to adopt, so it can feel like a no-brainer. From Co-Pilot producing detailed meeting notes to FinanceGPT analysing reports, the technology can have a big impact across all areas of a business.

With larger investments such as Co-Counsel (and the expense of acquiring high-quality data), the results can start to look revolutionary. Hidden financial costs aside – quality data, integration and training, for example – the benefits of AI are widely applauded. But it also poses a unique and critical data protection risk, one that countless firms could easily fall foul of.

The hidden risk of AI

Chatter around whether your data is safe in the hands of AI has picked up over the last 18 months. Much of that conversation is rooted in whether other companies or individuals will be able to find your data through their own use of AI platforms. While this is rightly a risk worth discussing, there is a broader and potentially more damaging issue going under the radar – GDPR compliance.

Enforced in the UK by the Information Commissioner’s Office (ICO), the UK GDPR prohibits the misuse and mismanagement of personal data. A breach of the rules can be hugely costly, in the worst cases resulting in fines of up to £17.5 million or 4% of total annual global turnover, whichever is higher.

While GDPR has been on the agenda for nearly a decade, the rise of AI is causing more and more businesses to become non-compliant, often without realising it. How? It’s all in the way the technology works. AI processes and moves data extensively, comparing it against various sources, using internal tools to analyse it, and potentially transferring it to servers in countries not covered by a company’s data protection policy.

Under GDPR, your organisation has a responsibility to your customers. If their data is unprotected or unaccounted for at any point, whether in transit or at rest, you are in breach of the rules. In the absence of AI-specific laws, current protections rely heavily on existing data protection regulations such as GDPR, which means businesses must ensure their AI systems meet GDPR requirements for data security, transparency, and accountability.

Adding to the risk is the ambiguity of the protection offered by suppliers. Because AI is still, in some ways, an experimental technology, commercial contracts are often lightweight when it comes to liability. Traditional SaaS contracts tend to offer more robust liability protections and clearer terms on data handling and compliance; with most AI contracts, the majority of the responsibility falls on those using the technology rather than those supplying it.

Mitigate these risks without falling behind the AI curve

Many are hopeful that the UK’s AI Bill, expected to become law this summer, will provide clearer frameworks for managing AI-related risks. 

One of the Bill’s key objectives is to establish a pioneering AI Authority, a central regulator tasked with overseeing the development and use of AI technology in the UK. This authority will address critical issues such as data protection and the use of copyrighted material – topics that have garnered significant attention recently. For instance, Meta faced allegations of illegally using unlicensed books and music to train its models. While the specifics of the Bill are still unclear, it is poised to shape the future of AI development in the UK for many years to come.

Key steps

In the meantime, for organisations adopting AI technology but concerned about data protection headaches, here are a few key steps they can take:

  1. Understand the Risk Before Signing: Before entering into any agreement with a supplier, it’s crucial to understand the risks involved. Learn as much as you can about their processes. Ask questions such as: How does the platform work? Where does the data travel to? Can it access data and applications in the background? What data protection policies do they have in place? Knowing the answers will help you build a framework for securing your data.
  2. Create Data Protection Processes: Establish data protection processes for every touchpoint. Where you have concerns, draw up specific contract terms to secure additional liability protection from vendors.
  3. Limit Your Exposure: As with any type of cyber or data security, controlling who and what has access is key. Ensure your AI platform only has access to the data it truly needs to fulfil its function.
  4. Train Employees: It may seem obvious, but staff need to know how to use AI securely and safely. Educate them about the risks and ensure they understand and adhere to your data protection policies.
  5. Undertake Regular Risk Assessments: The key to managing risk is having oversight of it. Conduct regular risk assessments to ensure your processes and policies remain up to date.

AI has opened up vast opportunities for businesses, offering new ways to reduce costs, increase revenue, and enhance operational efficiency. However, if you fail to follow the rules and are hit with a multi-million-pound fine, those savings will quickly be wiped out. As with any technology implementation or operational change, it is critical to balance risk and reward, and AI is no different.
