Using AI tools in your accounting practice while remaining compliant with UK GDPR requires you to treat each AI tool supplier as a data processor, not just a technology vendor. Any AI system that processes client data — names, financial figures, tax identifiers, or business information — must be covered by a Data Processing Agreement, and you must document that processing in your Record of Processing Activities.

This guide explains your obligations as a data controller when using AI tools, what to check before deploying any new tool, and how to manage ongoing compliance as your AI use evolves.

Your role as data controller

When you use an AI tool to process client data, you are the data controller. The AI tool supplier is the data processor. This distinction matters because it places the primary compliance obligations under UK GDPR on your firm, not on the technology provider.

As the data controller, you are responsible for ensuring that:

  • You have a lawful basis for processing the personal data
  • The processing is documented in your Record of Processing Activities (ROPA)
  • The data processor (the AI tool supplier) is covered by a signed Data Processing Agreement
  • Data subjects (your clients) have been informed of the processing in your privacy notice
  • Data is not transferred outside the UK or EEA without an appropriate international transfer mechanism

Failing any of these requirements does not mean the AI tool was the problem. It means your data governance was inadequate. The ICO's enforcement position is that controllers are responsible for the processors they appoint.
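The documentation obligation above can be made concrete. This sketch models a single ROPA entry for one AI tool as a plain Python dictionary, with a helper that flags missing fields. The field names and the supplier name are illustrative assumptions, not a prescribed UK GDPR schema — your firm's ROPA format may differ.

```python
# Illustrative sketch: a minimal ROPA entry for one AI tool, modelled as a
# plain dictionary. Field names and values are assumptions for illustration,
# not a prescribed UK GDPR schema.

ropa_entry = {
    "processing_activity": "AI-assisted bookkeeping categorisation",
    "controller": "Example Accounting LLP",       # your firm
    "processor": "ExampleAI Ltd",                 # hypothetical supplier
    "dpa_signed": "2024-03-01",                   # date the DPA was executed
    "lawful_basis": "legitimate interests",
    "data_categories": ["name", "bank transactions", "invoice details"],
    "data_subjects": ["clients"],
    "transfer_mechanism": "UK-US Data Bridge",    # or IDTA / UK-EEA only
    "retention": "deleted by processor within 30 days of contract end",
}

REQUIRED_FIELDS = [
    "processing_activity", "processor", "dpa_signed",
    "lawful_basis", "transfer_mechanism",
]

def missing_fields(entry: dict, required: list) -> list:
    """Return the required fields that are absent or empty in an entry."""
    return [field for field in required if not entry.get(field)]

# An empty result means the record covers every required field.
print(missing_fields(ropa_entry, REQUIRED_FIELDS))
```

A simple completeness check like this can be run across all entries before an internal audit, so gaps surface before the ICO asks.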

What counts as personal data in accounting AI use

In accounting, almost everything you handle is personal data. A client's name, address, UTR number, NI number, bank account details, payroll records, tax return figures, and company registration information are all personal data under UK GDPR.

When you feed any of this information into an AI tool — whether by uploading a document, pasting text into a chat interface, or connecting via an API — you are processing personal data. The fact that the AI processes it automatically rather than a human reading it does not change the classification.

Some AI tools also collect usage data that can be linked back to individuals — including queries, uploaded file names, and session metadata. Check the supplier's privacy policy and DPA carefully for any secondary uses of data, including use for training AI models.

Important

Free and consumer-grade AI tools — including the free tiers of ChatGPT, Google Gemini, and similar products — typically reserve the right to use inputs to improve their models. This means client data pasted into these tools may be used for AI training by the supplier. This is almost certainly incompatible with your GDPR obligations and your professional duty of confidentiality. Only use business or enterprise versions of AI tools that explicitly exclude data from model training and provide a compliant DPA.

Data Processing Agreements: what to check

Before deploying any AI tool that processes client data, obtain and review the supplier's Data Processing Agreement. A compliant DPA under UK GDPR must cover:

  • The subject matter and duration of the processing
  • The nature and purpose of the processing
  • The type of personal data and categories of data subjects
  • The obligations and rights of the controller
  • Confirmation that the processor will only act on documented instructions from the controller
  • Confirmation that persons authorised to process the data are under appropriate confidentiality obligations
  • Implementation of appropriate technical and organisational security measures
  • Restrictions on sub-processing (and notification requirements when sub-processors change)
  • Assistance with data subject rights requests
  • Assistance with security obligations, breach notification, DPIAs, and prior consultation
  • Deletion or return of data at the end of the contract
  • Provision of all information necessary to demonstrate compliance

If a supplier cannot produce a DPA that covers these points, or if the DPA omits restrictions that would give you sufficient control over the processing, do not deploy that tool on client data.

For practical guidance on selecting AI tools with appropriate GDPR safeguards, see our AI tools and technology for UK accountants hub.

International data transfers

Many AI tools are operated by US companies. Data transferred from the UK to the US for processing must be covered by an appropriate transfer mechanism. Since the UK-US Data Bridge came into effect in October 2023, US companies that certify under the UK Extension to the EU-US Data Privacy Framework can receive UK personal data without requiring additional safeguards.

Check whether your AI tool supplier is certified under the UK-US Data Bridge before assuming transfers are covered. The publicly available Data Privacy Framework participant list shows which organisations hold the UK Extension certification.

If the supplier is not certified, you will need to rely on International Data Transfer Agreements (IDTAs) — the UK equivalent of Standard Contractual Clauses — or another recognised transfer mechanism. Some suppliers include IDTAs within their standard DPA. Others require you to request them separately.

The safest approach is to prioritise AI tools that process and store data within the UK or EEA, eliminating the international transfer question entirely.

Updating your privacy notice

Your client-facing privacy notice must accurately describe how you use AI tools that process client data. If you have introduced new AI tools since your privacy notice was last reviewed, update it before using those tools on live client data.

The notice should explain:

  • That AI tools are used to assist with specific categories of work (document processing, bookkeeping, correspondence)
  • The identity of the main AI tool suppliers used as data processors
  • The lawful basis for that processing
  • How long data is retained within AI systems
  • Whether data is transferred outside the UK and the safeguards in place

You do not need to list every tool you use by name in the main privacy notice, but clients are entitled to ask for more detail about the processing and you must be able to provide it.

If your current privacy notice was written before AI tools were adopted, it almost certainly needs updating. Review it now.

Data minimisation and prompt design

UK GDPR requires that you collect and process only the personal data that is necessary for the specified purpose — this is the data minimisation principle. Applied to AI use, it means you should not include personal data in AI prompts or uploads unless it is genuinely needed for the task.

In practice, this often means anonymising or pseudonymising data before feeding it into an AI tool. For example:

  • When using AI to draft a client communication, refer to the client as "the client" in your prompt rather than using their name, unless personalisation is required
  • When testing AI document processing, use anonymised sample documents rather than live client files
  • When generating reports or analyses, use reference codes rather than client names until the output is reviewed and confirmed

This approach reduces your GDPR exposure and also reduces the risk of inadvertently disclosing one client's data to another through AI system logs or shared sessions.
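The reference-code approach above can be sketched in a few lines. This is a simplified illustration, not a production redaction tool: it swaps a firm-maintained list of client names and NI-number-shaped strings for codes before a prompt is sent, and restores them once the output has been reviewed. The name mapping and regex coverage are assumptions — real client data would need a far more thorough redaction pass.

```python
import re

# Simplified sketch of pseudonymising a prompt before it reaches an AI tool.
# KNOWN_NAMES and the NI pattern are illustrative assumptions; real client
# data needs broader coverage (addresses, UTRs, bank details, etc.).

KNOWN_NAMES = {"Jane Smith": "CLIENT-001"}           # firm-maintained mapping
NI_PATTERN = re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b")   # rough NI number shape

def pseudonymise(text: str):
    """Replace known names and NI-number-like strings with reference codes.

    Returns the redacted text plus a mapping needed to restore it later.
    """
    mapping = {}
    for name, code in KNOWN_NAMES.items():
        if name in text:
            text = text.replace(name, code)
            mapping[code] = name
    for i, ni in enumerate(sorted(set(NI_PATTERN.findall(text))), start=1):
        code = f"NI-REF-{i:03d}"
        text = text.replace(ni, code)
        mapping[code] = ni
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Reinsert real values into reviewed AI output."""
    for code, value in mapping.items():
        text = text.replace(code, value)
    return text

prompt = "Draft a letter to Jane Smith (NI number QQ123456C) about her return."
safe_prompt, mapping = pseudonymise(prompt)
# safe_prompt now contains CLIENT-001 and NI-REF-001 instead of real values
```

Keeping the mapping inside your own systems, and only ever sending the redacted text to the AI tool, is what delivers the reduced exposure described above.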

Data breach obligations when AI is involved

If an AI tool is involved in a personal data breach — for example, if a data leak at the AI supplier exposes client financial information you uploaded — you have the same breach notification obligations as for any other type of breach.

Under UK GDPR, you must notify the ICO within 72 hours of becoming aware of a breach that is likely to result in a risk to the rights and freedoms of individuals. If the breach is likely to result in a high risk, you must also notify the affected data subjects without undue delay.

Your AI tool supplier should notify you promptly if they experience a breach that affects your data. This notification obligation should be written into the DPA. However, do not rely solely on supplier notification — monitor your AI tools for unusual activity and ensure you have a breach response procedure that covers AI-related incidents.

Completing a Data Protection Impact Assessment

If your use of AI involves processing data at scale, automated decision-making, or processing sensitive personal data, you may be required to complete a Data Protection Impact Assessment (DPIA) before you start.

For most accounting practices, AI use for document processing, correspondence drafting, and bookkeeping assistance is unlikely to trigger a mandatory DPIA. However, if you are considering using AI for payroll processing at scale, automated credit assessments, or any system that makes decisions about individuals without human review, a DPIA is required.

Even where it is not mandatory, completing a DPIA is good practice before deploying any significant new AI capability. It forces you to think systematically about the risks and the mitigations, and it creates a documented record that demonstrates your accountability to the ICO if questions arise later.

Key takeaways

  • All AI tools that process client data must be covered by a signed, UK GDPR-compliant Data Processing Agreement before you deploy them on live data.
  • Never use free or consumer-grade AI tools on client data — most reserve the right to use inputs for model training, which is incompatible with your GDPR obligations and professional duty of confidentiality.
  • Update your client-facing privacy notice to reflect AI tool use before deploying new tools on client data.
  • Apply data minimisation when designing AI prompts — use reference codes or pseudonyms rather than identifiable data where the task allows it.
  • You have 72 hours to notify the ICO of a breach if it is likely to risk the rights and freedoms of individuals, even if the breach originated at an AI supplier.

Frequently asked questions

Can I use ChatGPT with client data in my accounting practice?

The free and standard paid versions of ChatGPT include terms that may allow OpenAI to use your inputs for model improvement. This makes them unsuitable for processing identifiable client data. ChatGPT Enterprise and OpenAI's API (with the appropriate data processing terms) exclude data from model training and can provide a DPA. Used with those data processing terms and UK GDPR safeguards in place, these business tiers are permissible, subject to the other compliance requirements described above.

What is a Data Processing Agreement and why do I need one?

A Data Processing Agreement (DPA) is a contract between you (the data controller) and the AI tool supplier (the data processor) that sets out the terms on which personal data is processed. Under UK GDPR Article 28, any arrangement where a processor handles personal data on behalf of a controller must be governed by a binding DPA. Without one, you are in breach of UK GDPR regardless of how securely the supplier handles the data.

Do I need to tell my clients I am using AI tools?

Yes — your privacy notice must accurately describe your data processing activities, including the use of AI tools. Clients have a right to know how their personal data is being processed. You do not need to contact existing clients to announce new AI tools, but your privacy notice must be updated before you deploy those tools on client data, and clients can request further detail at any time.

Where should AI tools store accounting data to comply with UK GDPR?

UK GDPR does not prohibit international data transfers, but it requires appropriate safeguards. The safest option for UK accounting practices is to use AI tools that store and process data within the UK or EEA. If the supplier is US-based, check whether they are certified under the UK-US Data Bridge. For other jurisdictions, you will need an IDTA or another recognised transfer mechanism.

What happens if an AI supplier has a data breach affecting my clients?

Your AI tool supplier is required by the DPA to notify you promptly of any breach affecting your data. Once you are aware, you have 72 hours to notify the ICO if the breach is likely to risk the rights and freedoms of individuals. You may also need to notify affected clients directly if the risk is high. Document your response and investigation thoroughly, as the ICO may request this information.