AI and Your Company: A Privacy Roadmap (Part 2)

Published by Alisha McKerron Heese on 16 October 2025

Now that we have a structured technical understanding of AI and the privacy challenges it raises (see AI and Your Company: A Privacy Roadmap (Part 1)), what further steps should we privacy professionals take to achieve AI privacy?

Step 1: Understand the Regulatory Gaps and Categorise AI Law

The starting point is recognising that traditional privacy frameworks — built on principles like transparency, fairness, data minimisation, meaningful human review, and data subject rights — were not designed for the unique challenges posed by AI. Specifically: (i) the black-box nature of many AI systems makes it incredibly difficult to understand how decisions are reached, challenging transparency and explainability; (ii) biased data or algorithms can result in systemic and difficult-to-detect discriminatory outcomes, challenging fairness; (iii) AI systems often require enormous amounts of data to train and operate, challenging the principle of data minimisation; (iv) the speed and complexity of AI decision-making can hinder individuals' ability to interpret and oversee outputs, challenging meaningful human review; and (v) dynamic data flows make it difficult for individuals to understand, access and exercise their rights, e.g. rectification or erasure. There are also gaps around deepfakes, harassment and scams.

Given these limitations, AI regulatory requirements generally fall into five categories. Understanding this landscape is crucial for strategic compliance:

| Category | Description | Example |
| --- | --- | --- |
| Comprehensive regulations | Broad laws covering all AI systems based on their risk level, with general obligations for development and deployment. | EU AI Act, with its Code/s and Guidelines |
| Sector-specific regulations | Regulations which build on existing data handling rules but add AI-specific requirements to existing industry-specific legal frameworks. | Healthcare, finance and employment |
| Generative AI-focused regulations | Specific rules for large language models (LLMs) and other generative AI, addressing issues like copyright, content labelling and model safety. | California AB 2013; see also the EU Code/s above |
| Data/privacy-centric AI provisions | Often amendments or specific sections within broader privacy laws that address automated decision-making, profiling and algorithmic bias. | GDPR Article 22 |
| Ethical AI guidelines and voluntary frameworks | Non-binding recommendations and best practices, often promoting privacy by design and fairness. | NIST (USA), ISO (international), OECD AI Principles |

Step 2: Deep Dive into Key Global Frameworks

Once the landscape is categorised, privacy professionals must familiarise themselves with the specific legal texts that most impact the business’s operations and geographic scope.

| Law/Framework | Primary focus | Key example of requirement |
| --- | --- | --- |
| EU AI Act | Risk-based framework that prioritises transparency, accountability and ethical AI development. | Strict compliance measures for "high-risk" AI systems. |
| Australian National Framework for the Assurance of Artificial Intelligence in Government | Sets foundations for a nationally consistent approach to AI assurance in government. | Clear expectations and consistency for partners in public-sector AI use. |
| California Generative Artificial Intelligence: Training Data Transparency Act (AB 2013) | Requires transparency about the datasets used to train models. | Requires documentation of copyrighted material used in training data. |
| Colorado Artificial Intelligence Act | Aims to protect individuals from risks associated with algorithmic discrimination and requires AI assessments. | Requires AI assessments to prevent discrimination. |
| New York City Local Law 144 | Requires employers who use AI for hiring to subject those AI systems to regular bias audits. | Mandatory, independent bias audits for employment screening tools. |

Step 3: Internally Define Core Principles

When faced with such a diverse array of AI laws, some broad and some highly specific, and with no global agreement on a single, universal list of the issues that affect stakeholders, a helpful way to prioritise your compliance efforts is to obtain executive agreement on Core AI Governance Principles tailored to your organisation. These principles act as the internal "constitution" for AI use, providing clarity when facing diverse, sometimes conflicting, global regulations. HP's AI governance principles are a good example.

Step 4: Operationalise: Foster Collaboration Across Teams and Assign Roles

If we are going to translate legal requirements into actionable technical and operational constraints, we need crucial input from internal stakeholders across the AI system's lifecycle. The agreed-upon core principles need to be assigned to governance teams that already exist within your organisation, and there must be agreement as to who will become the expert or lead on each topic.

| Stakeholder | Necessary input for privacy counsel | Why it's needed |
| --- | --- | --- |
| Data Science / Engineering | Technical architecture and mechanics: detailed information on the model type (e.g. deep learning, classic ML), the training data sources and quantity, the algorithms used, and how outputs are generated (the system's "logic"). | To assess explainability, identify high-risk processing, and determine the technical feasibility of data subject rights (e.g. erasure, correction). |
| Product Management | Use case and purpose: the specific business goal of the AI system, the intended outcomes for users, the expected data flow (input/output), and the target user base. | To establish the lawful basis for processing, confirm the system complies with purpose limitation, and scope the required Data Protection Impact Assessment (DPIA). |
| IT / Security | Security controls and environment: details on data access controls, encryption methods, data retention policies, and whether the AI is hosted internally, by a vendor, or in a public cloud. | To advise on security by design, manage risks like model inversion attacks, and ensure contractual security standards for vendors are met. |
| Business Units / HR | Real-world impact and governance: how the AI is used in practice (e.g. hiring, profiling, customer service) and the company's overall risk appetite and ethical principles. | To assess potential algorithmic bias leading to discrimination and ensure human oversight mechanisms are in place for automated decisions. |
| Procurement / Vendor Management | Third-party contracts: terms of service, data processing agreements (DPAs), and assurances from AI tool vendors regarding their use of the data (especially with generative AI). | To determine the company's and vendor's Controller/Processor roles and manage third-party data risks (e.g. data input being used for vendor model training). |

Step 5: Establish a Scalable AI Governance and Assessment Programme 

The final step is to build a scalable AI governance and assessment programme to mitigate risk and enable responsible innovation. This programme rests on five pillars, balancing traditional risk-management concerns against the opportunity cost of delaying deployment:

| Pillar | Purpose |
| --- | --- |
| Clear policies and principles | To set expectations and provide the workforce with resources on how to interact with AI at all stages of the lifecycle. |
| Cross-functional committees | To ensure collaboration, sign-off and shared responsibility across teams (legal, engineering, product, risk). |
| Data mapping and AI inventories | To know where AI systems are, what data they touch, and where they sit in their lifecycle (training vs deployment); a minimal inventory-record sketch follows this table. |
| Assessments | To flag and remediate high-priority risks, such as algorithmic bias, explainability issues and lack of human oversight, before deployment. |
| Training and awareness | To promote understanding of AI risks, individual responsibilities and adherence to ethical policies. |
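
To make the "AI inventory" pillar concrete, here is a minimal sketch of what a single inventory record might capture, written as a Python dataclass. The field names and values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AIInventoryEntry:
    """One record in a hypothetical AI system inventory (fields are illustrative)."""
    system_name: str                   # e.g. "customer-support-chatbot"
    owner_team: str                    # accountable business unit
    lifecycle_stage: str               # "training", "pilot" or "deployed"
    personal_data_categories: List[str] = field(default_factory=list)
    lawful_basis: str = "tbd"          # e.g. "legitimate interests", "consent"
    high_risk: bool = False            # flags the need for a DPIA / bias assessment
    last_assessed: Optional[date] = None

# Example entry for a hypothetical HR screening tool
entry = AIInventoryEntry(
    system_name="resume-screening-tool",
    owner_team="HR",
    lifecycle_stage="pilot",
    personal_data_categories=["name", "employment history"],
    high_risk=True,
)
print(entry)
```

Even a simple record like this gives legal, engineering and risk teams a shared view of where each system sits in its lifecycle and whether an assessment is outstanding.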

AI and Your Company: A Privacy Roadmap (Part 1)

Published by Alisha McKerron on 24 September 2025 and revised on 13 October 2025

It seems there is no avoiding artificial intelligence (AI) these days. The topic comes up constantly in the news, eye-watering amounts of money are being invested in AI systems, and it touches our lives ever more, through products we use at home, on the go, at work, in commerce, and in healthcare. Some say it may make the world's economic growth explode! So how should we privacy lawyers tackle compliance issues relating to the use of AI in all its forms? A structured technical understanding of AI and the resulting privacy challenges is a good start.

Step 1: Differentiate True AI from "Smart" AI

As a first step it is important to recognise when an AI system is actually involved. Many seemingly "smart" products are commonly assumed to be powered by AI when, in fact, they rely on older, simpler technology. This is often because the term "AI" is used as a marketing buzzword to suggest a level of intelligence and adaptability that isn't there. Take chatbots as an example:

Rule-based chatbots work on simple "if/then" logic. If you type a specific question like "What are your opening hours?", the bot is programmed to recognise the exact phrase and provide a pre-written answer. If you ask the same question in a slightly different way (e.g. "Are you open today?"), the bot may fail to understand and simply respond with "I'm sorry, I don't understand." This is a rule-based system, not a learning AI.

AI-powered chatbots, on the other hand, employ AI components to process user input. The system analyses a user query by identifying the user's intent (their goal) and extracting entities (key pieces of information), even with variations in phrasing, typos, or colloquialisms. Instead of relying on a limited, pre-written library of responses, it can generate novel, human-like text on the fly. It can simulate human conversation, but it is based on complex computational and statistical models rather than actual understanding or thought.
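
To make the distinction concrete, here is a minimal, illustrative Python sketch. The phrases, keyword checks and canned answers are all invented: the rule-based bot only recognises an exact phrase, while the "AI-style" bot first classifies the user's intent, so rephrasings still get a useful answer. In a real system the keyword check would be replaced by an ML/NLP model.

```python
# Rule-based: exact-match lookup, brittle to rephrasing.
CANNED_ANSWERS = {
    "what are your opening hours?": "We are open 9am-5pm, Monday to Friday.",
}

def rule_based_reply(message: str) -> str:
    return CANNED_ANSWERS.get(message.strip().lower(), "I'm sorry, I don't understand.")

# Intent-based: classify what the user wants, then respond.
# The keyword check below is a toy stand-in for an ML intent classifier.
def classify_intent(message: str) -> str:
    text = message.lower()
    if any(word in text for word in ("open", "hours", "closing")):
        return "ask_opening_hours"
    return "unknown"

def ai_style_reply(message: str) -> str:
    if classify_intent(message) == "ask_opening_hours":
        return "We are open 9am-5pm, Monday to Friday."
    return "Could you rephrase that?"

print(rule_based_reply("Are you open today?"))  # falls back: no exact match
print(ai_style_reply("Are you open today?"))    # intent matched despite rephrasing
```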

Step 2: Grasp the Mechanics of Core AI Models

As a next step it is important to have a basic understanding of the core models that power AI systems. They include machine learning (ML) models, deep learning (DL) models, and large language models (LLMs). 

ML is a broad subset of AI where systems learn from data to make predictions or decisions without being explicitly programmed. These models are effective for well-defined tasks with structured data. There are three main types of ML models based on how they learn: (i) Supervised learning models which are trained on labelled datasets, where the correct answer is provided to the model during training. They’re commonly used for tasks like email spam detection or classifying accounts by type; (ii) Unsupervised learning models which find patterns in unlabelled data without human intervention. They’re used for things like customer segmentation or grouping similar data points; and (iii) Reinforcement learning models which learn through a trial-and-error process, receiving rewards for good behaviours and penalties for bad behaviours. This is often used for teaching robots or AI to play games.
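
As a brief, hedged sketch of the first two learning styles using scikit-learn (the toy features and labels are invented, and reinforcement learning is omitted because it needs an environment to interact with):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: each example comes with a label ("the correct answer").
X_train = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]   # toy features
y_train = ["spam", "spam", "not_spam", "not_spam"]           # labels provided in training
clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([[0.85, 0.15]]))  # predicts a label for a new example

# Unsupervised: no labels; the model finds structure (clusters) on its own.
kmeans = KMeans(n_clusters=2, n_init=10).fit(X_train)
print(kmeans.labels_)  # cluster assignment for each training point
```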

Deep learning is an advanced subset of ML that uses artificial neural networks with multiple layers to process complex, unstructured data like images, video, and text. The layered structure allows them to automatically learn complex features from data with minimal human intervention. Two common types of network are: (i) convolutional neural networks (CNNs), a specialised type of deep learning algorithm particularly well suited to analysing visual data, used in tasks like image classification, object detection, and facial recognition; and (ii) recurrent neural networks (RNNs), which are designed for sequential data, like time series or natural language, because they have a form of "memory" that allows them to process information in sequence.
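
For readers who want to see what "layers" means in practice, here is a minimal Keras sketch of a small CNN for image classification; the layer sizes and input shape are illustrative assumptions, not tuned for any real task.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),               # e.g. 28x28 greyscale images
    layers.Conv2D(16, (3, 3), activation="relu"),  # learns local visual features
    layers.MaxPooling2D((2, 2)),                   # downsamples the feature maps
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),        # prediction over 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```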

LLMs are a cutting-edge type of deep learning model that are pre-trained on vast amounts of text data from the internet. They use a transformer architecture which allows them to understand context and the relationships between words across a text. LLMs are incredibly flexible and are primarily used for: (i) Generative AI which creates new content such as text, code, or images; and (ii) Natural Language Processing (NLP) for “understanding”, summarizing, translating, and classifying text.
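
As an illustration only, a pre-trained language model can be invoked in a few lines with the Hugging Face transformers library; the model name ("gpt2") is simply a small, freely downloadable example, not a recommendation.

```python
from transformers import pipeline

# Downloads a small pre-trained model and generates a text continuation.
generator = pipeline("text-generation", model="gpt2")
result = generator("Data protection impact assessments are", max_new_tokens=20)
print(result[0]["generated_text"])
```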

Step 3: Deconstruct the AI System Architecture

Before assessing risk, you must understand the individual components of the AI system your organization uses and where the intelligence resides.

Continuing with the AI chatbot example, the components would include: a user interface, a natural language processing (NLP) engine, a dialogue manager, a knowledge base/database, and a natural language generation (NLG) engine. The user interface (the part of the system the user sees, such as a chat window or a voice interface) and the knowledge base/database (where the chatbot's information is stored) do not use AI. But the other components do, or may do.

The NLP engine, which interprets human language by identifying the user's intent (their goal) and extracting key entities (important pieces of information) from a query, uses ML; the NLG engine, which generates a human-like response from the data provided by other components, uses DL and an LLM. The dialogue manager, which uses the information from the NLP engine to decide how the chatbot should respond, may or may not use AI depending on how sophisticated it is.
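
A minimal sketch of how these components might hand off to one another; every function body below is an illustrative stand-in rather than a real implementation.

```python
def nlp_engine(message: str) -> dict:
    """Interpret the user's message: identify intent and extract entities (ML)."""
    return {"intent": "ask_opening_hours", "entities": {"day": "today"}}

def dialogue_manager(parsed: dict, knowledge_base: dict) -> dict:
    """Decide how to respond, pulling facts from the (non-AI) knowledge base."""
    return {"facts": knowledge_base.get(parsed["intent"], {})}

def nlg_engine(plan: dict) -> str:
    """Generate a human-like reply from the selected facts (DL / LLM)."""
    return f"We are open {plan['facts'].get('hours', 'during business hours')}."

knowledge_base = {"ask_opening_hours": {"hours": "9am-5pm, Monday to Friday"}}

# User interface -> NLP engine -> dialogue manager -> NLG engine -> user interface
user_message = "Are you open today?"
reply = nlg_engine(dialogue_manager(nlp_engine(user_message), knowledge_base))
print(reply)
```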

Step 4: Map AI Model Type to GDPR Remediation Difficulty

A critical insight for privacy lawyers is that the architectural complexity of a model, coupled with its data processing methodology, determines the spectrum of GDPR compliance risk and the difficulty of remediation.

| Model type | Compliance challenges |
| --- | --- |
| Simpler ML | Trained on smaller, proprietary and often labelled data. The model focuses on content, not identity, allowing anonymisation/pseudonymisation and training within a secure, controlled environment. Remediation is manageable through data minimisation and strict access controls. |
| Massive LLM | Trained on a huge, diverse internet corpus that inevitably contains personal data. This creates the risk of verbatim memorisation and potential leakage of sensitive training data via prompts, violating the GDPR core principles of purpose limitation and storage limitation. Honouring the right to erasure and establishing clear data lineage are nearly impossible. |

Step 5: Assess the AI Delivery Method for Data Control

The company's chosen method of delivery dictates its level of control and visibility over customer data, which is essential for defining the Controller/Processor relationship.

| Delivery method | Control and visibility | Primary compliance risk |
| --- | --- | --- |
| Software-as-a-Service | Low. The business (Data Controller) is reliant on the provider (Data Processor) for security and data handling within the provider's system. | Data security, international data transfers, and the provider potentially using customer data to train public LLMs. Requires a strong DPA. |
| API integration | Medium. The business has more direct control over the data being sent via the API call. | The business must practise data minimisation (redacting personal data before sending; see the sketch after this table) and manage third-party data transfers. |
| On-premises deployment | High. The business is the sole Data Controller and Processor; data never leaves the company's data centre. | Most GDPR-compliant option. The main risk is internal security protocol failures. |
| Embedded AI in hardware | High (privacy by design). Processing occurs locally on the device's chip; data is minimised before any cloud transfer. | The business must be transparent about any data collected and sent to the cloud, requiring explicit consent for cloud processing. |
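
To illustrate the data-minimisation point in the API-integration row, here is a minimal Python sketch that redacts obvious identifiers before a prompt leaves the business. The regex patterns and the call_vendor_api placeholder are assumptions for illustration, not a complete de-identification solution.

```python
import re

# Simple illustrative patterns; real redaction would need far broader coverage.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholder tokens."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

def call_vendor_api(prompt: str) -> str:
    """Stand-in for the real vendor call (e.g. an HTTPS request governed by a DPA)."""
    return f"(vendor response to: {prompt})"

raw_prompt = "Summarise this complaint from jane.doe@example.com, phone +44 20 7946 0000."
print(call_vendor_api(redact(raw_prompt)))  # only the redacted prompt is sent out
```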

Step 6: Familiarise Yourself with Applicable Laws

The final step is to stay current with the rapidly evolving regulatory landscape. New and revised local and extraterritorial laws relating to AI, such as the EU’s recent acts, are emerging constantly. Prioritising compliance efforts based on your company’s sector, jurisdiction and role (Controller/Processor) is vital for building the necessary governance framework —a crucial topic I shall tackle in my next blog post.