AI and Your Company: A Privacy Roadmap (Part 2)

Published by Alisha McKerron Heese on 16 October 2025

Now that we have a structured technical understanding of AI and the privacy challenges it creates (see AI and Your Company: A Privacy Roadmap (Part 1)), what further steps should we privacy professionals take to achieve AI privacy?

Step 1: Understand the Regulatory Gaps and Categorise AI Law

The starting point is recognising that traditional privacy frameworks — built on principles like transparency, fairness, data minimisation, meaningful human review, and data subject rights — weren’t built for the unique challenges posed by AI. Specifically:

(i) The black-box nature of many AI systems makes it incredibly difficult to understand how decisions are reached, challenging transparency and explainability.

(ii) Biased data or algorithms can result in systemic and difficult-to-detect discriminatory outcomes, challenging fairness.

(iii) AI systems often require enormous amounts of data to train and operate, challenging the principle of data minimisation.

(iv) The speed and complexity of AI decision-making can hinder individuals’ ability to interpret and oversee outputs, challenging meaningful human review.

(v) Dynamic data flows make it difficult for individuals to understand, access and exercise their rights, e.g. rectification or erasure.

There are also gaps around deepfakes, harassment and scams.

Given these limitations, AI regulatory requirements generally fall into five categories. Understanding this landscape is crucial for strategic compliance:

| Category | Description | Example |
| --- | --- | --- |
| Comprehensive regulations | Broad laws covering all AI systems based on their risk level, with general obligations for development and deployment. | EU AI Act, with its Code/s and Guidelines |
| Sector-specific regulations | Regulations that build on existing data-handling rules but add AI-specific requirements to existing industry-specific legal frameworks. | Healthcare, finance and employment |
| Generative-AI-focused regulations | Specific rules for large language models (LLMs) and other generative AI, addressing issues like copyright, content labelling and model safety. | California AB 2013; see Code/s above |
| Data/privacy-centric AI provisions | Often amendments or specific sections within broader privacy laws that address automated decision-making, profiling and algorithmic bias. | GDPR Article 22 |
| Ethical AI guidelines and voluntary frameworks | Non-binding recommendations and best practices, often promoting privacy by design and fairness. | NIST (USA), ISO (international), OECD AI Principles |

Step 2: Deep Dive into Key Global Frameworks

Once the landscape is categorised, privacy professionals must familiarise themselves with the specific legal texts that most impact the business’s operations and geographic scope.

| Law/Framework | Primary Focus | Key Example of Requirement |
| --- | --- | --- |
| EU AI Act | Risk-based framework that prioritises transparency, accountability and ethical AI development. | Strict compliance measures for “high-risk” AI systems. |
| Australian National Framework for the Assurance of Artificial Intelligence in Government | Sets foundations for a nationally consistent approach to AI assurance in government. | Clear expectations and consistency for partners in public-sector AI use. |
| California Generative Artificial Intelligence: Training Data Transparency Act (AB 2013) | Requires transparency about the data sets used to train models. | Requires documentation of copyrighted material used in training data. |
| Colorado Artificial Intelligence Act | Aims to protect individuals from the risks of algorithmic discrimination and requires AI assessments. | Requires AI assessments to prevent discrimination. |
| New York City Local Law 144 | Requires employers who use AI for hiring to subject those AI systems to regular bias audits. | Mandatory, independent bias audits for employment screening tools (a sketch of the arithmetic behind such audits follows this table). |
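Bias audits of the kind New York City Local Law 144 requires are, at their core, arithmetic on selection rates: each group’s selection rate is compared against the most-selected group to produce an “impact ratio”. A minimal sketch of that calculation follows; the applicant data is invented for illustration, and the four-fifths (0.8) benchmark is a common rule of thumb from US employment-selection guidance, not a threshold set by the law.

```python
# Minimal sketch of the "impact ratio" arithmetic used in hiring bias audits.
# The applicant data below is purely illustrative.
from collections import defaultdict

# (group, was_selected) pairs for applicants screened by an AI hiring tool
applicants = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in applicants:
    totals[group] += 1
    selected[group] += was_selected

# Selection rate per group, and impact ratio relative to the highest rate
rates = {group: selected[group] / totals[group] for group in totals}
best_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best_rate
    # The four-fifths (0.8) benchmark is a common rule of thumb,
    # not a threshold defined by Local Law 144 itself.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```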

Step 3: Internally Define Core Principles

Faced with such a diverse array of AI laws (some broad, some highly specific) and no global agreement on a single universal set of issues affecting stakeholders, a helpful way of prioritising your compliance efforts is to obtain executive agreement on Core AI Governance Principles tailored to your organisation. These principles act as the internal “constitution” for AI use, providing clarity when facing diverse, sometimes conflicting, global regulations. HP’s AI governance principles are a good example.

Step 4: Operationalise: Foster Collaboration Across Teams and Assign Roles

If we are going to translate legal requirements into actionable technical and operational constraints, we need crucial input from various internal stakeholders across the AI system’s lifecycle. The agreed-upon core principles need to be assigned to governance teams that already exist within your organisation, and there must be agreement as to who will become the expert or lead on each topic. One practical way to consolidate these inputs is a single structured intake record per AI system, sketched after the table below.

| Stakeholder | Necessary Input for Privacy Counsel | Why it’s needed |
| --- | --- | --- |
| Data Science / Engineering | Technical architecture and mechanics: detailed information on the model type (e.g. deep learning, classic ML), the training data sources and quantity, the algorithms used, and how outputs are generated (the system’s “logic”). | To assess explainability, identify high-risk processing, and determine the technical feasibility of data subject rights (e.g. erasure, correction). |
| Product Management | Use case and purpose: the specific business goal of the AI system, the intended outcomes for users, the expected data flow (input/output), and the target user base. | To establish the lawful basis for processing, confirm the system complies with purpose limitation, and scope the required Data Protection Impact Assessment (DPIA). |
| IT / Security | Security controls and environment: details on data access controls, encryption methods, data retention policies, and whether the AI is hosted internally, by a vendor, or in a public cloud. | To advise on security by design, manage risks like model inversion attacks, and ensure contractual security standards for vendors are met. |
| Business Units / HR | Real-world impact and governance: how the AI is used in practice (e.g. hiring, profiling, customer service) and the company’s overall risk appetite and ethical principles. | To assess potential algorithmic bias leading to discrimination and ensure human oversight mechanisms are in place for automated decisions. |
| Procurement / Vendor Management | Third-party contracts: terms of service, data processing agreements (DPAs), and assurances from AI tool vendors regarding their use of the data (especially with generative AI). | To determine the company’s and vendor’s Controller/Processor roles and manage third-party data risks (e.g. data input being used for vendor model training). |
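To show how these inputs can be consolidated, here is a minimal sketch of a per-system intake record in Python. Every field name is an illustrative assumption, not something prescribed by the laws discussed above.

```python
# Minimal sketch of a per-system intake record consolidating the stakeholder
# inputs from the table above. All field names are illustrative.
from dataclasses import dataclass

@dataclass
class AIIntakeRecord:
    system_name: str
    # Data Science / Engineering: architecture and mechanics
    model_type: str                          # e.g. "deep learning", "classic ML"
    training_data_sources: list[str]
    # Product Management: use case and purpose
    business_purpose: str
    lawful_basis: str                        # e.g. "legitimate interests", "consent"
    dpia_required: bool
    # IT / Security: controls and hosting environment
    hosting: str                             # "internal", "vendor" or "public cloud"
    encryption_at_rest: bool
    # Business Units / HR: real-world impact and oversight
    automated_decisions_about_people: bool
    human_oversight_mechanism: str
    # Procurement / Vendor Management: third-party terms
    vendor_dpa_in_place: bool
    vendor_may_train_on_inputs: bool

# Example entry for a hypothetical hiring tool
record = AIIntakeRecord(
    system_name="resume-screener",
    model_type="classic ML",
    training_data_sources=["historic applications"],
    business_purpose="shortlist candidates for recruiter review",
    lawful_basis="legitimate interests",
    dpia_required=True,
    hosting="vendor",
    encryption_at_rest=True,
    automated_decisions_about_people=True,
    human_oversight_mechanism="recruiter reviews every shortlist",
    vendor_dpa_in_place=True,
    vendor_may_train_on_inputs=False,
)
```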

Step 5: Establish a Scalable AI Governance and Assessment Programme 

The final step is to build a scalable AI governance and assessment programme that mitigates risk and enables responsible innovation. The programme rests on five pillars, balancing traditional risk-management concerns against the opportunity cost of delaying deployment:

| Pillar | Purpose |
| --- | --- |
| Clear policies and principles | To set expectations and provide resources to the workforce on how to interact with AI at all stages of the lifecycle. |
| Cross-functional committees | To ensure collaboration, sign-off and shared responsibility across teams (legal, engineering, product, risk). |
| Data mapping and AI inventories | To know where AI systems are, what data they are touching, and where they sit in their lifecycle (training vs deployment). A minimal inventory sketch follows this table. |
| Assessments | To flag and remediate high-priority risks, such as algorithmic bias, explainability issues and lack of human oversight, before deployment. |
| Training and awareness | To promote understanding of AI risks, individual responsibilities and adherence to ethical policies. |
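To make the data-mapping and assessment pillars concrete, an AI inventory can be as simple as one structured record per system, tracking what data it touches, where it sits in its lifecycle, and which assessment risks remain open. A minimal sketch, with invented systems and illustrative risk labels:

```python
# Minimal sketch of an AI inventory with lifecycle stage and open assessment
# risks. System names, fields and risk labels are illustrative only.

inventory = [
    {
        "system": "resume-screener",
        "data_touched": ["applicant CVs", "contact details"],
        "lifecycle_stage": "deployment",     # "training" or "deployment"
        "open_risks": ["algorithmic bias"],  # flagged by pre-deployment assessment
    },
    {
        "system": "support-chatbot",
        "data_touched": ["customer queries"],
        "lifecycle_stage": "training",
        "open_risks": [],
    },
]

def needs_remediation(inventory: list[dict]) -> list[dict]:
    """Return systems with unresolved assessment risks."""
    return [entry for entry in inventory if entry["open_risks"]]

for entry in needs_remediation(inventory):
    print(f'{entry["system"]} ({entry["lifecycle_stage"]}): '
          f'remediate {", ".join(entry["open_risks"])}')
```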
