AI and Your Company: A Privacy Roadmap (Part 2)

Published by Alisha McKerron Heese on 16 October 2025

What further steps should we privacy professionals take to achieve AI privacy, now that we have a structured technical understanding of AI and the consequent privacy challenges (see AI and Your Company: A Privacy Roadmap (Part 1))?

Step 1: Understand the Regulatory Gaps and Categorise AI Law

The starting point is recognising that traditional privacy frameworks, grounded in principles like transparency, fairness, data minimisation, meaningful human review, and data subject rights, weren’t built for the unique challenges posed by AI. Specifically: (i) the black box nature of many AI systems makes it incredibly difficult to understand how decisions are reached, challenging transparency and explainability; (ii) biased data or algorithms can produce systemic and difficult-to-detect discriminatory outcomes, challenging fairness; (iii) AI systems often require enormous amounts of data to train and operate, challenging the principle of data minimisation; (iv) the speed and complexity of AI decision-making can hinder individuals’ ability to interpret and oversee outputs; and (v) dynamic data flows make it difficult for individuals to understand, access and exercise their rights, e.g. rectification or erasure. There are also gaps around deepfakes, harassment and scams.

Given these limitations, AI regulatory requirements generally fall into five categories. Understanding this landscape is crucial for strategic compliance:

  • Comprehensive regulations: broad laws covering all AI systems based on their risk level, with general obligations for development and deployment. Example: the EU AI Act with its accompanying Codes and Guidelines.
  • Sector-specific regulations: rules which build on existing data handling rules but add AI-specific requirements to existing industry-specific legal frameworks. Examples: healthcare, finance and employment.
  • Generative AI focused regulations: specific rules for large language models (LLMs) and other generative AI, addressing issues like copyright, content labelling and model safety. Examples: California AB 2013; see the Code mentioned above.
  • Data/privacy-centric AI provisions: often amendments or specific sections within broader privacy laws that address automated decision-making, profiling and algorithmic bias. Example: GDPR Article 22.
  • Ethical AI guidelines and voluntary frameworks: non-binding recommendations and best practices, often promoting privacy by design and fairness. Examples: NIST (USA), ISO (international) and the OECD AI Principles.

Step 2: Deep Dive into Key Global Frameworks

Once the landscape is categorised, privacy professionals must familiarise themselves with the specific legal texts that most impact the business’s operations and geographic scope.

  • EU AI Act: a risk-based framework that prioritises transparency, accountability and ethical AI development. Key requirement: strict compliance measures for “high-risk” AI systems.
  • Australian National Framework for the Assurance of Artificial Intelligence in Government: sets the foundations for a nationally consistent approach to AI assurance in government. Key requirement: clear expectations and consistency for partners in public-sector AI use.
  • California Generative Artificial Intelligence: Training Data Transparency Act (AB 2013): requires transparency about the data sets used to train models. Key requirement: documentation of copyrighted material used in training data.
  • Colorado Artificial Intelligence Act: aims to protect individuals from risks associated with algorithmic discrimination. Key requirement: AI assessments to prevent discrimination.
  • New York City Local Law 144: requires employers who use AI for hiring to subject those systems to regular bias audits. Key requirement: mandatory, independent bias audits for employment screening tools.

Step 3: Internally Define Core Principles

When faced with such a diverse array of AI laws, some broad, some highly specific, and with no global agreement on a universal account of the issues that affect stakeholders, a helpful way of prioritising your compliance efforts is to obtain executive agreement on Core AI Governance Principles tailored to your organisation. These principles act as the internal “constitution” for AI use, providing clarity when facing diverse, sometimes conflicting, global regulations. HP’s AI governance principles are a good example.

Step 4: Operationalise: Foster Collaboration Across Teams and Assign Roles

If we are going to translate legal requirements into actionable technical and operational constraints, we need crucial input from various internal stakeholders across the AI system’s lifecycle. The agreed-upon core principles need to be assigned to governance teams that already exist within your organisation, and there must be agreement as to who will become the expert or lead on each topic.

  • Data Science / Engineering: technical architecture and mechanics, i.e. detailed information on the model type (e.g. deep learning, classic ML), the training data sources and quantity, the algorithms used, and how outputs are generated (the system’s “logic”). Why it’s needed: to assess explainability, identify high-risk processing, and determine the technical feasibility of data subject rights (e.g. erasure, correction).
  • Product Management: use case and purpose, i.e. the specific business goal of the AI system, the intended outcomes for users, the expected data flow (input/output), and the target user base. Why it’s needed: to establish the lawful basis for processing, confirm the system complies with purpose limitation, and scope the required Data Protection Impact Assessment (DPIA).
  • IT / Security: security controls and environment, i.e. details on data access controls, encryption methods, data retention policies, and whether the AI is hosted internally, by a vendor, or in a public cloud. Why it’s needed: to advise on security by design, manage risks like model inversion attacks, and ensure contractual security standards for vendors are met.
  • Business Units / HR: real-world impact and governance, i.e. how the AI is used in practice (e.g. hiring, profiling, customer service) and the company’s overall risk appetite and ethical principles. Why it’s needed: to assess potential algorithmic bias leading to discrimination and to ensure human oversight mechanisms are in place for automated decisions.
  • Procurement / Vendor Management: third-party contracts, i.e. terms of service, data processing agreements (DPAs), and assurances from AI tool vendors regarding their use of the data (especially with generative AI). Why it’s needed: to determine the company’s and vendor’s Controller/Processor roles and manage third-party data risks (e.g. data input being used for vendor model training).

Step 5: Establish a Scalable AI Governance and Assessment Programme 

The final step is to build a scalable AI governance and assessment programme to mitigate risk and enable responsible innovation. The programme rests on five pillars, balancing traditional risk-management concerns against the opportunity cost of delaying deployment:

  • Clear policies and principles: to set expectations and provide the workforce with resources on how to interact with AI at all stages of the lifecycle.
  • Cross-functional committees: to ensure collaboration, sign-off and shared responsibility across teams (legal, engineering, product, risk).
  • Data mapping and AI inventories: to know where AI systems are, what data they touch, and where they sit in their lifecycle (training vs deployment).
  • Assessments: to flag and remediate high-priority risks, such as algorithmic bias, explainability issues and lack of human oversight, before deployment.
  • Training and awareness: to promote understanding of AI risks, individual responsibilities and adherence to ethical policies.

AI and Your Company: A Privacy Roadmap (Part 1)

Published by Alisha McKerron on 24 September 2025 and revised on 13 October

It seems there is no avoiding artificial intelligence (AI) these days. The topic comes up constantly in the news, eye-watering amounts of money are being invested in AI systems, and it increasingly touches our lives via products we use at home, on the go, at work, in commerce, and in healthcare. Some say it may make the world’s economic growth explode! So how should we privacy lawyers tackle compliance issues relating to the use of AI in all its forms? A structured technical understanding of AI and the resulting privacy challenges is a good start.

Step 1: Differentiate True AI from “Smart” AI

As a first step, it is important to recognise when an AI system is actually involved. Many seemingly “smart” products are commonly assumed to be powered by AI when they in fact rely on older, simpler technology. This is often because the term “AI” is used as a marketing buzzword to suggest a level of intelligence and adaptability that isn’t there. Take chatbots as an example:

Rule-based chatbots work on simple “if/then” logic. If you type a specific question like “What are your opening hours?”, the bot is programmed to recognise the exact phrase and provide a pre-written answer. If you ask the same question in a slightly different way (e.g., “Are you open today?”), the bot may fail to understand and simply respond with “I’m sorry, I don’t understand.” This is a rule-based system, not a learning AI.
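As a minimal sketch (the phrases and canned answers are hypothetical), the entire “brain” of such a bot can be a lookup table:

```typescript
// Minimal sketch of a rule-based chatbot: exact-match lookup, no learning.
// The phrases and answers below are made-up examples.
const cannedReplies: Record<string, string> = {
  "what are your opening hours?": "We are open 9am-5pm, Monday to Friday.",
  "where are you located?": "123 Example Street.",
};

function reply(userInput: string): string {
  const key = userInput.trim().toLowerCase();
  // If the exact phrase is not in the table, the bot simply fails.
  return cannedReplies[key] ?? "I'm sorry, I don't understand.";
}

console.log(reply("What are your opening hours?")); // matches: canned answer
console.log(reply("Are you open today?"));          // no rule: falls through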

AI-powered chatbots, on the other hand, employ AI components to process user input. The system analyses a user query by identifying the user’s intent (their goal) and extracting entities (key pieces of information), even with variations in phrasing, typos, or colloquialisms. Instead of relying on a limited, pre-written library of responses, it can generate novel, human-like text on the fly. It can simulate human conversation, but it is based on complex computational and statistical models rather than actual understanding or thought.

Step 2: Grasp the Mechanics of Core AI Models

As a next step it is important to have a basic understanding of the core models that power AI systems. They include machine learning (ML) models, deep learning (DL) models, and large language models (LLMs). 

ML is a broad subset of AI where systems learn from data to make predictions or decisions without being explicitly programmed. These models are effective for well-defined tasks with structured data. There are three main types of ML models based on how they learn: (i) Supervised learning models which are trained on labelled datasets, where the correct answer is provided to the model during training. They’re commonly used for tasks like email spam detection or classifying accounts by type; (ii) Unsupervised learning models which find patterns in unlabelled data without human intervention. They’re used for things like customer segmentation or grouping similar data points; and (iii) Reinforcement learning models which learn through a trial-and-error process, receiving rewards for good behaviours and penalties for bad behaviours. This is often used for teaching robots or AI to play games.
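To make “learning from labelled data” concrete, here is a toy supervised-learning sketch; the features, training examples and nearest-centroid rule are all illustrative simplifications, not how production spam filters actually work:

```typescript
// Toy supervised learning: a nearest-centroid spam classifier.
// Each email is reduced to two hand-picked features; the labels
// ("spam"/"ham") are supplied during training, which is what makes
// the learning "supervised". All data here is made up.
type Label = "spam" | "ham";
type Example = { features: [number, number]; label: Label };

const training: Example[] = [
  { features: [8, 1], label: "spam" }, // many exclamation marks, sender unknown
  { features: [7, 0], label: "spam" },
  { features: [1, 5], label: "ham" },  // few exclamation marks, sender well known
  { features: [0, 4], label: "ham" },
];

// "Training" here is just computing the mean feature vector per label.
function centroid(label: Label): [number, number] {
  const rows = training.filter((e) => e.label === label);
  const sum = rows.reduce<[number, number]>(
    (acc, e) => [acc[0] + e.features[0], acc[1] + e.features[1]],
    [0, 0],
  );
  return [sum[0] / rows.length, sum[1] / rows.length];
}

// Prediction: assign the label whose centroid is closest.
function classify(features: [number, number]): Label {
  const dist = (c: [number, number]) =>
    Math.hypot(features[0] - c[0], features[1] - c[1]);
  return dist(centroid("spam")) < dist(centroid("ham")) ? "spam" : "ham";
}

console.log(classify([6, 0])); // "spam": close to the spam centroid
console.log(classify([1, 6])); // "ham"
```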

Deep learning is an advanced subset of ML that uses artificial neural networks with multiple layers to process complex, unstructured data like images, video, and text. The layered structure allows them to automatically learn complex features from data with minimal human intervention. Two common types of network are: (i) Convolutional neural networks (CNNs), a specialised type of deep learning algorithm particularly well suited to analysing visual data, used in tasks like image classification, object detection, and facial recognition; and (ii) Recurrent neural networks (RNNs), which are designed for sequential data, like time series or natural language, because they have a form of “memory” that allows them to process information in sequence.

LLMs are a cutting-edge type of deep learning model that are pre-trained on vast amounts of text data from the internet. They use a transformer architecture which allows them to understand context and the relationships between words across a text. LLMs are incredibly flexible and are primarily used for: (i) Generative AI which creates new content such as text, code, or images; and (ii) Natural Language Processing (NLP) for “understanding”, summarizing, translating, and classifying text.

Step 3: Deconstruct the AI System Architecture

Before assessing risk, you must understand the individual components of the AI system your organization uses and where the intelligence resides.

Continuing with the AI chatbot example, the components would include: a user interface, a natural language processing (NLP) engine, a dialogue manager, a knowledge base/database, and a natural language generation (NLG) engine. The user interface (the part of the system the user sees, such as a chat window or a voice interface) and the knowledge base/database (where the chatbot’s information is stored) do not use AI. But the other components do, or may do.

The NLP engine, which interprets human language by identifying the user’s intent (their goal) and extracting key entities (important pieces of information) from a query, uses ML. The NLG engine, which generates a human-like response from the data provided by the other components, uses DL and an LLM. The dialogue manager, which uses the information from the NLP engine to decide how the chatbot should respond, may or may not use AI, depending on how sophisticated it is.
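To illustrate the NLP engine’s input/output contract, here is a deliberately naive sketch; real NLP engines use trained ML models rather than keyword rules, and the intent names and patterns below are hypothetical:

```typescript
// Sketch of the NLP engine's job: turn free text into an intent plus entities.
// A real engine learns this mapping from data; this regex version only
// illustrates the shape of the output.
type ParsedQuery = { intent: string; entities: Record<string, string> };

function parse(text: string): ParsedQuery {
  const t = text.toLowerCase();
  if (/\b(open|hours|closing)\b/.test(t)) {
    // Pull out a day-of-week entity if one is mentioned.
    const day = t.match(/\b(monday|tuesday|wednesday|thursday|friday|saturday|sunday|today|tomorrow)\b/);
    return { intent: "opening_hours", entities: day ? { day: day[1] } : {} };
  }
  if (/\b(order|parcel|delivery)\b/.test(t)) {
    const ref = t.match(/\b([a-z]{2}\d{6})\b/); // e.g. a reference like "ab123456"
    return { intent: "order_status", entities: ref ? { orderRef: ref[1] } : {} };
  }
  return { intent: "unknown", entities: {} };
}

console.log(parse("Are you open today?"));
// -> { intent: "opening_hours", entities: { day: "today" } }
```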

Step 4: Map AI Model Type to GDPR Remediation Difficulty

A critical insight for privacy lawyers is that the architectural complexity of a model, coupled with its data processing methodology, determines the spectrum of GDPR compliance risk and the difficulty of remediation.

  • Simpler ML models: trained on smaller, proprietary and often labelled data. The model focuses on content, not identity, allowing anonymisation/pseudonymisation and training within a secure, controlled environment. Remediation is manageable through data minimisation and strict access controls.
  • Massive LLMs: trained on a huge, diverse internet corpus that inevitably contains personal data. This creates a risk of verbatim memorisation and potential leakage of sensitive training data via prompts, violating the GDPR’s core principles of purpose limitation and storage limitation. Honouring the right to erasure and establishing clear data lineage are nearly impossible.

Step 5: Assess AI Delivery Method for Data Control

The company’s chosen method of delivery dictates its level of control and visibility over customer data, which is essential for defining the Controller/Processor relationship.

  • Software-as-a-Service (SaaS): low control and visibility; the business (Data Controller) relies on the provider (Data Processor) for security and data handling within the provider’s system. Primary compliance risks: data security, international data transfers, and the provider potentially using customer data to train public LLMs; a strong DPA is required.
  • API integration: medium control and visibility; the business has more direct control over the data sent via the API call. Primary compliance risks: the business must practise data minimisation (redacting personal data before sending; see the sketch after this list) and manage third-party data transfers.
  • On-premises deployment: high control and visibility; the business is the sole Data Controller and Processor, and data never leaves the company’s data centre. This is the most GDPR-compliant option; the main residual risk is internal security protocol failures.
  • Embedded AI in hardware: high control and visibility (privacy by design); processing occurs locally on the device’s chip and data is minimised before any cloud transfer. Primary compliance risk: the business must be transparent about any data collected and sent to the cloud, requiring explicit consent for cloud processing.
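As a concrete illustration of the data minimisation point for API integration, here is a hedged sketch of redacting obvious personal data before a prompt leaves the company; the vendor endpoint is hypothetical and the regexes are deliberately naive, so production redaction would need far more robust detection:

```typescript
// Sketch of data minimisation for the API-integration delivery model:
// strip obvious personal data before the prompt is sent to the vendor.
// The endpoint URL and response shape are hypothetical.
function redact(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]") // email addresses
    .replace(/\+?\d[\d\s-]{7,}\d/g, "[PHONE]");     // phone-like numbers
}

async function askVendorModel(prompt: string): Promise<string> {
  const response = await fetch("https://api.vendor.example/v1/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: redact(prompt) }), // minimised payload
  });
  const data = (await response.json()) as { completion: string };
  return data.completion;
}
```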

Step 6: Familiarise Yourself with Applicable Laws

The final step is to stay current with the rapidly evolving regulatory landscape. New and revised local and extraterritorial laws relating to AI, such as the EU AI Act, are emerging constantly. Prioritising compliance efforts based on your company’s sector, jurisdiction and role (Controller/Processor) is vital for building the necessary governance framework, a crucial topic I shall tackle in my next blog post.

EU Supervisory Authorities

Published by Alisha McKerron

In celebration of today’s Data Privacy Day, and in the spirit of empowering individuals and businesses to respect privacy, safeguard data and enable trust, I have compiled a list of the EU supervisory authorities (a.k.a. data protection authorities) in the 27 EU member states, with links to their websites:

  1. Austria Österreichische Datenschutzbehörde
  2. Belgium Autorité de protection des données
  3. Bulgaria Commission for Personal Data Protection
  4. Croatia Croatian Personal Data Protection Agency
  5. Cyprus Commissioner for Personal Data
  6. Czech Republic The Office for Personal Data Protection
  7. Denmark Datatilsynet
  8. Estonia Estonian Data Protection Inspectorate
  9. Finland Office of the Data Protection Ombudsman
  10. France Commission Nationale de l’Informatique et des Libertés – CNIL
  11. Germany splits complaints amongst a number of different agencies; to sort out which one applies use: Die Bundesbeauftragte für den Datenschutz und die Informationsfreiheit
  12. Greece Hellenic Data Protection Authority
  13. Hungary National Authority for Data Protection and Freedom of Information
  14. Ireland Data Protection Commissioner
  15. Italy Garante per la protezione dei dati personali
  16. Latvia Data State Inspectorate
  17. Lithuania State Data Protection Inspectorate
  18. Luxembourg Commission Nationale pour la Protection des Données
  19. Malta Information and Data Protection Commissioner
  20. Netherlands Autoriteit Persoonsgegevens
  21. Poland The Bureau of the Inspector General for the Protection of Personal Data – GIODO
  22. Portugal Comissão Nacional de Protecção de Dados – CNPD
  23. Romania The National Supervisory Authority for Personal Data Processing
  24. Slovakia Office for Personal Data Protection of the Slovak Republic
  25. Slovenia Information Commissioner
  26. Spain Agencia de Protección de Datos
  27. Sweden Swedish Authority for Privacy Protection (IMY)

Privacy authorities in non-EU member states which, together with the 27 EU member states, make up the European Economic Area include:

  1. Iceland Icelandic Data Protection Agency
  2. Liechtenstein Datenschutzstelle
  3. Norway Datatilsynet

What may also be of interest is the following list of countries, which have been recognised by the European Commission as providing adequate privacy protection:

  1. Andorra
  2. Argentina
  3. Canada
  4. Faroe Islands
  5. Guernsey
  6. Israel
  7. Isle of Man
  8. Japan
  9. Jersey
  10. New Zealand
  11. Switzerland
  12. Uruguay

How does mobile in-app advertising contribute to our web profile and how can we guard against it?

Published by Alisha McKerron

This article is the third in a series which considers how we come to have web activity profiles. To recap: in my first and second articles, we learnt that third party cookies enable our web browsing to be tracked and that sets of data related to our device (data fingerprints) can be used to do this too. The discussion thus far has been in the context of our desktops and surfing the web. But what about our mobile devices? With mobile device traffic accounting for over half (51.51%) of global online traffic, and executives at Apple and Google unveiling on-device features to help people monitor and restrict how much time they spend on their phones, are we properly considering how applications may be adding to our already growing profile? More specifically, are we considering the privacy implications of the seemingly free apps which we happily download on our mobile phones?

What is an in-app? 

It may not surprise you that there is no such thing as a free lunch; the developers who wrote the mobile apps need to eat too. Consequently, some apps carry ads (in-app ads) from which app developers derive revenue. When we download such an app on our mobile device and agree to its privacy terms, we enable our app usage to be tracked and our profile to be enhanced. This is made possible by mobile advertising IDs, or MAIDs for short.

How do MAIDs work?

MAIDs help app developers identify who is using their app, via an API request to the mobile device’s operating system. Each of the ‘big’ mobile platforms has its own: Google’s version is known as the GAID (Google Advertising ID) on the Android operating system, and Apple’s is called the IDFA (Identifier for Advertisers) on the Apple iOS operating system. Both operate in a pseudonymous way and can be reset or zeroed out (i.e. a dummy ID of all zeros is returned).
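Conceptually, ad code treats the MAID roughly as sketched below; the ad-network endpoint and request shape are hypothetical (not a real SDK API), and the all-zeros check mirrors what a reset or opt-out produces:

```typescript
// Conceptual sketch only: how ad-serving code might treat a MAID.
// A zeroed-out ID is what the OS returns after a reset or opt-out.
const ZEROED_MAID = "00000000-0000-0000-0000-000000000000";

function buildAdRequest(maid: string, appId: string): URL {
  const url = new URL("https://ads.example.net/serve"); // hypothetical endpoint
  url.searchParams.set("app", appId);
  if (maid !== ZEROED_MAID) {
    // A live MAID lets the network link this impression to the user's
    // profile and serve a personalised (better-paying) ad.
    url.searchParams.set("maid", maid);
  }
  // With no usable MAID, the network can only serve a generic ad.
  return url;
}

console.log(buildAdRequest(ZEROED_MAID, "com.example.game").toString());
```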

A MAID’s value lies in identifying a user, not a device. Combined with a large pool of data, MAIDs can be used to match up someone’s mobile habits with their desktop, connected TV, and even their offline habits, thereby building a fuller picture of who they are and how to market to them. For example, if an app user has a Facebook account, has installed the Facebook app on their mobile device and has downloaded various other apps, Facebook will be able to connect the identity of its account holder to the mobile device and start to track their app use, rather like third party cookies do on the web.

This is all made possible by ad tech mobile infrastructure, helped by software development kits (SDKs) that developers embed in their app code, sometimes with little understanding of how they work. For example, the AdMob SDK uses Google’s data and the MAID to display ads in developers’ apps that are personalised to the user (because the MAID reveals who the user is), instead of generic ones. As personalised ads generally perform better, the developer makes more money. Unsurprisingly, developers wishing to grow their user numbers will use more than one SDK; we can tell how many SDKs are embedded in an app by how many privacy notices it has.

Everyone wins: with the use of a mobile advertising platform, developers can offer up ad requests to brands, and brands, with the help of publishers, can increase the visibility of their products through targeted marketing. If the targeting is accurate (i.e. users engage with the ads and product is sold), everyone makes money. Perhaps this is one of many reasons why Chrome is able to phase out support for third party cookies. However, with Apple’s iOS 14 update, which requires developers to ask permission before accessing the IDFA, MAIDs may become less useful.

How can we protect ourselves?

With just a few taps on either an Android or iOS device, we can disrupt the profiles ad networks have built about us. On Android, go to Settings > Privacy > Advanced > Ads and toggle on Opt out of Ads Personalization. On iOS, navigate to Settings > Privacy > Advertising and toggle on Limit Ad Tracking. If we don’t want to stop ad tracking altogether (we’re getting ads anyway, so they might as well be relevant), we can navigate to those same screens and tap Reset advertising ID on Android or Reset Advertising Identifier on iOS to cycle our ad ID, essentially forcing advertisers to start a new profile on us. Android shows us our (very long) alphanumeric ad ID at the bottom of this screen, and when we initiate a reset we can watch it change. A clean slate never hurts.

But how effective is this really? While Apple and Google have increasingly limited what apps can collect for advertising purposes, other hardcoded IDs still exist, such as device serial numbers and permanent sequences like your Wi-Fi network’s MAC address, and some apps have legitimate reasons to collect them.

Perhaps the answer lies in pressing industry to support consumers’ expectation that they can tell any and all companies not to track them when they are not intentionally choosing to interact with them.

What ELSE makes it possible for us to have a web activity profile and how can we guard against it?

Published by Alisha McKerron

In my last article, “What makes it possible for us to have a web activity profile and how can we guard against it?”, we learnt that third party cookies enable our internet browsing to be tracked and that there are various ways we can block them. However, there are other methods of tracking that can be used, for example browser fingerprinting techniques.

What are browser fingerprinting techniques?

Just as our unique fingerprints can be used to identify us, so can a set of data related to our device, from the hardware to the operating system to the browser and its configuration. We may be surprised, if not dismissive, that such information has any value, since the devices and software we use are pretty common. But consider that every time we visit a webpage our browser communicates with the server hosting that page; consider the variable content (text, pictures, logos, live feeds etc.) of each webpage and the settings our computer and hardware need to render it; and consider that combining all of this information into one set of data can create reasonably effective identifiers. Adding more data to the mix identifies increasingly specific groups of users: while 10 people may share the same browser, only 5 might share the same browser and operating system, only 3 the same browser, operating system, and screen size, and so on, until ideally there is enough data to uniquely identify one user, because nobody else shares the same device or browser-specific attributes.

Examples of this kind of data include plug-ins, time zone, screen size, system fonts, whether cookies are enabled, language, ad blocker used, device memory, type of browser (e.g. Firefox, Chrome, Safari), screen orientation and display aspect ratio.
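As a sketch of how a script might combine such attributes into one identifier, the snippet below uses standard browser APIs and a toy hash; real trackers gather many more signals (fonts, plug-ins, canvas output) and use stronger hashes:

```typescript
// Browser-side sketch: combine a handful of commonly available attributes
// into one fingerprint string, then collapse it to a short identifier.
function collectAttributes(): string {
  return [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}`,
    String(screen.colorDepth),
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
  ].join("|");
}

// Toy 32-bit hash, just to show how the attributes collapse to an ID.
function hash(s: string): string {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (Math.imul(31, h) + s.charCodeAt(i)) | 0;
  }
  return (h >>> 0).toString(16);
}

console.log("fingerprint:", hash(collectAttributes()));
```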

So how is this data collected? HTTP, through a series of requests and responses, allows websites (or, more correctly, the servers serving web pages) to interact with our browser and retrieve information in the process of serving up a web page. How this is done is discussed in my last article. The information our browser receives consists of so-called web resources (like HTML, CSS, and JavaScript files) that instruct our browser on what to render on our screen. Whereas HTML and CSS are languages that give structure and style to web pages, JavaScript gives web pages an interactive element that engages users. It is the existence of JavaScript that is most relevant when it comes to digital fingerprinting.

What is JavaScript?

JavaScript is a programming language that allows web designers to implement complex features on web pages. Every time a web page does more than just sit there and display static information for us to look at (displaying timely content updates, interactive maps, animated 2D/3D graphics, scrolling video, jukeboxes, etc.), we can bet that JavaScript is probably involved. It is widely used across the web because it can create rich interfaces, plays nicely with other languages, can be used in a huge variety of applications, and is relatively simple to learn and implement.

What is relevant is that it is designed to run in our browser (i.e. client side, as opposed to server side). JavaScript files are embedded in HTML documents which are served to our browser. Our browser creates a representation of the HTML document, called the Document Object Model (DOM), and JavaScript can manipulate the elements in the DOM to make a web application responsive to the user. This can make the webpage quite a lot faster (unless outside resources are required) and can reduce demand on website servers.

Also relevant is that, since the mid 2000s, browsers have enabled JavaScript by default, without our prior explicit permission. This is because these scripts are considered safe: they cannot be used to make file-destroying viruses. And when our browser loads a webpage, it runs the page’s scripts inside an isolated browser tab, which prevents them from interacting with the software on our computer. But what about the unintended consequences of JavaScript?

Unintended consequences of browser fingerprinting 

It is important to point out that just running JavaScript in our browser does not in itself expose any identifying information. However, because the code executes on our computer, websites interested in identifying us can exploit certain JavaScript features for fingerprinting. They do this by writing JavaScript that probes the features our browser provides and detects subtle differences in how different browser and hardware configurations interpret and run the same code.
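One widely documented example of such a technique is canvas fingerprinting: identical drawing commands are rasterised slightly differently by different GPU, driver and font stacks, so the resulting pixels become an identifier. A browser-side sketch using standard canvas APIs:

```typescript
// Sketch of canvas fingerprinting: draw fixed shapes and text, then read
// back the exact pixels this machine produced. Subtle rendering differences
// across hardware/software stacks make the output distinctive.
function canvasFingerprint(): string {
  const canvas = document.createElement("canvas");
  canvas.width = 200;
  canvas.height = 50;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "no-canvas";
  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(10, 10, 100, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint test", 15, 15);
  // The data URL encodes the rendered pixels; hashing it yields an ID.
  return canvas.toDataURL();
}

console.log(canvasFingerprint().slice(0, 40), "...");
```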

Additionally, although JavaScript is not an insecure programming language, code bugs or improper implementation can create backdoors which attackers can exploit. This is explained more fully in this article. Should we be concerned about this?

Uses of fingerprint data

Like cookies, browser fingerprinting can benefit us, for example by improving security and allowing us to receive services that are useful to us. But it benefits third parties too, such as the advertising industry, with a 2020 Q2 global digital ad spend of $614 billion. Since it does so without our knowledge and at our expense, it is a serious threat to our online privacy. How can we protect ourselves against browser fingerprinting?

Protecting ourselves from browser fingerprinting

The most drastic measure we can take is to turn JavaScript off completely in our browsers. This will stop any JavaScript, including fingerprinting scripts, from running. But it will make everyday browsing more difficult; most websites rely on JavaScript, and very few popular websites will work as well without it.

Perhaps less drastic, but requiring some input on our side, would be to add plugins or browser extensions to our browser that control when we wish to turn JavaScript on or off.

Conclusion

I don’t think anyone will disagree that it’s important to gain an understanding of what makes it possible for us to have a web activity profile. Being careful about what JavaScript we allow our browser to run can go a long way in protecting our privacy. 

What makes it possible for us to have a web activity profile and how can we guard against it?


Published by Alisha McKerron

Most of us will be aware of web profiling, with the advent of the General Data Protection Regulation (GDPR) and some shocking data scandals, the most infamous being Cambridge Analytica. We have all heard how companies like Facebook and Google can use cookies to follow us around the internet and keep track of what we are interested in. They do this to serve targeted advertising or, in some cases, even to share that data with others without our permission. We will also be aware of cookie banners and privacy notices which disclose, amongst other things, how our personal data is collected and with whom it is shared. But how many of us actually read these things? I suspect not many, given how few of us read websites’ terms of service. (It’s worth looking at Terms of Service; Didn’t Read, if you are one of them.) Perhaps we might feel and behave differently if we had a better understanding of one of the many tools that enable tracking, namely cookies. What are they, and why do they exist?

First party cookies

The cookie, a small and often encrypted text file, was invented in 1994 by an employee of Netscape Communications, the company behind the Netscape browser. At the time, Netscape was trying to help websites become viable commercial enterprises. One of its employees, Lou Montulli, was creating an online shop and didn’t want to store the contents of the shopping cart on the website’s server. So he stored it in the user’s browser until the user made their purchase. This proved to be a useful solution, as it meant that the server did not need to spend time and money keeping track of everyone’s shopping cart. It also proved useful in other instances, for example in identifying users.

Simply Explained describes how cookies work in their YouTube clip:

“Let’s imagine we have a website that requires people to log in to see the content of the site. When you log in, your browser sends your username and password to the server, which verifies them and, if everything checks out, sends you the requested content. However, there is a small caveat. The HTTP protocol, which is used to browse the internet, is stateless. That means if you make another request to the same server, it has forgotten who you are and will ask you to log in again. Can you imagine how time consuming it would be to browse around a site like Facebook and have to log in again every time you click on something? So cookies to the rescue! You still log into the website, and the server still validates your credentials. If everything checks out, however, the server not only responds with the content but also sends a cookie to your browser. The cookie is then stored on your computer and submitted to the server with every request you make to that website. The cookie contains a unique identifier that allows the server to “remember” who you are and keep you logged in.”

As you can see, this type of cookie (known as a first party cookie) is helpful and makes our lives easier.

Browsers

If we are interested in getting under the hood of our web browser, then cookies can be explained as follows. When we type in the HTTP address of an online shop we wish to visit, that web page is not actually stored in its entirety on a server, ready and waiting to be delivered. In fact, each web page we request is individually created in response to our request. Our web browser submits a request message to the server hosting the website in order to retrieve the webpage. The Hypertext Transfer Protocol dictates that this request message be submitted in a set way: first comes a method (e.g. GET), which indicates a desired action to be performed on the identified web resource; next the path of the web resource (/ ….); and then the request header fields. Likewise, the protocol dictates that the server’s response be structured in a set way: an HTTP status code; response header fields; and an optional message body which is used to carry web resources. The relevance of all this is to explain how, and at what stage, cookies are passed from web browser to server and vice versa.

If we have not visited the website before (and therefore have never received cookies from it) and the server wants our browser to store its cookie or cookies, it includes them in an HTTP response header called Set-Cookie. If we have visited the website before, our browser checks whether it holds unexpired cookies for the site and, if it finds any, sends them in a request header called Cookie. HTTP headers can be viewed in web development tools that come as browser add-ons or as built-in features of web browsers.
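A minimal sketch of this exchange, using Node’s built-in http module (the port and cookie name are arbitrary choices for illustration):

```typescript
import http from "node:http";
import crypto from "node:crypto";

// Sketch of the Set-Cookie / Cookie exchange described above. Run it and
// visit http://localhost:8080 twice: the first response sets the cookie,
// and the browser sends it back on the second request.
http.createServer((req, res) => {
  const cookieHeader = req.headers.cookie; // e.g. "session=ab12..."
  if (cookieHeader?.includes("session=")) {
    res.end(`Welcome back! Your browser sent: ${cookieHeader}`);
  } else {
    // First visit: hand the browser a unique identifier to store.
    const id = crypto.randomBytes(8).toString("hex");
    res.setHeader("Set-Cookie", `session=${id}; HttpOnly; Path=/`);
    res.end("First visit: a session cookie has been set.");
  }
}).listen(8080);
```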

Third party cookies

Cookies become a cause for concern when they are used by external servers which the website relies on to deliver content. Think about what we typically find on websites: images; media; links to YouTube, Twitter, and Facebook; advertisements; Facebook Like buttons etc. In order for our browser to serve up this content, it will send a request to a third-party website. When this happens, the external website might place a cookie (called a third party cookie) on our browser (or, to be more precise, it asks the browser to store the cookie). Our browser would then send the information contained in the cookie next time it made a request to that external site, helping that site remember who we are. With the help of the HTTP Referer header, a site loaded as a third-party resource will also know which (first-party) website we were visiting. This is not such good news, because the third party cookie is enabling our web browsing to be tracked.

Simply Explained goes on to explain how this works using Facebook as an example:

“Well, the whole process starts when you log in to Facebook. To remember that you’re logged in, Facebook stores a cookie on your computer, nothing unusual about that, many other sites do the same thing. This cookie is scoped, or bound to Facebook’s domain name, meaning that no one else besides facebook.com can read what’s in the cookie. Let’s now imagine that you browse away and you land on someone’s blog. The blog cannot read your Facebook cookie, and the scope prevents that. Facebook also can’t see that you’re on this blog. All is well. But let’s now assume that the owner of the blog places a Facebook like button on his website. To show this like button, your browser has to download some code from the Facebook servers, and when it’s talking to facebook.com, it sends along the cookie that Facebook set earlier. Facebook now knows who you are and that you visited this blog. I’m using Facebook as the example here, but this technique is used by many other companies to track you around the internet. The trick is simple: convince as many websites as possible to place some of your code on their sites. Facebook has it easy because a lot of people want a like or share button on their website. Google also has an easy job because many websites rely on its advertisement network or on Google Analytics. At this stage, cookies are getting out of hand.”

Unfortunately, the information sites can gather by tracking us around the web in this manner has proved to be quite lucrative. As a consequence, some websites have capitalised on third party cookies by embedding small digital image files, called tracking pixels, in web pages. The image could be as small as a single pixel, the same colour as the background, or completely transparent. Although we may not see the image, our web browser will automatically send a request to the external hosting server, and so the process described above is triggered.
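For illustration, the server side of a tracking pixel can be sketched in a few lines (the endpoint and port are hypothetical): serve a one-pixel GIF and log the Referer header and any cookie that rides along with the request:

```typescript
import http from "node:http";

// Sketch of a tracking-pixel server: respond with a 1x1 transparent GIF
// and log which page embedded it (Referer) plus any cookie sent along.
// Combined with a third-party cookie, requests arriving from many sites
// can be stitched into one browsing history.
const PIXEL = Buffer.from(
  "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
  "base64",
); // a valid 1x1 transparent GIF

http.createServer((req, res) => {
  console.log("seen on:", req.headers.referer, "cookie:", req.headers.cookie);
  res.writeHead(200, { "Content-Type": "image/gif" });
  res.end(PIXEL);
}).listen(8081);
```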

Guarding against third party cookies

How can we best protect ourselves? The first thing we can do is run a Panopticlick test to determine how good a job our web browser is doing in protecting us from tracking. If the results are not as good as we expected, we should consider installing a browser extension that blocks third party cookies, such as Privacy Badger or Ghostery. We could also switch to a browser with built-in protection, such as Firefox or Safari, or, if we wish to continue using our current browser, ensure that we have blocked third party cookies in our browser settings.

If we don’t want to do anything, the law is on our side. In Europe, we have the GDPR, which requires websites to be transparent about their use of cookies and to offer users simple ways to opt out. We’ve probably seen the annoying cookie banners asking for our permission. Next time we see one, we shouldn’t just click accept but look at what cookies the website wants to place on our computer and for what purpose. More than ever it is important that we get involved and, if necessary, enforce our rights, particularly since a new study by researchers at MIT, UCL and Aarhus University has revealed that most cookie consent pop-ups served to internet users in the EU are likely to be non-compliant. We must do this, if not for ourselves, then for the sake of other web users.

International transfers: what will be the effect of a no deal Brexit?

Published by Alisha McKerron on 22 August 2019

With a no deal Brexit looking like a genuine possibility on the 31st of October, it’s worth considering afresh its implications for cross-border data flows, from the point of view of EEA organisations (which will continue to be subject to the General Data Protection Regulation (GDPR)) and UK organisations (which will become subject to a UK version of the GDPR). The good news is that the UK government has done what it can to ease the process.

Personal data flowing into the UK from the EEA

For transfers of data into the UK, a no deal Brexit will mean that EEA organisations have to legitimise the flow of personal data into the UK. This is because the UK’s status will change (under GDPR) to that of a third country and, rather importantly, cross-border transfers to third countries are prohibited (without a lawful data transfer mechanism, that is)! In other words, the UK would become like any other non-EU country with respect to data transfers: EEA organisations would need a lawful data transfer mechanism (under art. 44, GDPR) to continue to transfer personal data.

UK organisations receiving personal data from EU organisations will therefore have to ask those EU organisations to use a suitable cross-border transfer mechanism.

If the UK is recognised as an “adequate” country (under art. 45(1), GDPR), the status quo could continue without having to implement any other transfer mechanism. But achieving adequacy status requires satisfying the EU Commission that the UK has an equivalent level of protection to that of the EU. This may take some time to determine because, although the UK has adopted the GDPR into its domestic legislation, it has far-reaching government surveillance powers which may adversely affect data subjects’ privacy rights. Until this issue has been resolved, EEA organisations will have to look to other transfer mechanisms.

EU Commission approved standard contractual clauses may be a suitable choice, as they are widely used for transfers around the world and could easily be introduced into existing documentation. However, their validity is currently being questioned in a case before the European Court of Justice (Schrems II); a final decision should come out around the end of this year.

A regulator-approved set of binding corporate rules (under art. 47, GDPR), binding a group of undertakings or a group of enterprises engaged in a joint economic activity, could be considered, but these require time and money to set up.

Needless to say, it will be up to EU organisations to decide which mechanism to use. The European Data Protection Board’s “Information note on data transfers under the GDPR in the event of no-deal Brexit” should help them make the correct decision. But what about data flows from the UK to the EU?

Personal data flowing out of the UK to the EEA

For transfers in the other direction, what was said above pretty much applies in reverse (albeit under the UK’s version of the GDPR, instead of the real thing). The status of EU member states (from the UK’s point of view) will change to that of ‘third countries’, and a data transfer mechanism will be required in order to continue transferring personal data. However, cross-border transfers will be easier because the UK has made it clear it intends to permit data to flow from the UK to EEA member states. It has also committed transitionally to recognising EEA member states and Gibraltar as “adequate”, so data transfers can continue as they currently do.

Personal data flowing out of the UK to countries that are not EEA member states

Transfers to third countries which are not EEA member states will stay the same too; the UK government will mirror the status quo of the GDPR in the EU by adopting the same approach as the EU. It will recognise the same list of countries as being “adequate”, recognise the standard contractual clauses approved by the European Commission and any binding corporate rules approved by supervisory authorities. Interestingly, the UK’s version of the GDPR will have extraterritorial jurisdiction and apply to the EEA! This is all explained in the UK government guidance note entitled “Amendments to the UK data protection law in the event the UK leaves the EU without a deal”. So what steps should UK organisations take to protect themselves?

What you should do

UK organisations need to assist their EEA stakeholders/organisations in assessing their exposure to cross-border transfers to the UK. Both parties should consider the necessity of those transfers: perhaps data flows could be minimised or even temporarily stopped, pending a favourable UK adequacy decision. If EEA stakeholders/organisations continue to transfer any personal data, they must use a suitable transfer mechanism under the GDPR. Whilst the outcome of the Schrems II case is pending, standard contractual clauses should be avoided even though they are approved.

Organisations in the UK have somewhat less cause for concern, since the UK has committed transitionally to recognising EEA member states and Gibraltar as “adequate” and so data transfers to the EEA member states can continue as they are. However UK organisations should review their documentation (for example, what their privacy notices and data processing agreements say about international transfers, since EEA transfers will now fall into this category) and maintain organisational awareness of the issue.

Aside from cross-border transfers, they should also consider whether they have to appoint a representative in an EEA member state under article 27 of the GDPR, another side effect of becoming a third country. The same question needs to be considered by EEA organisations in relation to the UK.

Cross Border Transfers: What should companies be doing pending the judgement of Schrems II?

Published by Alisha McKerron on 19 August 2019

International transfers

Under the General Data Protection Regulation (GDPR), we are not allowed to transfer personal data to countries outside the European Economic Area (EEA). If we do, we must use a lawful method of cross-border transfer (art. 44, GDPR), which is designed to ensure a level of protection equivalent to that in the EU.

This seems straightforward; it is merely a question of identifying what lawful methods of cross border transfers are available, and choosing the least onerous one. In reality, however, it is anything but, especially with Brexit looming and two important cases pending in the Court of Justice of the European Union (CJEU).

SCC and the EU-US Privacy Shield

Two popular methods of transfer are being challenged in the CJEU – namely, transfers on the basis of EU Commission approved standard contractual clauses (SCC) in the case of 311/18 (also known as Schrems II), and transfers on the basis of there being an adequate EU-US Privacy Shield, in the case of 511/18 La Quadrature du Net. (It’s worth noting that until either challenge is upheld, both methods continue to be valid).

La Quadrature du Net has been postponed, pending the outcome of the Schrems II case. A decision in the Schrems II case is unlikely before the end of 2019 or early 2020, although a hearing of Schrems II took place on 9 July this year. Whilst we wait for a decision, we would be foolish to ignore the fact that a successful challenge will put businesses in a hugely difficult and worrying position.

If SCC and the EU-US Privacy Shield are no longer valid

For starters, SCC and the Shield are widely used by businesses within the European Economic Area (EEA) to legitimise the transfer of personal data to countries outside the EEA. Alternative methods of transfer are not really suitable because they are either limited, expensive, take time to put in place, are not yet available or a combination of all of those things.

If either of these methods are struck down, there could be rather unpleasant consequences: the court could halt data flows outside the EU, third parties could claim for compensation, and possible GDPR revenue-based fines and regulatory sanctions could follow. Companies would also have to pay the cost of remedying the problem as soon as a solution was found.

You may be wondering why we could be placed in this situation, after using transfer methods which have, after all, been approved by the Commission. Shouldn’t data controllers or processors be found accountable only to the extent that they did not adhere to the SCC? Perhaps the CJEU will find that even if transfers to the U.S. are problematic, organizations do not have to stop using SCC or the Shield; instead, data protection authorities would have to suspend problematic data flows and the Commission would be asked to revise the SCC and reconsider the Shield.

However, this line of thinking ignores a central challenge being made in the Schrems II case, namely the failure of the SCC to provide EU citizens with meaningful redress against mass surveillance by US authorities.

This failure, according to DLA Piper, has given rise to the widely held expectation amongst privacy professionals that the CJEU will invalidate the SCC (which would be consistent with its approach in the earlier Schrems I case). Worse still, once a decision has been made by the CJEU, it will take effect immediately and apply retroactively!

What you should do

Accordingly, it is vital that you plan for the worst, particularly given that any infringement of the GDPR has the potential to attract a fine of up to 4% of an organisation’s annual worldwide turnover or €20,000,000, whichever is larger!

You should assess your exposure to cross-border transfers of data (by finding out to whom, where and on what basis you are transferring personal data). You should draw up an action plan: for example, consider stopping some types of cross-border transfers, or investigate alternative methods of transfer. Perhaps you could use data centres inside the EEA. You should discuss contingency plans internally and with suppliers.

However, the principle of safety in numbers might well still apply; you will certainly not be the only one to be affected, should either the SCC or the Shield be struck down by the CJEU. There may be a period of leniency, since there are no readily available alternatives for large-scale cross border transfers of personal data to outside the EEA. In any case, contingency planning should help you assess the impact of the CJEU’s decision and enable you to hit the ground running.

Useful Article: UK – Liability Limits for GDPR in Commercial Contracts – the Law and Recent Trends

Published by Alisha McKerron on 5 March 2019

In her article (listed in the Menu of this blog) entitled GDPR is Coming: 7 Steps Processors Need to Take to be Compliant (12 December 2017), Alisha sets out the mandatory provisions concerning data processors which must be inserted in data processing agreements (art. 28, GDPR). The consequences of contractual breaches or non-compliance with the GDPR are not discussed there in any detail.

This important topic is discussed in DLA Piper’s article (7 February 2019) UK: Liability Limits for GDPR in Commercial Contracts – the Law and Recent Trends which looks at how to allocate the risk and liability when negotiating commercial contracts. It considers:

  • Obligations: the source of liability;
  • Types of liability;
  • Limits of liability.

It concludes that:

“Limiting financial liability under GDPR has been made much more complex than under the Data Protection Act 1998, both because the nature of the obligations placed on both parties has changed and because the consequences of breaches are much more serious. Parties looking to limit their exposure should be realistic and not assume that it will be either possible or desirable to simply pass liability to the other party under the contract in all circumstances, instead, they will need to take a more balanced approach to liability, based on the terms of GDPR and who has caused the loss in question to arise.”

Useful article: reaching the end of your GDPR journey – what should you be thinking about now?

Published by Alisha McKerron on 27 February 2019

In its article GDPR nine months on | What should you be thinking about now?, Osborne Clarke lists nine items to consider:

  • Updates to existing policies and procedures
  • New policies or procedures
  • Supplier relationships
  • Privacy Impact Assessments
  • GDPR training refresh
  • Data transfers and no-deal Brexit
  • Security breaches and ICO enforcement
  • Compliance strategy
  • One year audit

This is a useful continuation of A GDPR Journey: Where to Start and What to do Next (listed in the Menu of this blog), depending on where you are on your GDPR journey.