EU Supervisory Authorities

Published by Alisha McKerron

In celebration of today’s Data Privacy Day and in the spirit of empowering individuals and businesses to respect privacy, safeguard data and enable trust, I have compiled a list of EU supervisory authorities (a.k.a. data protection authorities) in the 27 EU member states, with links to their websites:

  1. Austria Österreichische Datenschutzbehörde
  2. Belgium Autorité de protection des données
  3. Bulgaria Commission for Personal Data Protection
  4. Croatia Croatian Personal Data Protection Agency
  5. Cyprus Commissioner for Personal Data
  6. Czech Republic The Office for Personal Data Protection
  7. Denmark Datatilsynet
  8. Estonia Estonian Data Protection Inspectorate
  9. Finland Office of the Data Protection Ombudsman
  10. France Commission Nationale de l’Informatique et des Libertés – CNIL
  11. Germany splits complaints amongst a number of different agencies; to sort out which one applies use: Die Bundesbeauftragte für den Datenschutz und die Informationsfreiheit
  12. Greece Hellenic Data Protection Authority
  13. Hungary National Authority for Data Protection and Freedom of Information
  14. Ireland Data Protection Commissioner
  15. Italy Garante per la protezione dei dati personali
  16. Latvia Data State Inspectorate
  17. Lithuania State Data Protection Inspectorate
  18. Luxembourg Commission Nationale pour la Protection des Données
  19. Malta Information and Data Protection Commissioner
  20. Netherlands Autoriteit Persoonsgegevens
  21. Poland The Bureau of the Inspector General for the Protection of Personal Data – GIODO
  22. Portugal Comissão Nacional de Protecção de Dados – CNPD
  23. Romania The National Supervisory Authority for Personal Data Processing
  24. Slovakia Office for Personal Data Protection of the Slovak Republic
  25. Slovenia Information Commissioner
  26. Spain Agencia Española de Protección de Datos
  27. Sweden Swedish Authority for Privacy Protection (IMY)

Privacy authorities in non-EU member states which, together with the 27 EU member states, make up the European Economic Area include:

  1. Iceland Icelandic Data Protection Agency
  2. Liechtenstein Datenschutzstelle
  3. Norway Datatilsynet

What may also be of interest is the following list of countries, which have been recognised by the European Commission as providing adequate privacy protection:

  1. Andorra
  2. Argentina
  3. Canada
  4. Faroe Islands
  5. Guernsey
  6. Israel
  7. Isle of Man
  8. Japan
  9. Jersey
  10. New Zealand
  11. Switzerland
  12. Uruguay

How does mobile in-app advertising contribute to our web profile and how can we guard against it?

Published by Alisha McKerron

This article is the third in a series which considers how we come to have web activity profiles. To recap: in my first and second articles, we learnt that third party cookies enable our web browsing to be tracked and that sets of data related to our device — data fingerprints — can be used to do this too. The discussion thus far has been in the context of our desktops and surfing the web. But what about our mobile devices? With mobile device traffic accounting for over half (51.51%) of global online traffic and executives at Apple and Google unveiling on-device features to help people monitor and restrict how much time they spend on their phones, are we properly considering how applications may be adding to our already growing profile? More specifically, are we considering the privacy implications of the seemingly free apps which we happily download on our mobile phones?

What is an in-app? 

It may not surprise you that there is no such thing as a free lunch; the developers who wrote the mobile apps need to eat too. Consequently, some apps have ads in them — called in-apps — from which app developers derive revenue. When we download an in-app on our mobile device and agree to its privacy terms, we enable our app usage to be tracked and our profile to be enhanced. This is made possible because of mobile advertising IDs — or MAIDs for short.

How do MAIDs work?

MAIDs help app developers identify who is using their app, via an API request to the mobile device’s operating system. Both of the ‘big’ mobile platforms have their own: Google’s version is known as the GAID (Google Advertising ID) in the case of the Android operating system, and Apple’s is called the IDFA (Identifier for Advertisers) in the case of the Apple iOS operating system. Both operate in a pseudonymous way and can always be reset or zeroed out, i.e. a dummy ID of all zeros is returned.
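As a concrete illustration of what “zeroed out” means, an ad SDK receiving the dummy ID might check for it along these lines. This is my own sketch; `isOptedOut` is a hypothetical helper, not a real platform API, though the all-zeros UUID shown is the standard dummy value:

```javascript
// The dummy value both platforms return when the user has limited ad
// tracking: a standard 36-character UUID consisting entirely of zeros.
const ZEROED_MAID = "00000000-0000-0000-0000-000000000000";

// Hypothetical check an ad SDK might perform before personalising ads.
function isOptedOut(maid) {
  return maid === ZEROED_MAID;
}

// A real MAID is just a randomly generated UUID, e.g.:
console.log(isOptedOut("38400000-8cf0-11bd-b23e-10b96e40000d")); // false
console.log(isOptedOut(ZEROED_MAID));                            // true
```

When the dummy ID is returned, requests from different apps on the same device can no longer be tied together, which is exactly what breaks the cross-app profile.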

A MAID’s value lies in identifying a user, not a device. Combined with a large pool of data, MAIDs can be used to match up someone’s mobile habits with their desktop, connected TV, and even their offline habits, thereby gaining a fuller picture of who they are and how to market to them. For example, if the app user has a Facebook account, has installed the Facebook app on their mobile device and has downloaded various other apps, Facebook will be able to connect the identity of its account holder to the mobile device and start to track their app use — rather like third party cookies.

This is all made possible by ad tech mobile infrastructure, helped by software development kits (SDKs) that can be embedded in the app code by developers — sometimes with little understanding of how they work. For example, the AdMob SDK uses Google’s data and the MAID to display ads in developers’ apps that are personalized to the user (because Google knows who the user is from the MAID), instead of generic ones. As personalized ads generally perform better, the developer makes more money. Unsurprisingly, developers wishing to increase their user numbers will use more than one SDK. We can tell how many SDKs are embedded in an app by how many privacy notices it has.

Everyone wins: with the use of a mobile advertising platform, developers are able to offer up ad requests to brands, and brands, with the help of publishers, are able to increase the visibility of their products using targeted marketing. If the targeting is accurate (i.e. users engage with the ads and product is sold), everyone makes money. Perhaps this is one of many reasons why Chrome is able to phase out support for third party cookies. However, with Apple’s iOS 14 update, which requires developers to ask permission before accessing the IDFA, MAIDs may become less useful.

How can we protect ourselves?

With just a few taps on either an Android or iOS device, we can disrupt the profiles ad networks have collected about us. To do it on Android, go to Settings > Privacy > Advanced > Ads and toggle on Opt out of Ads Personalization. On iOS, navigate to Settings > Privacy > Advertising and toggle on Limit Ad Tracking. If we don’t want to stop ad tracking altogether — we’re getting ads anyway, might as well be relevant — we can navigate to those same screens and tap Reset advertising ID on Android or Reset Advertising Identifier on iOS to cycle our ad ID and essentially force advertisers to start a new profile on us. Android actually shows us our (very long) alphanumeric ad ID at the bottom of this screen, and when we initiate a reset we can watch it change. A clean slate never hurts.

But how effective is this really? While Apple and Google have increasingly limited what apps can collect for advertising purposes, other hardcoded IDs still exist — device identifiers such as serial numbers, and other permanent sequences such as our device’s Wi-Fi MAC address — and some apps have legitimate reasons to collect them.

Perhaps the answer lies in pressuring industry to comply by supporting consumers’ expectation that they can tell any and all companies not to track them when they are not intentionally choosing to interact with them.

What ELSE makes it possible for us to have a web activity profile and how can we guard against it?

Published by Alisha McKerron

In my last article, “What makes it possible for us to have a web activity profile and how can we guard against it?”, we learnt that third party cookies enable our internet browsing to be tracked and that there are various ways we can block them. However, there are other methods of tracking that can be used — for example, browser fingerprinting techniques.

What are browser fingerprinting techniques?

Just as our unique fingerprints can be used to identify us, so can a set of data related to our device — from the hardware, to the operating system, to the browser and its configuration. We may be surprised, if not dismissive, that such information has any value, since the devices and software we use are pretty common. But consider that every time we visit a webpage our browser is communicating with the server hosting that page; consider the variable content (text, pictures, logos, live feeds etc.) of each webpage and the settings on our computer and hardware needed to render it; and consider that combining all of this information into one set of data can create reasonably effective identifiers. Adding more data to the mix identifies increasingly specific groups of users: for example, while 10 people may share the same browser, only 5 might share the same browser and operating system, only 3 the same browser, operating system, and screen size, and so on, until ideally there’s enough data to uniquely identify one user, because nobody else shares the same device or browser-specific attributes.

Examples of this kind of data include plug-ins, time zone, screen size, screen orientation, display aspect ratio, system fonts, whether cookies are enabled, language, ad blocker used, device memory, and type of browser (e.g. Firefox, Chrome, Safari).
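To make the combination step concrete, here is a minimal sketch of my own (not any particular tracker’s code) that joins a handful of such attributes into one string and hashes it with FNV-1a. In a real browser the values would come from objects like `navigator` and `screen`; here they are hard-coded so the idea is visible, and real fingerprinting scripts gather far more attributes and use stronger hashes:

```javascript
// FNV-1a: a simple 32-bit hash, used here purely for illustration.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // 32-bit multiply
  }
  return hash.toString(16);
}

function fingerprint(attrs) {
  // Concatenate attributes in a fixed order so the same device
  // always produces the same identifier on every visit.
  return fnv1a(
    Object.keys(attrs).sort().map(k => `${k}=${attrs[k]}`).join("|")
  );
}

const device = {
  browser: "Chrome 120",
  os: "Windows 10",
  screen: "1920x1080",
  timezone: "Europe/London",
  language: "en-GB",
  cookiesEnabled: true,
};

console.log(fingerprint(device)); // same attributes -> same hash every visit
```

No single attribute identifies us; it is the fixed-order combination that does, which is why each extra attribute narrows the matching group of users.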

So how is this data collected? HTTP — through a series of requests and responses — allows websites (or, more correctly, the servers serving web pages) to interact with our browser and retrieve information in the process of serving up a web page. How this is done is discussed in my last article. The information our browser receives consists of so-called web resources (like HTML, CSS, and JavaScript files) that give instructions to our browser about what it should render on our computer screen. Whereas HTML and CSS are languages that give structure and style to web pages, JavaScript gives web pages an interactive element that engages users. It is the existence of JavaScript that is most relevant when it comes to digital fingerprinting.

What is JavaScript?

JavaScript is a programming language that allows web designers to implement complex features on web pages. Every time a web page does more than just sit there and display static information for us to look at — displaying timely content updates, interactive maps, animated 2D/3D graphics, scrolling video, jukeboxes, etc. — we can bet that JavaScript is probably involved. It is widely used across the web because it has this ability to create rich interfaces, it plays nicely with other languages, can be used in a huge variety of applications, and is relatively simple to learn and implement.  

What is relevant is that it is designed to run in our browser (i.e. client side, as opposed to server side). JavaScript files are embedded in HTML documents which are served to our browser. Our browser creates a representation of the HTML document, called the Document Object Model (DOM), and JavaScript is able to manipulate the elements in the DOM in order to make a web application responsive to the user. This can make the webpage quite a lot faster (unless outside resources are required) and can reduce demand on website servers.

Also relevant is that, since the mid 2000s, browsers have enabled JavaScript by default, without our prior explicit permission. This is because these scripts are considered safe — they cannot be used to make evil file-destroying viruses. Also, when our browser loads a webpage it runs it inside an isolated browser tab, which prevents it from interacting with the rest of the software on our computer. But what about the unintended consequences of JavaScript?

Unintended consequences of browser fingerprinting 

It is important to point out that just running JavaScript in our browser does not in itself expose any identifying information. However, because the code executes on our computer, websites interested in identifying us can exploit certain JavaScript features for fingerprinting. They do this by writing JavaScript that detects subtle differences in how different browsers and hardware configurations interpret and run the code, and by probing the various JavaScript features the browser provides.
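One example of the kind of subtle difference such scripts probe: the exact low-order digits produced by floating-point maths functions can vary between JavaScript engines and the maths libraries beneath them, so their combined output contributes entropy to a fingerprint. The sketch below is a simplified illustration of this “math fingerprinting” idea, not any particular tracker’s code:

```javascript
// Collect the results of a few floating-point functions whose exact
// low-order bits can differ between engines and underlying math
// libraries. On any one machine the output is stable, which is what
// makes it usable as one ingredient of a fingerprint.
function mathFingerprint() {
  const probes = [
    Math.tan(-1e300),
    Math.sin(1e300),
    Math.sinh(1),
    Math.expm1(1),
    Math.log1p(10),
  ];
  // Serialise at full precision so tiny differences are captured.
  return probes.map(v => v.toString()).join(";");
}

console.log(mathFingerprint());
```

On its own this yields only a few bits of information; trackers combine it with canvas rendering, font lists, and the attribute data described above.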

Additionally, although JavaScript is not an insecure programming language, code bugs or improper implementation can create backdoors which attackers can exploit. This is explained more fully in this article. Should we be concerned about this?

Uses of fingerprint data

Like cookies, browser fingerprinting can benefit us — for example by improving security, or by allowing us to receive services that are useful to us — and in that sense it is a power for good. But it benefits third parties too, such as the advertising industry, with its 2020 Q2 global digital ad spend of $614 billion. Since fingerprinting happens without our knowledge and at our expense, it is a serious threat to our online privacy. How can we protect ourselves against it?

Protecting ourselves from browser fingerprinting

The most drastic measure we can take is to turn JavaScript off completely in our browsers. This will stop any JavaScript code from running at all — fingerprinting scripts included. But it will make everyday browsing more difficult; most websites rely on JavaScript, and very few will work as well without it.

Perhaps less drastic, but requiring some input on our side, would be to add plugins or browser extensions that control when JavaScript is turned on or off.


I don’t think anyone will disagree that it’s important to gain an understanding of what makes it possible for us to have a web activity profile. Being careful about what JavaScript we allow our browser to run can go a long way in protecting our privacy. 

What makes it possible for us to have a web activity profile and how can we guard against it?


Published by Alisha McKerron

Most of us will be aware of web profiling, with the advent of the General Data Protection Regulation (GDPR) and some shocking data breaches – the most infamous being Cambridge Analytica. We have all heard how companies like Facebook and Google can use cookies to follow us around the internet and keep track of what we are interested in. They do this to serve targeted advertising or, in some cases, even to share that data with others without our permission. We will also be aware of cookie banners and privacy notices which disclose, amongst other things, how our personal data is collected and with whom it is shared. But how many of us actually read these things? I suspect not many, given how few of us read websites’ terms of service. (It’s worth looking at Terms of Service; Didn’t Read, if you are one of them.) Perhaps we might feel and behave differently if we had a better understanding of one of the many tools that enable tracking – namely cookies. What are they, and why do they exist?

First party cookies

The cookie – a small, often encrypted, text file – was invented in 1994 by an employee of Netscape Communications, the company that made the Netscape browser. At the time Netscape was trying to help websites become viable commercial enterprises. One of its employees, Lou Montulli, was creating an online shop, and he didn’t want to store the contents of the shopping cart on the website’s server. So what he did was store it in the user’s browser until the user made their purchase. This proved to be a useful solution, as it meant that the server did not need to spend time and money keeping track of everyone’s shopping cart. It also proved useful in other instances – for example, in identifying users.

Simply Explained describes how cookies work in their youtube clip:

“Let’s imagine we have a website that requires people to log in to see the content of the site. When you log in, your browser sends your username and password to the server, which verifies them and – if everything checks out – sends you the requested content. However, there is a small caveat. The HTTP protocol – which is used to browse the internet – is stateless. That means if you make another request to the same server, it has forgotten who you are and will ask you to log in again. Can you imagine how time consuming it would be to browse around a site like Facebook and have to log in again every time you click on something? So, cookies to the rescue! You still log into the website, and the server still validates your credentials. If everything checks out, however, the server not only responds with the content but also sends a cookie to your browser. The cookie is then stored on your computer and submitted to the server with every request you make to that website. The cookie contains a unique identifier that allows the server to “remember” who you are and keep you logged in.”

As you can see this type of cookie (known as a first party cookie) is helpful and makes our lives easier.


If we are interested in getting under the hood of our web browser, then cookies can be explained as follows. When we type in the HTTP address of an online shop we wish to visit, that web page in its entirety is not actually stored on a server, ready and waiting to be delivered. In fact, each web page that we request is individually created in response to our request. Our web browser submits a request message to the server hosting the website in order to retrieve the webpage. The Hypertext Transfer Protocol dictates that this request message be submitted in a set way: first must come a method (e.g. GET), which indicates a desired action to be performed on the identified web resource; next the path of the web resource (/ ….); and then the request header fields. Likewise, the protocol dictates that the server’s response be submitted in a set way: HTTP status code; response header fields; and an optional message body which is used to deliver web resources. The relevance of all this is to explain how and at what stage cookies are passed from web browser to server and vice versa.

If we have not visited the website before, and therefore have never received cookies from it, and the server wants our browser to store its cookie/s, it includes it/them in an HTTP response header called Set-Cookie. If we have visited the website before, our browser looks to see if it has cookies for the site that have not expired and, if it finds any, it puts them in a request header called Cookie. HTTP headers can be viewed in the web development tools that come as browser add-ons or built-in features in web browsers.
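To make the Set-Cookie mechanics concrete, the following simplified sketch of my own parses a Set-Cookie header into the name, value and attributes the browser would store. Real browsers follow the much fuller rules of RFC 6265 and handle many edge cases this sketch ignores:

```javascript
// Parse one Set-Cookie response header into a plain object.
// The first "name=value" pair is the cookie itself; everything after
// the semicolons (Path, Expires, HttpOnly, ...) is an attribute
// telling the browser when and where to send the cookie back.
function parseSetCookie(header) {
  const [pair, ...attrs] = header.split(";").map(s => s.trim());
  const eq = pair.indexOf("=");
  const cookie = { name: pair.slice(0, eq), value: pair.slice(eq + 1) };
  for (const attr of attrs) {
    const i = attr.indexOf("=");
    const key = (i === -1 ? attr : attr.slice(0, i)).toLowerCase();
    cookie[key] = i === -1 ? true : attr.slice(i + 1); // flags become true
  }
  return cookie;
}

const c = parseSetCookie(
  "sessionid=abc123; Path=/; Expires=Wed, 21 Oct 2026 07:28:00 GMT; HttpOnly"
);
console.log(c.name, c.value, c.httponly); // sessionid abc123 true
```

On every later request to that site, the browser sends back just the pair, as `Cookie: sessionid=abc123`; the attributes stay on our machine.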

Third party cookies

Cookies become a cause for concern when they are used by external servers which the website relies on to deliver content. Think about what we typically find on websites: images; media; links to YouTube, Twitter, and Facebook; advertisements; Facebook Like buttons etc. In order for our browser to serve up this content, it will send a request to a third party website. When this happens, the external website might place a cookie (called a third party cookie) on our browser (or, to be more precise, it asks the browser to store the cookie). Our browser would then send the information contained in the cookie the next time it made a request to that external site – helping that site remember who we are. With the help of the HTTP referer header, a site loaded as a third-party resource will also know which (first party) website we were visiting. This is not such good news, because the third party cookie enables our web browsing to be tracked.

Simply Explained goes on to explain how this works using Facebook as an example:

“Well, the whole process starts when you log in to Facebook. To remember that you’re logged in, Facebook stores a cookie on your computer; nothing unusual about that, many other sites do the same thing. This cookie is scoped, or bound, to Facebook’s domain name, meaning that no one else besides Facebook can read what’s in the cookie. Let’s now imagine that you browse away and you land on someone’s blog. The blog cannot read your Facebook cookie; the scope prevents that. Facebook also can’t see that you’re on this blog. All is well. But let’s now assume that the owner of the blog places a Facebook like button on his website. To show this like button, your browser has to download some code from the Facebook servers, and when it’s talking to Facebook, it sends along the cookie that Facebook set earlier. Facebook now knows who you are and that you visited this blog. I’m using Facebook as the example here, but this technique is used by many other companies to track you around the internet. The trick is simple: convince as many websites as possible to place some of your code on their sites. Facebook has it easy because a lot of people want a like or share button on their website. Google also has an easy job because many websites rely on its advertisement network or on Google Analytics. At this stage, cookies are getting out of hand.”

Unfortunately, the information sites can gather by tracking us around the web in this manner has proved to be quite lucrative. As a consequence, there are websites that have capitalised on third party cookies by embedding small digital image files (called tracking pixels) in web pages. The image could be as small as a single pixel, and could be the same colour as the background, or completely transparent. Although we may not see the image, our web browser will automatically send a request to the external hosting server, and so the process described above is triggered.

Guarding against third party cookies

How can we best protect ourselves? The first thing we can do is run a Panopticlick test to determine how good a job our web browser is doing of protecting us from tracking. If the results are not as good as we expected, then we should consider installing a browser extension that blocks third party cookies, such as Privacy Badger or Ghostery. We could also switch to a browser with built-in protection, such as Firefox or Safari, or, if we wish to continue using our current browser, ensure that we have blocked third party cookies in our browser settings.

If we don’t want to do anything, the law is on our side. In Europe, we have the GDPR, which requires websites to be transparent about their use of cookies and to offer users simple ways to opt out. We’ve probably seen these annoying cookie banners asking for our permission. Next time we see them, we shouldn’t just click on accept, but look at what cookies the website wants to place on our computer and for what purpose. More than ever it is important that we get involved and, if necessary, enforce our rights – particularly since a new study by researchers at MIT, UCL and Aarhus University has revealed that most cookie consent pop-ups served to internet users in the EU are likely to be non-compliant. We must do this, if not for ourselves, then for the sake of other web users.

International transfers: what will be the effect of a no deal Brexit?

Published by Alisha McKerron on 22 August 2019

With a no deal Brexit looking like a genuine possibility on the 31st of October, it’s worth considering afresh its implications for cross border data flows, from the point of view of EEA organisations (which will continue to be subject to the General Data Protection Regulation (GDPR)) and UK organisations (which will become subject to a UK version of the GDPR). The good news is that the UK government has done what it can to ease the process.

Personal data flowing into the UK from the EEA

For transfers of data into the UK, a no deal Brexit will mean that EEA organisations have to legitimise the flow of personal data into the UK. This is because the UK’s status will change (under the GDPR) to that of a third country and, rather importantly, cross-border transfers to third countries are prohibited (without a lawful data transfer mechanism, that is)! In other words, the UK would become like any other non-EU country with respect to data transfers: any EEA organisation would need a lawful data transfer mechanism (under art. 44, GDPR) to continue to transfer personal data.

UK organisations receiving personal data from EU organisations will therefore have to request such EU organisations to use a suitable cross border transfer mechanism.

If the UK is recognised as an “adequate” country (under art. 45(1), GDPR), the status quo could continue without having to implement any other transfer mechanism. But achieving adequacy status requires satisfying the EU Commission that the UK provides a level of protection equivalent to that of the EU. This may take some time to determine because, although the UK has adopted the GDPR into its domestic legislation, it has far reaching government surveillance powers which may adversely affect data subjects’ privacy rights. Until this issue has been resolved, EEA organisations will have to look to other transfer mechanisms.

EU Commission approved standard contractual clauses may be a suitable choice, as they are widely used for transfers around the world and could easily be introduced into existing documentation. However, their validity is currently being questioned in a case before the European Court of Justice (Schrems II); a final decision should come out around the end of this year.

A regulatory approved set of rules (under art. 47, GDPR) binding a group of undertakings, or group of enterprises engaged in a joint economic activity, could be considered, but these require time and money to set up.

Needless to say, it will be up to EU organisations to decide which mechanism to use. The European Data Protection Board’s “Information note on data transfers under the GDPR in the event of no-deal Brexit” should help them make the correct decision. But what about data flows from the UK to the EU?

Personal data flowing out of the UK to the EEA

For transfers in the other direction, what was said above pretty much applies in reverse (albeit under the UK’s version of the GDPR, instead of the real thing). The status of EU member states (from the UK’s point of view) will change to that of ‘third countries’, and a data transfer mechanism will be required, in order to continue transferring personal data. However, cross-border transfers will be easier because the UK has made it clear it intends to permit data to flow from the UK to EEA member states. It has also committed transitionally to recognising EEA member states and Gibraltar as “adequate” and so data transfer can continue as it currently is.

Personal data flowing out of the UK to countries that are not EEA member states

Transfers to third countries which are not EEA member states will stay the same too; the UK government will mirror the status quo by adopting the same approach as the EU. It will recognise the same list of countries as being “adequate”, recognise the standard contractual clauses approved by the European Commission, and recognise any binding corporate rules approved by supervisory authorities. Interestingly, the UK’s version of the GDPR will have extraterritorial jurisdiction and apply to the EEA! This is all explained in the UK government guidance note entitled “Amendments to UK data protection law in the event the UK leaves the EU without a deal”. So what steps should UK organisations take to protect themselves?

What you should do

UK organisations need to assist their EEA stakeholders/organisations in assessing their exposure to cross-border transfers to the UK. Both parties should consider the necessity of cross-border transfers. Perhaps data flows could be minimised or even temporarily stopped, pending a favourable UK adequacy decision. If their EEA stakeholders/organisations continue to transfer any personal data to them, they must use a suitable transfer mechanism under the GDPR. Whilst the outcome of the Schrems II case is pending, standard contractual clauses should be avoided even though they are approved.

Organisations in the UK have somewhat less cause for concern, since the UK has committed transitionally to recognising EEA member states and Gibraltar as “adequate” and so data transfers to the EEA member states can continue as they are. However UK organisations should review their documentation (for example, what their privacy notices and data processing agreements say about international transfers, since EEA transfers will now fall into this category) and maintain organisational awareness of the issue.

Aside from cross border transfers, they should also consider whether they have to appoint a representative in an EEA member state under article 27 of the GDPR – another side effect of becoming a third country. The same question needs to be considered by EEA organisations in relation to the UK.

Cross Border Transfers: What should companies be doing pending the judgement of Schrems II?

Published by Alisha McKerron on 19 August 2019

International transfers

Under the General Data Protection Regulation (GDPR), we are not allowed to transfer personal data to countries outside the European Economic Area (EEA). If we do, we must use a lawful method of cross border transfer (art. 44 GDPR), which is designed to ensure a level of protection equivalent to that in the EU.

This seems straightforward; it is merely a question of identifying what lawful methods of cross border transfers are available, and choosing the least onerous one. In reality, however, it is anything but, especially with Brexit looming and two important cases pending in the Court of Justice of the European Union (CJEU).

SCC and the EU-US Privacy Shield

Two popular methods of transfer are being challenged in the CJEU – namely, transfers on the basis of EU Commission approved standard contractual clauses (SCC) in Case C-311/18 (also known as Schrems II), and transfers on the basis of there being an adequate EU-US Privacy Shield, in Case C-511/18 La Quadrature du Net. (It’s worth noting that until either challenge is upheld, both methods continue to be valid.)

La Quadrature du Net has been postponed, pending the outcome of the Schrems II case. A hearing of Schrems II took place on 9 July this year, but a decision is unlikely before the end of 2019 or early 2020. Whilst we wait for a decision, we would be foolish to ignore the fact that a successful challenge would put businesses in a hugely difficult and worrying position.

If SCC and the EU-US Privacy Shield are no longer valid

For starters, SCC and the Shield are widely used by businesses within the European Economic Area (EEA) to legitimise the transfer of personal data to countries outside the EEA. Alternative methods of transfer are not really suitable because they are either limited, expensive, take time to put in place, are not yet available or a combination of all of those things.

If either of these methods are struck down, there could be rather unpleasant consequences: the court could halt data flows outside the EU, third parties could claim for compensation, and possible GDPR revenue-based fines and regulatory sanctions could follow. Companies would also have to pay the cost of remedying the problem as soon as a solution was found.

You may be wondering why we could be placed in this situation after using transfer methods which have, after all, been approved by the Commission. Shouldn’t data controllers or processors be held accountable only to the extent that they did not adhere to the SCC? Perhaps the CJEU will find that, even if transfers to the U.S. are problematic, organisations do not have to stop using SCC or the Shield; instead, data protection authorities would have to suspend problematic data flows and the Commission would be asked to revise the SCC and reconsider the Shield.

However, this line of thinking ignores a central challenge being made in the Schrems II case – namely, the failure of the SCC to provide EU citizens with meaningful redress against mass surveillance by US authorities.

This failure, according to DLA Piper, has given rise to a widely held expectation amongst privacy professionals that the CJEU will invalidate the SCC (which would be consistent with its approach in the earlier Schrems I case). Worse still, once the CJEU has made its decision, it will take effect immediately and apply retroactively!

What you should do

Accordingly, it is vital that you plan for the worst – particularly given that any infringement of the GDPR has the potential to attract a fine of up to 4% of an organisation’s annual worldwide turnover, or €20,000,000 – whichever is greater (!).
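The upper bound described above is simple arithmetic, and can be sketched as follows. (This is an illustrative helper only – the function name and figures used here are hypothetical examples; actual fines are set case by case by the supervisory authority and may be far lower.)

```python
def max_gdpr_fine(annual_worldwide_turnover_eur: float) -> float:
    """Maximum fine under GDPR art. 83(5): the greater of 4% of
    annual worldwide turnover or EUR 20 million."""
    return max(0.04 * annual_worldwide_turnover_eur, 20_000_000)

# A business with EUR 1bn turnover faces a cap of EUR 40m...
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
# ...while a smaller business is still exposed to the EUR 20m floor.
print(max_gdpr_fine(100_000_000))  # 20000000
```

As the second call shows, the €20m figure bites for any organisation whose turnover is below €500m, which is why even smaller businesses cannot afford to ignore the risk.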

You should assess your exposure to cross-border transfers of data (by finding out to whom, where and on what basis you are transferring personal data). You should draw up an action plan – for example, consider stopping some types of cross-border transfers, or investigate alternative methods of transfer. Perhaps you could use data centres inside the EEA. You should discuss contingency plans internally and with suppliers.

However, the principle of safety in numbers might well still apply; you will certainly not be the only one affected should either the SCC or the Shield be struck down by the CJEU. There may be a period of leniency, since there are no readily available alternatives for large-scale cross-border transfers of personal data outside the EEA. In any case, contingency planning should help you assess the impact of the CJEU’s decision and enable you to hit the ground running.

Useful Article: UK – Liability Limits for GDPR in Commercial Contracts – the Law and Recent Trends

Published by Alisha McKerron on 5 March 2019

In her article (listed in the Menu of this blog) entitled GDPR is Coming: 7 Steps Processors Need to Take to be Compliant (12 December 2017), Alisha sets out the mandatory provisions (concerning data processors) which must be inserted in data processing agreements (art. 28 GDPR). The consequences of contractual breaches or non-compliance with GDPR are not discussed in any detail.

This important topic is discussed in DLA Piper’s article (7 February 2019) UK: Liability Limits for GDPR in Commercial Contracts – the Law and Recent Trends which looks at how to allocate the risk and liability when negotiating commercial contracts. It considers:

  • Obligations – the source of liability;
  • Types of liability;
  • Limits of liability.

It concludes that:

“Limiting financial liability under GDPR has been made much more complex than under the Data Protection Act 1998, both because the nature of the obligations placed on both parties has changed and because the consequences of breaches are much more serious. Parties looking to limit their exposure should be realistic and not assume that it will be either possible or desirable to simply pass liability to the other party under the contract in all circumstances, instead, they will need to take a more balanced approach to liability, based on the terms of GDPR and who has caused the loss in question to arise.”

Useful article: reaching the end of your GDPR journey – what should you be thinking about now?

Published by Alisha McKerron on 27 February 2019

In its article GDPR nine months on | What should you be thinking about now?, Osborne Clarke lists nine items to consider:

  • Updates to existing policies and procedures
  • New policies or procedures
  • Supplier relationships
  • Privacy Impact Assessments
  • GDPR training refresh
  • Data transfers and no-deal Brexit
  • Security breaches and ICO enforcement
  • Compliance strategy
  • One year audit

This is a useful continuation of A GDPR Journey: Where to Start and What to do Next (listed in the Menu of this blog), depending on where you are on your GDPR journey.

What Impact do Search Engines have on Individuals’ Reputations and does the “new” Right to be Forgotten Assist in any way?


Published by Alisha McKerron on 25 February 2019

What would we do without modern day commercial search engines? For starters it would take us much longer and require much more effort to find answers to everyday questions. Search engines allow us to find the proverbial needle in a haystack.

At first glance this may seem like a good thing, but what if the search results produce links to incriminating information about us? What protection, if any, do private individuals have?

Google vs Spain

This question was considered in the landmark case of Google v. Spain (C‑131/12). The case involved an individual who requested the removal of a link to a digitised 1998 article in the La Vanguardia newspaper about an auction of his foreclosed home, for a debt that he had subsequently paid. He asked the news organisation to remove the article and Google to remove any links to it. The Spanish Data Protection Agency ruled that the news organisation should be left alone but that Google should remove any links to the article.

On appeal, the European Court of Justice affirmed the judgment of the Spanish Data Protection Agency, i.e. it upheld press freedoms by rejecting the request to have the article concerning personal bankruptcy removed from the website of the news organisation. However, the Court ruled that European citizens have a right to request that commercial search firms, such as Google, which gather personal information for profit, remove links to private information when asked, provided the information is no longer relevant. The Court found that the fundamental right to privacy outweighs the economic interest of the commercial firm and, in some circumstances, the public interest in access to information.

(It’s worth mentioning that in November 2018 Google held an 89.1% market share in the UK.)

Google subsequently set up an online removal-of-links-from-its-search-results form for customers to use. It has also published a useful guide entitled “Fix problems & request removals” on Google Search Help. The guide explains the few instances in which Google will remove content from Search, which include sensitive personal information, like your bank account number, an image of your handwritten signature, or a nude or sexually explicit image or video of you that has been shared without your consent. Interestingly, the guide does not refer to data that is “inadequate, irrelevant or excessive in relation to the purposes of the processing” (para 92, Google v. Spain).

Right to erasure (“right to be forgotten”) (art. 17 GDPR)

Two years after the Google v. Spain judgment, the General Data Protection Regulation (GDPR) 2016 was published, which included a right to erasure (art. 17). This is also known as the right to be forgotten and has been described as “the right to silence on past events in life that are no longer occurring.” It is distinct from a private right (which involves information which is not publicly known) because it involves removing information that was publicly known at a certain time and not allowing third parties to access it. Although referred to as a new right, it is not entirely new; it existed to some extent in EU law and in the first data protection laws enforced in Europe.

Under GDPR, we have the right to have our personal data erased in six circumstances:

  • if the organisation no longer needs our data;
  • we initially consented to the use of our data, but have now withdrawn our consent;
  • we have objected to the use of our data, and our interests outweigh those of the organisation using it;
  • the organisation has collected or used our data unlawfully;
  • the organisation has a legal obligation to erase our data; or
  • the data was collected from us as a child for an online service.

Exemptions to the right to erasure (art. 17(3) GDPR)

Our right to erasure does not apply if processing is necessary for one of the following reasons (GDPR art.17(3)):

  • to exercise the right of freedom of expression and information;
  • to comply with a legal obligation;
  • for the performance of a task carried out in the public interest or in the exercise of official authority;
  • for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, where erasure is likely to render impossible or seriously impair the achievement of that processing; or
  • for the establishment, exercise or defence of legal claims.


In summary, our right to erasure is limited and is trumped by certain exemptions, freedom of expression and information (or the right of the public to have access to information) being one of them. This is demonstrated by the court’s 2017 ruling in the Manni case (C-398/15), which clarifies that an individual seeking to limit access to his/her personal data published in a companies register does not have the right to obtain erasure of that data, not even after his/her company has ceased to exist.

Mr Manni requested his personal data to be erased from the Public Registry of Companies after he found out that he was losing clients who performed background checks on him through a private company that specialised in finding information in the Public Registry. This happened because Mr Manni had been an administrator of a company that was declared bankrupt more than 10 years before the facts in the main proceedings. In fact, the former company itself was removed from the Public Registry. The court concluded that Mr Manni did not have the right to obtain erasure from the Companies Register, but he did have a right to object.


Case law shows that the web and search engine results affect individuals’ reputations, and not always in a positive way. Privacy law does, however, offer us some protection.

The right to be forgotten under GDPR gives us the right to have our personal data erased but only in limited circumstances (listed above) and not if any of the exemptions (listed above) apply. One of these exemptions is freedom of expression. The effect of this is to exempt companies listed as “media” companies.

The Google v. Spain case gives us a right to request that commercial search firms, that gather personal information for profit, should remove links to private information when asked, provided the information is no longer relevant.

So, what practical steps should we take if searching our name on the internet brings back a link to information about us, and this is having a negative effect on our privacy?

Personal data

The first step we should take is to ask the publisher to remove the personal data from its website; that way it will no longer appear in search results. Should the publisher refuse to do so, and we are satisfied that one of the six circumstances mentioned above applies and none of the exemptions mentioned above apply, we should complete the Information Commissioner’s Office (ICO) online complaint form so that the ICO can pursue the matter further on our behalf.

If we are not satisfied that one of the six circumstances mentioned above applies, we could ask the publisher to use the robots exclusion standard to inform web robots or crawlers not to process or scan the page containing the personal data. This will stop any links appearing in search results. However, the publisher may well reject this request on the basis that its freedom of speech trumps our right to privacy.
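To see how the robots exclusion standard works in practice, here is a minimal sketch using Python’s standard-library robots.txt parser. The rules and page paths are hypothetical examples; a publisher would place rules like these in the /robots.txt file at the root of its site, and well-behaved crawlers check them before fetching a page.

```python
import urllib.robotparser

# Hypothetical robots.txt rules a publisher might serve to ask
# compliant crawlers to skip a page containing personal data.
robots_txt_lines = [
    "User-agent: *",
    "Disallow: /archive/1998-auction-notice.html",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt_lines)

# A compliant search engine crawler would not fetch the disallowed page...
print(rp.can_fetch("*", "/archive/1998-auction-notice.html"))  # False
# ...but remains free to crawl the rest of the site.
print(rp.can_fetch("*", "/news/today.html"))  # True
```

Note that this is purely advisory: the standard relies on crawlers choosing to honour the rules, which is one more reason a publisher may see it as a lighter-touch alternative to deleting the page.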

Search result links to personal data

If the publisher refuses to remove the personal data from its website, the next step we should take is to complete Google’s online removal-of-links-from-its-search-results form. Although the personal data will remain on the website, it will be less visible if links are removed. Should Google refuse to remove search result links, we should complete the ICO’s online complaint form – but only if we are satisfied that the personal data is “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed” and that our right to privacy outweighs the economic interest of Google and the public interest in access to information.

If we are unsuccessful on all of these fronts, it may be worth writing an article in rebuttal or an article which others may find useful. Although searching our name on the internet will continue to bring back a link to information about us which has a negative effect on our privacy, it will now bring back our positive article as well. The more meaningful articles we publish the better.

A GDPR Journey: Where To Start and What To Do Next

Published by Alisha McKerron on 11 February 2019

The European Union’s General Data Protection Regulation (GDPR) imposes many obligations on anyone who processes personal data, with substantial fines (art. 83) for any breaches. Although some of these obligations are not altogether new, they are much more extensive: they have an expanded material and territorial scope (art. 3), extend to data processors (art. 28) and give data subjects enhanced rights (ch. III). The definition of personal data (art. 4) is much broader too. There is much more to worry about!

If you are non-compliant, what should you do, particularly if you do not have a budget to spend on making amends? Perhaps the starting point is to view privacy compliance as the end destination of an ongoing journey. Your focus should be on travelling in the right direction and being able to demonstrate this. That way, regulators are more likely to focus less on you and more on those who don’t or won’t comply. So where should one start?

The most visible starting point, for most organisations, has been the publication of a privacy notice before GDPR came into force. Less visible is the appointment of a data protection officer (DPO) (art. 37), which is required under the new regulation if you carry out certain types of processing activities. DPOs can now report to one lead supervisory authority in instances of cross-border processing, which is a welcome change.

Privacy Notice

Preparing a privacy notice is a good place for you to start, for a variety of reasons. Firstly, the content of the privacy notice is regulated (art. 13), which means that, to prepare it correctly, you will have to find answers to the following questions:

  • Who is collecting the data?
  • What data is being collected?
  • What is the legal basis for processing the data?
  • Will the data be shared with any third parties?
  • How will the information be used?
  • How long will the data be stored for?
  • What rights does the data subject have?
  • How can the data subject raise a complaint?

To find the answers you will need to update existing data maps or prepare new ones. Data maps must reflect the current situation on an ongoing basis. You will need to show that you have at least one of the legal bases (art. 6) for processing. If you are relying on old consents, you will need to refresh them so that they meet the new definition of consent (art. 4); if you are relying on legitimate interests, you should complete a legitimate interests assessment. Checking your legal bases will help you better understand how you are using personal data.

You will also need to find out whether the personal data you are processing is shared with others and mention this in your notice. Under the new regulation you are obliged to have a data processing agreement with every data processor you use. (Revising existing data processing agreements and/or agreeing new ones is an item to put on your things-to-do-next list.)

The position regarding restricted transfers of personal data to non-EU countries has not changed much: transfers continue to be restricted. There is, however, the thorny issue of Brexit looming. Have a look at the Information Commissioner’s Office guidance to help you decide whether you will be affected.

If you are making international transfers of personal data, you must disclose this (art. 15(2)) along with the permissible ground (ch. 5) you are relying on. Grounds include: the European Commission has made an “adequacy decision” about the country in which the receiver is based; the restricted transfer is covered by appropriate safeguards (including binding corporate rules); or the restricted transfer is covered by an exception.

You must also disclose the use of cookies or similar technology under the GDPR and under the Privacy and Electronic Communications Regulations (PECR), and ensure that you have a legal basis under GDPR for any processing that ensues. (It is worth taking the time to understand the overlap between PECR and GDPR, as it can be confusing.)

GDPR provides that you must not keep personal data for longer than you need it, and that you must disclose how long you will store the information. If you do not already have a data retention policy with a document retention schedule, you should prepare one.

You must notify your data subjects of their enhanced and new privacy rights and be prepared to respond if they choose to exercise them. New privacy rights include data portability (art. 20), the right to be forgotten (art. 17) and safeguards for data processing by automated means (art. 22). (Ensuring that you have updated your policies and procedures to help your staff respond to the new rights as well as the old enhanced rights (e.g. data subject access requests) in a correct and consistent way is another item to add to your list.)

Obligations with time constraints

After publishing your privacy notice, the next thing you should do is identify any privacy obligations (whether under the regulation or by agreement) with a time constraint attached. The reputational damage of non-compliance should not be underestimated.

One such obligation is the new duty to report personal data breaches (art. 33) to a supervisory authority and to affected individuals. An internal breach register must also be maintained. GDPR requires you to notify the supervisory authority without undue delay, and not later than 72 hours after becoming aware of the breach, if it is likely to result in a risk to the rights and freedoms of natural persons. If the breach is likely to result in a high risk to the rights and freedoms of natural persons, the data subjects must be informed too, without undue delay. Questions worth considering include:

  • Do you have something in place (e.g. an API or web forms to document paper incidents) that facilitates both identifying and reporting on personal data breaches?
  • Do you have a consistent approach (i.e. risk assessment) to determine whether an incident is subject to a notification obligation or are you possibly over-notifying?
  • Are you determining jurisdictions impacted and the number of individuals involved on a consistent basis?
  • Does it make sense to create a diverse team to triage and risk rank to ensure that incidents are being escalated appropriately?
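The 72-hour clock described above starts when you become aware of the breach, so it is worth being able to compute the deadline precisely. Here is a minimal sketch (the function name and timestamp are hypothetical examples; the statutory duty remains to notify "without undue delay", so the 72-hour mark is an outer limit, not a target):

```python
from datetime import datetime, timedelta, timezone

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest time to notify the supervisory authority under
    GDPR art. 33: 72 hours after becoming aware of the breach."""
    return aware_at + timedelta(hours=72)

# Example: awareness at 14:30 UTC on 9 July gives a deadline
# of 14:30 UTC on 12 July.
aware = datetime(2019, 7, 9, 14, 30, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2019-07-12 14:30:00+00:00
```

Note the deadline runs in clock hours, not business days, which is why weekend and holiday cover should feature in any breach-response plan.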

Another obligation with a time constraint is the revised subject access request (arts. 12 and 15). A request can now be communicated over the phone (art. 15(3)) and associated costs can no longer be charged. You must respond without undue delay and at the latest within one month (as opposed to the old 40 days) of receipt. The same new time period applies to the right to rectification (art. 16). Again, it is worth checking that you have sufficient resources, policies and procedures in place to respond.


The most helpful way of tackling GDPR compliance is to view it as a journey to an end destination. Expect to discover compliance weaknesses along the way, and compile a things-to-do-next list to help propel you forward. To begin with, it may feel like your end destination is getting further away rather than closer, but don’t let this bog you down. What’s important is that you keep moving in the right direction, are transparent about how you collect and process personal data, and are constantly striving to keep your customers’ personal data secure.