Nothing challenges the effectiveness of privacy laws like technological innovation. As the volume of data generated about individuals grows, technology makes it easier than ever to capture and analyse that data – and, in doing so, makes it ever more valuable.
Unfortunately, technology also introduces new threats. How companies collect, process and protect the personal data of their customers, staff and suppliers has therefore become a key challenge.
The General Data Protection Regulation (GDPR), which came into effect on May 25, 2018, is the European Union’s legislative response to this challenge. Drafted to be “technology neutral,” the GDPR is intended to give individuals better control over their personal data and establish a single set of data protection rules across the EU, thereby making it simpler and cheaper for organisations to do business. So far, so sensible. Unfortunately, technology always runs ahead of the law and the GDPR is already starting to show some of its limitations as the law clashes with newer technologies.
Blockchain technology
Blockchain – or distributed ledger technology – replaces the centralised transaction database with a decentralised, distributed digital ledger in which every transaction is independently verified against copies of the ledger maintained by different parties in different locations. Transactions are grouped into “blocks”, and each block is chained to the one before it by a cryptographic fingerprint, so the record of any single transaction cannot be altered without recomputing every subsequent block across the entire distributed ledger. It is this immutability that ensures the reliability of the information stored on the chain.
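A minimal sketch of this chaining (assuming SHA-256 and a drastically simplified block structure; real ledgers add timestamps, Merkle trees and a consensus mechanism) shows why tampering with one record is immediately detectable:

```python
import hashlib
import json

def block_hash(data: str, prev_hash: str) -> str:
    """Fingerprint a block's contents together with its predecessor's hash."""
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash,
            "hash": block_hash(data, prev_hash)}

# Build a three-block chain.
genesis = make_block("tx-0", prev_hash="0" * 64)
b1 = make_block("tx-1", prev_hash=genesis["hash"])
b2 = make_block("tx-2", prev_hash=b1["hash"])

# Altering the first record changes its fingerprint, which no longer matches
# the prev_hash recorded in the next block: the chain visibly breaks.
genesis["data"] = "tx-0-altered"
recomputed = block_hash(genesis["data"], genesis["prev_hash"])
print(recomputed == b1["prev_hash"])  # False: the tampering is detectable
```

Because each block’s fingerprint depends on its predecessor’s, an attacker would need to rewrite every later block on every copy of the ledger at once, which is what makes the record effectively immutable.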
The GDPR gives data subjects the right to request that their personal data is either rectified or deleted altogether. For blockchain projects that involve the storage of personal data, these legal rights do not mix well with the new technology. Drafted on the assumption that there will always be centralised services controlling access rights to the user’s data, the GDPR fails to take into account how a permissionless blockchain works. Ultimately, this may mean that blockchain technology cannot be used for the processing of personal data without potentially falling foul of the GDPR.
Interestingly, blockchain technology offers its own potential solution to this problem by allowing personal data to be kept off the various ledgers altogether. It does this by replacing the personal data with a cryptographic fingerprint of it – a “hash.” These hashes, or digital fingerprints, prove that the data exists, but without the data itself appearing on the chain.
Problem solved? Unfortunately not. The GDPR draws an unhelpful distinction between pseudonymised and anonymised data. Pseudonymisation occurs where personal data is subjected to technical measures (such as hashing or encryption) so that it no longer directly identifies an individual without the use of additional information. Anonymisation, on the other hand, results from processing personal data so as to irreversibly prevent identification. Anonymised data therefore falls outside the scope of the GDPR, whereas pseudonymised data – including hashed data – does not.
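A brief sketch (using a hypothetical email address and plain, unsalted SHA-256 for simplicity) illustrates why a hash is pseudonymisation rather than anonymisation: anyone who holds, or can guess, the underlying value – the “additional information” – can re-link the hash to the individual.

```python
import hashlib

def pseudonymise(value: str) -> str:
    """Replace personal data with its SHA-256 hash: a pseudonym, not anonymity."""
    return hashlib.sha256(value.encode()).hexdigest()

# Only this hash is stored on the chain; the email address itself is kept off it.
on_chain = pseudonymise("bob@example.com")

# But anyone holding a list of candidate values can re-identify the data
# subject simply by hashing each candidate and comparing:
candidates = ["alice@example.com", "bob@example.com", "carol@example.com"]
for email in candidates:
    if pseudonymise(email) == on_chain:
        print(f"Re-identified: {email}")  # -> Re-identified: bob@example.com
```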
Unlawful algorithms
Social media sites and search engines specialise in algorithms that allow them to target advertisements at users. However, the way those algorithms work makes all the difference and reveals an unintended consequence of the GDPR’s drafting.
Take the example of Bob. Bob decides to buy a new car and does all of his research using an internet search engine. He then posts details of his new purchase on social media. The algorithms behind Bob’s social media site correctly profile him as someone likely to buy car products or access car-related services in the future, so Bob starts to see targeted adverts on his social media page. After his hours of online research, the algorithms used by his chosen search engine reach the same conclusion, and Bob also starts to see some of those same adverts each time he goes online. While the resulting adverts may be the same, the way the two sets of algorithms achieve this result is very different.
Social media algorithms target adverts by knowing who you are, whereas search engines target adverts by knowing what you are searching for. This who-versus-what dichotomy is critical under the GDPR. Social media sites know which adverts to show Bob because they analyse his profile and hold personal data about him. The algorithms used by most search engines, on the other hand, look only at what Bob searched for: all they need to know to target their advertising is that somebody in a particular geographic area used the search term “new car.” The engines have no idea that it was Bob searching for a new car, just that someone did. Search engines can therefore ignore personal data and still achieve the same algorithmic precision; social media sites cannot.
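A schematic sketch (the field names here are hypothetical, and real advertising platforms store far richer records) of the two data shapes behind this who-versus-what distinction:

```python
from dataclasses import dataclass

@dataclass
class SocialProfile:
    """What a social media site holds: a profile tied to Bob's identity."""
    user_id: str          # identifies Bob -> personal data under the GDPR
    interests: list[str]  # inferred from his posts and browsing activity

@dataclass
class SearchEvent:
    """What a search engine needs: a query and a coarse location, no identity."""
    region: str           # e.g. a city or region, not an individual
    query: str            # e.g. "new car"

bob = SocialProfile(user_id="bob-1234", interests=["cars", "car insurance"])
anonymous_search = SearchEvent(region="London", query="new car")

# Both records support the same targeting decision ("show car adverts"),
# but only the first reveals *who* is being targeted.
```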
Should Bob be required to give his consent before his data is used in these ways? Under the GDPR, arguably yes – but only for the way the social media site uses his data. Bob has no ability to stop his chosen search engine from using the data it holds, because that data is not considered “personal data.”
Artificial intelligence
Artificial intelligence relies on machine learning, but for machines to learn, they need to crunch data – and lots of it. The GDPR makes it more difficult for those machines to get the data in the first place, and once they have it, the rights granted to data subjects could also make it difficult for companies to reap the full benefits of machine learning.
The volume of data available for machine learning is not a problem, but under the GDPR, using that data lawfully often will be. This is because those developing machine learning systems will often be data processors rather than data controllers. Data processors are not permitted to decide for themselves how personal data is used; they may only use it as directed by the data controller and with the consent of the data subject.
Assuming consent is obtained and the machines learn from the data they consume, the output those machines generate may also be restricted by the GDPR. This is because data subjects have a right under the GDPR not to be subject to a decision based solely on automated processing where that decision significantly affects them. In other words, the scope for letting machines make automated decisions will largely depend on how those decisions affect our lives. Automated decisions about our shopping habits will probably be fine, but automated decisions that determine a career promotion or a mortgage application are likely to be challenged in the future.
Conclusion
With the GDPR now in force, the long arm of EU data protection law is not only reaching beyond the EU’s borders; it is also potentially shaping our use of new technologies.
Technology will not pause to let the law catch up, which means legal frameworks like the GDPR must remain flexible enough to strike a balance between technological progress and the protection of individual privacy.