AI and GDPR: Data Protection and Transparency in Focus
16 June, 2020
When the coronavirus began to spread internationally after first emerging in Wuhan, China in December 2019, governments worldwide immediately started to implement various technologies aimed at tracking, diagnosing, and, ultimately, stopping the virus. These include contact tracing applications utilizing location data from mobile phones and artificial intelligence (AI) based instruments for improved diagnosis efficiency and patient classification. But while a number of governments have looked into using these tools, how such technologies are actually implemented varies widely depending on context.
With the exception of Italy and Spain—two countries with aging populations that recently underwent severe economic crises and saw massive downsizing of their health systems over the past decade—EU countries have, in general, managed the pandemic without resorting to the highly invasive measures implemented elsewhere. One important point this topical example illustrates is how the European approach to artificial intelligence differs from that of other regions of the world. The legal framework set by the EU and its member states generally places a high priority on the protection of data subjects’ personal information and on explicit consent before data are shared—for example, with a machine learning system such as those used to fight the coronavirus.
The General Data Protection Regulation, or GDPR for short, is considered to be the manual for how Europe and those who do business with it are supposed to handle privacy in most contexts. In the area of artificial intelligence, however, many believe that its guidance is far from comprehensive. The GDPR was intended both to harmonize member states’ various approaches to data protection, and to set forth general principles and guidelines for data protection that were independent of current and future trends in technological innovation. As a result, the language is often vague and will most likely undergo substantial interpretation by the member states themselves and the courts.
According to Kalliopi Spyridaki, Chief Privacy Strategist at SAS Europe, the GDPR addresses only a few key areas specifically pertaining to AI. The GDPR applies when AI processes personal data, performs profiling, or makes automated decisions that are based on personal data or that affect the data subject. This brings with it the familiar rights: the right to object to personal data processing, the right of access, the right to be forgotten, and so on. The GDPR also gives individuals the right not to be subjected to solely automated decision-making except in certain instances.
Article 22 is the only provision within the GDPR that specifically applies to automated decision-making. However, it only applies in a limited set of circumstances and cannot be considered in isolation. The actual text reads:
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
Forms of processing with legal or similarly significant effects include medical decisions (e.g., those affecting treatment), algorithmic filtering in hiring, access to education, and access to credit. Therefore, in cases where the structure of the algorithm or the data being processed could lead to life-changing errors, privacy breaches, or discrimination, Article 22 applies. However, Article 22 does not apply when the data subject gives explicit consent to the processing, when the processing is necessary for the performance of a contract, or when it is authorized by Union or member state law.
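To make the applicability logic above concrete, here is a minimal, purely illustrative sketch in Python. The function and its boolean flags are hypothetical simplifications of a legal test that is in reality far more nuanced; this is not an implementation of the law:

```python
def article_22_applies(solely_automated: bool,
                       significant_effect: bool,
                       explicit_consent: bool,
                       contract_necessity: bool,
                       authorized_by_law: bool) -> bool:
    """Hypothetical sketch of the Article 22 applicability test
    described in the text -- not legal advice."""
    # Article 22(1): only decisions based solely on automated processing,
    # with legal or similarly significant effects, are in scope.
    if not (solely_automated and significant_effect):
        return False
    # Article 22(2): the protection does not apply where the processing
    # rests on explicit consent, is necessary for a contract, or is
    # authorized by Union or member state law.
    if explicit_consent or contract_necessity or authorized_by_law:
        return False
    return True

# e.g., fully automated credit scoring with no consent or legal basis
print(article_22_applies(True, True, False, False, False))  # True
```

The exceptions in the second check are exactly why, as discussed below, universal algorithmic transparency does not follow automatically from the regulation.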
The dilemma therefore arises around whether regulations should strive towards universal algorithmic transparency, or whether exceptions to this principle should be allowed for the sake of utility. As of now, according to Spyridaki, how these rules apply to concrete cases is still being worked out by the EU and national governments. In 2017, the EU’s Article 29 Working Party adopted a set of guidelines aimed at clarifying the role of privacy in AI-driven data processing. The Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 specify the duties of data controllers in restricting personal data processing under Article 22 of the GDPR.
However, a 2019 study conducted by the European Parliament, which evaluated the legal and regulatory frameworks surrounding algorithmic bias and the discrimination that can arise from it, concluded that the GDPR is not sufficient on its own to address the problem. A rules-based approach to AI, rather than the principles-based approach of the GDPR, would be better able to keep up with the constantly changing requirements of AI adoption in the digital space. Furthermore, the inscrutability of the so-called “black box” models underlying many forms of machine learning means that explaining the processing steps to the data subject, as Article 22 contemplates, is unlikely to produce an informative outcome.
As it stands now, the GDPR itself does not establish a general right to algorithmic explanation and transparency. Only when a case falls under Article 22 does the GDPR require an explanation of the algorithmic method used, and even then not the source code or the rationale behind a particular decision. In these cases, clear, concise information must be provided to the data subject on how their data is being processed, the logic involved, and the potential consequences of the processing. Accountability would be maintained by a Data Protection Officer conducting regular Data Protection Impact Assessments, while the data subject retains the opportunity to give their point of view on how their data should be handled.
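The three pieces of information a controller owes the data subject in these cases can be summarized, again purely as an illustrative sketch, in a minimal record type; all field names are assumptions for the example, not GDPR terminology:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionNotice:
    """Illustrative record of the information described in the text
    that a controller provides under Article 22 -- field names are
    assumptions, not legal terms of art."""
    processing_description: str  # how the personal data are processed
    logic_involved: str          # meaningful description of the method,
                                 # not the source code
    potential_consequences: str  # envisaged effects on the data subject

notice = AutomatedDecisionNotice(
    processing_description="Loan applications are scored automatically.",
    logic_involved="A credit model weighs income and payment history.",
    potential_consequences="An application may be declined without human review.",
)
print(notice.logic_involved)
```

Note that the `logic_involved` field holds a plain-language description, mirroring the point above that the regulation asks for the method, not the code.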
The study also concluded that supply- and demand-side measures to increase algorithmic transparency will not be effective on their own. State intervention, at varying levels, would be the most appropriate response in these circumstances. Measures such as algorithmic literacy education, “food label”-style disclaimers, and funding for research into explainable algorithms are all possible avenues for the EU and its member states. The creation of regulatory bodies would also be necessary to carry out impact assessments on AI deployments that substantially affect the public interest. This would include involving system developers in documenting algorithmic behavior and adopting best practices at the governance and industry levels.
At the moment, the European Union is still pursuing the principles-based approach in an effort to accommodate the varying interests of its member states. The European Commission’s AI for Europe initiative is one such endeavor: it not only connects AI ecosystems and provides funding to startups, but also includes a “European Ethical Observatory” to monitor the ethical impact of AI across the continent. There are also a number of related initiatives, such as the Regulation on the Free Flow of Non-Personal Data, the ePrivacy Regulation, and the Cybersecurity Act.
Clearly defined regulations are an essential part of creating a digital single market in the EU. The free flow of non-personal data is, in fact, guaranteed under the Regulation on the Free Flow of Non-Personal Data as a means to strengthen the position of Europe’s deep tech sector vis-à-vis other advanced economies. As such, data protection and algorithmic transparency are viewed not only as means to safeguard fundamental rights, but also as tools of regulatory harmonization between member states.
As Europe continues to close the artificial intelligence gap while the technology itself advances ever more rapidly, the surrounding regulatory framework will inevitably become more complex and comprehensive. Explaining AI algorithms to laypeople in data protection terms is a monumental task in itself, not to mention the added difficulty of simultaneously negotiating the competing regulatory mindsets and worldviews of the member states. Regulation of AI in Europe will need to evolve in tandem with changes in technology and in prevailing social norms surrounding transparency and privacy. The search for regulatory purity will likely prove elusive, but successive approximations towards an ideal form of data protection will set the stage for further advances in this uncharted territory.