Ramona Weik writes,

The deployment of AI continues to grow within our economy and society, with implications for many sectors such as credit approval, media, education and healthcare. AI technology, especially technology based on machine learning, requires huge amounts of data, some of which can be very sensitive. As a result, data breaches or algorithmic biases pose significant risks for data subjects. This raises the question: does the increased use of AI technology require additional legal regulation?

Given these concerns, new laws have been passed to set standards in some fields of AI application, e.g. for automated decision-making and profiling. The European General Data Protection Regulation (GDPR) is among the most influential of these laws. In this context, Art. 22(1) GDPR grants the right not to be subject to a decision based solely on automated processing, including profiling.

Thus, at first glance, European data protection law and AI seem to have a rather conflicting relationship. However, undefined legal terms and a high degree of technical complexity complicate the interpretation of the rules. The wording of the law makes clear that not every use of AI is restricted. Instead, a series of requirements must be fulfilled: the decision must be made without any human intervention, and it must either produce legal effects concerning the data subject or similarly significantly affect them. Still, terms like “legal effects,” and the question of whether a data subject is “similarly significantly” affected, leave room for interpretation in each specific case. Further specification will eventually be provided by the courts, especially the European Court of Justice (ECJ). Even though the scope of this regulation is not fully clear, the purpose of the GDPR is not to prevent any use of AI.

Recently, California has been active in developing its own data protection standards. At the beginning of this year, only a few years after the GDPR, the California Consumer Privacy Act (CCPA) came into force. However, the GDPR and the CCPA differ widely on the relationship between AI and the protection of data subjects; the latter contains no legal regulation equivalent to Art. 22 GDPR. Yet that does not mean the use of AI is left without any legal regulation.

Following the European example, the CCPA strengthened the rights of data subjects. Particularly new is the right of data subjects to be informed about and to access their data. Among other new rights, consumers now have the right to know how information about them has been collected. For example, if AI was used to observe a consumer’s behavior, this must be disclosed.

The regulation of AI with regard to data protection is not trivial. AI needs to be regulated to ensure consumers retain influence over the use of their data. Nevertheless, AI and data protection law are not antagonists. There are even many ways AI can support data protection. For example, it can help prevent data breaches by using machine learning to detect cyber-attacks, or it can offer chatbots that help consumers understand their privacy rights.
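To make the attack-detection point concrete, here is a minimal, hypothetical sketch (not tied to any product mentioned in the article) of the simplest form such monitoring can take: flagging hours whose login-attempt counts deviate sharply from the norm, a toy stand-in for the machine-learning anomaly detectors used in practice.

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Return indices of counts that lie more than `threshold` standard
    deviations from the mean -- a toy anomaly detector, illustrative only."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hypothetical hourly login attempts; the spike at index 5 could indicate
# a brute-force attack and would be flagged for review.
hourly_logins = [12, 15, 11, 14, 13, 480, 12, 16]
print(flag_anomalies(hourly_logins))  # → [5]
```

Real systems learn far richer baselines (per user, per endpoint, per time of day), but the principle is the same: model normal behavior, then surface deviations before they become breaches.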

Source:  https://law.stanford.edu/2020/04/07/data-protection-law-and-ai-antagonists-by-nature/
