First GDPR Fine in Sweden. The Data Protection Authority of Sweden has issued its first GDPR fine – a penalty of 200,000 Swedish kronor (approximately $20,000) – for the unlawful use of facial recognition technology to monitor student attendance at a high school in Sweden. The regulator held that the local school board had violated the GDPR in several ways. First, the use of facial recognition was found to be excessively intrusive relative to the purpose of monitoring student attendance, in violation of Article 5(1)(c) of the GDPR, which lays down the 'data minimization' principle.
Second, under Article 9 of the GDPR, facial recognition data constitutes a 'special category of data', which may be processed only pursuant to a recognized legal basis for this type of data. One of the permissible bases is the data subject's explicit consent, and the school board attempted to rely on it, arguing that all the students and their parents had consented. However, the Swedish regulator held that the permission given by the students did not qualify as valid consent under the GDPR, because it was not freely given, considering the imbalance of power and the students' dependence on the school.
Finally, the school was found to have violated Article 35 of the GDPR, which requires conducting a documented Data Protection Impact Assessment (DPIA) before engaging in data processing that entails elevated data protection risks, such as facial recognition monitoring.
The Swedish regulator cited two mitigating circumstances in support of the relatively modest fine: the brief duration of the facial recognition project at the school – just three weeks – and the small number of students monitored – only 22.
CLICK HERE to read the Swedish privacy regulator’s decision (in Swedish).
UK Commentary on Artificial Intelligence under the GDPR. The UK's privacy watchdog, the Information Commissioner's Office (ICO), continues to publish commentary on the interplay between Artificial Intelligence (AI) technology, specifically Machine Learning (ML) systems, and the GDPR. This month, the ICO published notes on data minimization and privacy-preserving techniques in ML systems, as well as on fully automated decision making in ML systems.
The GDPR's data minimization principle, codified in Article 5(1)(c), requires that personal data be "adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed". The ICO's commentary outlines several privacy-preserving techniques for an ML system's learning phase. The first is modifying the training data to reduce the extent to which it can be traced back to specific individuals, while retaining its utility for training well-performing models.
The second is a technique called federated learning, which allows different parties to train models on their own data and then combine some of the patterns those models have identified into a single, more accurate 'global' model, without ever sharing the underlying training data. For an ML system's inference phase, the commentary suggests converting personal data into less 'human-readable' formats and making inferences locally on the user's device, rather than in the service provider's cloud. Illustrative sketches of the first two techniques appear below.
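The ICO does not prescribe any particular method for the first technique, but a minimal sketch of the general idea might look like the following: perturbing and generalizing training records so they are harder to trace back to individuals. Everything here – the function names, the noise scale, the binning width, the toy data – is invented for illustration, not taken from the ICO's post.

```python
# Illustrative sketch only: one common way to make training data harder
# to trace back to individuals is to perturb numeric features and coarsen
# identifying values before training. Noise scale and bin width below are
# arbitrary demonstration choices, not recommendations.
import numpy as np

rng = np.random.default_rng(seed=0)

def perturb_features(X: np.ndarray, noise_scale: float = 0.1) -> np.ndarray:
    """Add small Gaussian noise to numeric features, trading a little
    model accuracy for reduced traceability to specific individuals."""
    return X + rng.normal(0.0, noise_scale * X.std(axis=0), size=X.shape)

def generalize_age(ages: np.ndarray, bin_width: int = 10) -> np.ndarray:
    """Coarsen an exact age into a decade-wide band (e.g. 37 -> 30)."""
    return (ages // bin_width) * bin_width

# Toy example: five records with two numeric features and an exact age.
X = rng.normal(size=(5, 2))
ages = np.array([23, 37, 41, 58, 64])

X_train = perturb_features(X)      # noisy features used for training
age_band = generalize_age(ages)    # coarse age band replaces exact age
```

The trade-off is explicit in the code: stronger perturbation or wider bins mean less traceability but also less utility for training.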
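For the second technique, here is a minimal sketch of the federated idea in the style of federated averaging, using an ordinary least-squares model on invented toy data. The parties, data, and weighting scheme are all assumptions for demonstration; the ICO's commentary describes the concept without code.

```python
# Minimal sketch of federated learning: each party fits a model on its
# own private data, and only the learned parameters (never the raw
# training data) are combined into a single 'global' model.
import numpy as np

rng = np.random.default_rng(seed=1)

def local_fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Ordinary least-squares fit on one party's local data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Three parties, each holding private samples from the same true model.
true_w = np.array([2.0, -1.0])
local_weights, local_sizes = [], []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    local_weights.append(local_fit(X, y))   # parameters leave the party
    local_sizes.append(len(y))              # raw X, y never do

# The 'global' model is a data-size-weighted average of local parameters.
global_w = np.average(local_weights, axis=0, weights=local_sizes)
print(global_w)  # close to [2.0, -1.0]
```

The privacy property the ICO highlights is visible in the data flow: only `local_weights` cross party boundaries, while each party's training examples stay on its own premises.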
The GDPR requires organizations to implement suitable safeguards when processing personal data to make solely automated decisions that have a legal or similarly significant effect on individuals. The safeguards include the data subject's right to receive meaningful information and explanation about the logic of the automated decision, to express his or her point of view, to contest the decision, and to obtain human review of it. The commentary points out that the complexity of ML systems affects an organization's ability to provide meaningful explanations to data subjects: if an ML system is too complex to explain, it may also be too complex to meaningfully contest, intervene in, review, or argue an alternative point of view against. As an example, the commentary describes a system that uses hundreds of features and a complex, non-linear model to make a prediction, which makes it difficult for a data subject to determine which variables or correlations to object to.
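To make the contrast concrete, the sketch below shows the kind of per-feature explanation that a simple linear scoring model yields directly, the sort of decomposition a data subject could actually contest. The feature names, weights, and applicant values are all hypothetical, and this is one illustration of explainability, not a method drawn from the ICO's post.

```python
# Illustrative sketch: for a simple linear scoring model, each feature's
# contribution to an automated decision can be read off directly.
# All names, weights, and values below are invented for demonstration.
import numpy as np

feature_names = ["years_employed", "missed_payments", "credit_utilization"]
weights = np.array([0.4, -1.2, -0.8])   # hypothetical learned weights
bias = 0.5

applicant = np.array([3.0, 2.0, 0.6])   # one applicant's feature values

# Each feature's contribution to the score is simply weight * value,
# so the data subject can see exactly what drove the decision.
contributions = weights * applicant
score = contributions.sum() + bias

for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.2f}")
print(f"decision score: {score:.2f}")
```

With hundreds of features feeding a non-linear model, no such clean per-feature decomposition exists, which is precisely the ICO's point about the difficulty of explaining, and therefore contesting, complex ML decisions.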
CLICK HERE to read the ICO’s blog post on data minimization in AI.
CLICK HERE to read the ICO’s blog post on automated decision making in AI.