But some facial recognition technologies will anonymise or even delete images once biometric vectors are extracted, meaning there may no longer be a link between the biometric information and an image, making future identification more difficult.
The duration for which material is retained also seems critical to any capacity for future identification.
The ‘landmark’ loophole
Further, not all biometric systems perform ‘identification’. Some perform other tasks such as demographic profiling, emotion detection, or categorisation.
Biometric information extracted from images may be insufficient for future identification because instead of creating a unique face template for biometric enrolment, the process may simply extract ‘landmarks’ for the sake of assessing some particular characteristic of the person in the image.
If biometric vectors are just landmarks extracted for profiling, and no image is retained, identification may be impossible. The European Data Protection Board has suggested that such information, i.e. ‘landmarks’, would not be subject to the more stringent protections for sensitive information in the European General Data Protection Regulation.
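To make the distinction concrete, the sketch below contrasts the two kinds of output using the open-source face_recognition Python package. The package, file name and printed values are illustrative assumptions only; none of the systems discussed in this article are known to use them.

```python
# Illustrative sketch only: the open-source face_recognition package stands in
# for the proprietary systems discussed in this article.
import face_recognition

image = face_recognition.load_image_file("survey_photo.jpg")  # hypothetical file name

# 'Landmarks': coordinates of facial features (chin, eyebrows, nose, eyes, lips).
# These support assessing a characteristic of the face (e.g. applying a filter or
# estimating age), but are not, on their own, a template enrolled for identification.
landmarks = face_recognition.face_landmarks(image)

# 'Template'/encoding: a 128-dimensional vector intended to be distinctive of a
# face, which can later be compared against other vectors to find a match.
encodings = face_recognition.face_encodings(image)

if landmarks and encodings:
    print(list(landmarks[0].keys()))  # e.g. ['chin', 'left_eyebrow', ...]
    print(len(encodings[0]))          # 128
```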
This raises the question of whether images collected for a system that does not lend itself to identification would constitute personal information (let alone sensitive information) under the Privacy Act at all.
Nevertheless, these technologies are often used for various types of ‘facial analysis’ and profiling, for instance inferring a subject’s age, gender, race, emotion, sexuality or other attributes.
Singled out at 7-Eleven
Before the Clearview AI determination, the OAIC considered 7-Eleven’s use of facial recognition in a matter that touched on these questions more closely.
7-Eleven was collecting facial images and biometric information as part of a customer survey system. The biometric system’s primary purpose was to infer the age and gender of survey participants. However, the system was also capable of recognising whether the same person had made two survey entries within a 20-hour period, for the sake of survey quality control.
Here the Commissioner deemed the collection a breach of the APPs because there was not sufficient notice (APP 5), and collection was not reasonably necessary for the purpose of flagging potentially false survey results (APP 3.3).
7-Eleven argued that the images collected were not personal information because they could not be linked back to a particular individual, and therefore were not reasonably identifiable.
The Commissioner found that the images were personal information, however, because the biometric vectors still enabled matching of survey participants (for the sake of identifying multiple survey entries), and therefore that the collection was for the purpose of biometric identification.
This reasoning drew on the idea of ‘singling out’, whereby an individual can be distinguished from all other members of a group of persons.
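A rough sketch of this kind of matching is below: two survey entries are flagged as coming from the same (unnamed) person by comparing their face vectors. The package, file names and distance threshold are assumptions made for illustration, not details drawn from the determination.

```python
# Illustrative sketch only: the package, file names and threshold are assumptions,
# not details from the OAIC determination.
import face_recognition
import numpy as np

def encoding_from(path):
    """Return the first face encoding (128-dimensional vector) found in the image at `path`."""
    image = face_recognition.load_image_file(path)
    return face_recognition.face_encodings(image)[0]

entry_1 = encoding_from("survey_entry_1.jpg")  # hypothetical file names
entry_2 = encoding_from("survey_entry_2.jpg")

# Euclidean distance between the two face vectors; 0.6 is the library's common
# default tolerance, used here purely for illustration.
if np.linalg.norm(entry_1 - entry_2) < 0.6:
    # The system can 'single out' a repeat participant without holding any name,
    # account or other link to a civil identity.
    print("Likely the same (unnamed) person made both entries")
```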
In this case, the Commissioner also held that the facial images were themselves biometric information, in that they were used in a biometric identification system.
Compared to the Clearview example, the finding here that images and templates were both personal information and sensitive information is somewhat less robust.
There has been substantial critique of approaches to identifiability premised on ‘singling out’, because the capacity to single out really says nothing about the identifiability of the individual. Even the assignment of a unique identifier within the system is not enough to ‘identify’ a person – there would still need to be a way to connect that identifier with an actual person or civil identity.
Finding that information is personal because of its capacity to single out, but without necessarily leading to identification, may be a desirable interpretation of the scope of data protection, but it is not settled as a matter of law.
Singling out remains important because it still enables decisions that affect the opportunities or life chances of people – even without knowing who they are. But the proposition that singling out, alone, brings data processing within the scope of data protection law is yet to be unequivocally endorsed by the courts.
Considering the Australian Federal Court’s somewhat parsimonious approach to the definition of personal information in the 2017 Telstra case, this finding might not hold up to judicial scrutiny.
The TikTok settlement
The 2021 TikTok settlement under the US state of Illinois’ BIPA (Biometric Information Privacy Act) looked at similar issues. There, TikTok claimed that the facial landmark data it collected and the demographic data it generated, used for facial filters and stickers as well as for targeted advertising, were anonymous and incapable of identifying individuals. But the matter was settled, and the significance of anonymity was not further clarified.
Identifying and non-identifying biometric processes
BIPA has no threshold requirement for personal information, and explicitly does not govern the images from which biometric information is drawn.
It may be, then, that BIPA takes a more catch-all approach to biometric information irrespective of application, in contrast to the Australian and European approaches under general data protection law, which clearly distinguish identifying and non-identifying biometric processes.