Abstract
Since the publication of Karl Pearson's 1901 paper "On lines and planes of closest fit to systems of points in space," principal component analysis (PCA) has become an important statistical method across research fields (e.g., archeology, atmospheric sciences, psychology, and physical anthropology) where large datasets of observations are collected. In studies of human facial difference, PCA works by producing statistical descriptions of those differences, which are later used to support common-sense racial distinctions. In doing so, it establishes standards of normality for different races and, by comparing these "normal" faces, naturalizes racial difference. The present paper explores the influence of Pearson's PCA on the theory and development of applications for face perception and recognition. To that end, it focuses on three central cases in the development of facial recognition technologies (FRT): (1) 'eigenvector' algorithms developed, among others, by Turk and Pentland (1991); (2) Valentine's (1991) influential "face space" theory of face perception; and (3) recent FRT such as Facebook's DeepFace (2014 to present). As these cases show, Pearson's technique has deeply shaped contemporary FRT: PCA guides how computer scientists, forensic scientists, and psychologists understand human facial difference, as well as the perception of those differences. More generally, telling the story of PCA shows why racial categorization remains central in contemporary identification technologies and practices.
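Because case (1), the eigenvector (eigenface) approach, is the most direct application of Pearson's technique in FRT, a minimal sketch may clarify what "producing statistical descriptions of facial difference" means in practice. The Python fragment below is an illustration under stated assumptions, not Turk and Pentland's implementation: it uses synthetic random arrays in place of face images, and all names and dimensions are invented for the example.

```python
import numpy as np

# Minimal sketch of the eigenface idea (Turk & Pentland, 1991): PCA over
# face images treated as flattened pixel vectors. The data here are
# synthetic random arrays standing in for real photographs, and every
# name and dimension is illustrative, not taken from the original paper.

rng = np.random.default_rng(0)
n_faces, height, width = 40, 32, 32            # hypothetical tiny dataset
faces = rng.random((n_faces, height * width))  # one flattened image per row

mean_face = faces.mean(axis=0)                 # the statistical "average face"
centered = faces - mean_face                   # each face as deviation from it

# SVD of the centered stack yields the principal components directly;
# the rows of vt are the "eigenfaces", ordered by explained variance.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 10                                         # keep the k leading components
eigenfaces = vt[:k]

# Every face is now summarized by k coordinates in a low-dimensional
# "face space": the statistical description of difference at issue above.
coords = centered @ eigenfaces.T               # shape: (n_faces, k)
reconstruction = mean_face + coords @ eigenfaces

print("face-space coordinates:", coords.shape)
print("max reconstruction error:", np.abs(faces - reconstruction).max())
```

On this picture, a "normal" face is simply the mean of the sample, and every other face is measured as a set of deviations from it; this is the sense in which PCA establishes a standard of normality against which difference is quantified.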
Self-Designated Keywords:
Statistics, Race, Algorithms, Facial Recognition Technology, Identification