Researchers at Stanford have developed controversial facial analysis software that can distinguish sexual orientation from facial morphology with striking accuracy.38 Using VGG-Face, a deep neural network facial recognition model composed of multiple layers of interconnected artificial neurons arranged to mimic the mammalian brain, the investigators extracted facial features from self-taken photographs of 35,326 self-reported homosexual and heterosexual men and women posted on public profiles of U.S. online dating websites.39 A logistic regression model was then trained to classify sexual orientation from 500 values derived from the VGG-Face scores. Human judges correctly predicted homosexuality in 61 percent of male images and 54 percent of female images, whereas the software was correct in 85 percent and 70 percent, respectively. Computational facial analysis revealed subtle information about sexual orientation imperceptible to humans, with homosexual faces tending toward gender neutrality: men possessed narrower jaws, longer noses, and larger foreheads, whereas women had larger jaws and smaller foreheads. [See Figure, Supplemental Digital Content 1, which shows composite faces and the average facial landmarks built by averaging faces classified as most and least likely to be gay (image reprinted with permission of the American Psychological Association from Wang Y, Kosinski M. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. J Pers Soc Psychol. 2018;114:246–257), http://links.lww.com/PRS/D508.] This software foreshadows the privacy threats that accompany such tools: facial recognition technology of this kind could be stigmatizing or even dangerous for individuals wishing to keep their sexual orientation private, particularly in authoritarian societies where homosexuality is shunned or criminalized.
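As a conceptual illustration of the classification step described above, the sketch below trains a plain logistic regression by gradient descent on synthetic 500-value feature vectors. It is a minimal stand-in, not the authors' actual pipeline: the random features merely take the place of real VGG-Face descriptors, and the labels are generated from an arbitrary linear rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 500 descriptor values per photograph; real
# inputs would come from a pretrained VGG-Face network's deep features.
n, d = 1000, 500
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)                      # hypothetical ground-truth rule
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient-descent logistic regression on the feature vectors.
w = np.zeros(d)
lr = 0.1
for _ in range(200):
    p = sigmoid(X @ w)                           # predicted probabilities
    w -= lr * X.T @ (p - y) / n                  # gradient of the log-loss

accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)  # training accuracy
```

The point of the sketch is only that the classifier itself is simple; the predictive power in the study came from the deep features, not from the logistic regression layered on top of them.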
Advancements in computer recognition of human facial emotions can be used to quantify reactions, reveal subtle thoughts, or detect signs of lying (Fig. 3). This information might be used during job interviews, or by physicians to identify malingering, such as exaggerated pain reporting by patients seeking narcotics.40 An Israeli start-up company, Faception, markets machine learning software that uses advanced facial analysis to predict personality and behavior, claiming to quantify traits such as extroversion, intelligence quotient, and the likelihood of committing a crime.41
The number of individuals who have undergone plastic surgery continues to grow. In 2017, the American Society of Plastic Surgeons reported a total of 17.5 million cosmetic procedures, including 15.7 million minimally invasive procedures of the face such as botulinum toxin injections, soft-tissue fillers, chemical peels, and laser hair removal. Of the 1.8 million invasive surgical cosmetic procedures, facial procedures such as rhinoplasty and blepharoplasty were among the top five.42 Given the prevalence of plastic surgery and the rapid proliferation of facial recognition technology for personal biometric authentication and public security systems, patients may increasingly ask plastic surgeons whether a procedure will affect their ability to be recognized by facial recognition technology.
Plastic surgery poses a major challenge for current facial recognition technology and is recognized as a distinct categorical limitation of many facial recognition algorithms, alongside pose, illumination, expression, aging, and disguise with makeup.43 Surgical procedures introduce nonlinear alterations to facial landmarks that can make patients difficult for facial biometric systems to identify. Singh et al. pioneered research in this field by developing the first plastic surgery database, which contains one preoperative and one postoperative frontal photograph, in neutral expression, for each of 900 patients (Table 4).44 These patients underwent procedures such as dermabrasion, brow lift, otoplasty, blepharoplasty, rhinoplasty, and rhytidectomy.45,46 The group evaluated six standard facial recognition algorithms by training each on preoperative and postoperative photographs of 360 subjects and then testing on the remaining 540 subjects. Recognition accuracy was poor, ranging from 27 to 54 percent, an absolute reduction of approximately 30 percent. In particular, “global” procedures such as skin resurfacing and rhytidectomy produced large facial node alterations that degraded performance, with accuracy ranging from 18 to 54 percent. In contrast, “local” procedures involving only the nose, chin, eyelids, cheek, lips, ears, or forehead had a smaller effect. In this report, Singh et al. encouraged the research community to develop higher-fidelity facial recognition algorithms that account for the nonlinear variations resulting from plastic surgery of the face.
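The evaluation protocol described above can be sketched abstractly. The following toy example stands in for any embedding-based matcher rather than the specific algorithms Singh et al. tested: each postoperative “probe” embedding must be matched to its preoperative “gallery” counterpart, with surgery modeled as additive perturbation, and rank-1 accuracy is measured on held-out subjects.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: one preoperative and one postoperative embedding per
# subject; surgery is modeled crudely as a random perturbation.
n_subjects, d = 900, 64
pre = rng.normal(size=(n_subjects, d))
post = pre + rng.normal(scale=0.8, size=(n_subjects, d))  # "surgical" drift

def rank1_accuracy(gallery, probes):
    """Match each postoperative probe to its nearest preoperative face."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    matches = np.argmax(p @ g.T, axis=1)          # cosine nearest neighbor
    return float(np.mean(matches == np.arange(len(probes))))

# Evaluate on 540 held-out subjects, mirroring the 360/540 split.
test_idx = np.arange(360, 900)
acc = rank1_accuracy(pre[test_idx], post[test_idx])
```

In this idealized setting the matcher succeeds almost always; the reported 27 to 54 percent accuracies reflect how much more disruptive real surgical change is than simple additive noise.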
Since the pioneering report by Singh et al., other groups have sought to improve existing facial recognition algorithms or develop new ones tailored to plastic surgery patients. In 2012, Aggarwal et al. used a part-wise, sparse representation approach to match individual facial regions instead of performing holistic facial analysis, achieving a matching performance of 78 percent in patients who had undergone plastic surgery.47 Jillela and Ross combined independently processed ocular information with standard holistic face recognition to achieve 87 percent recognition.48 In 2013, Liu et al. achieved an accuracy of 86 percent using a novel method that divides preoperative and postoperative photographs into face patches and then fuses the patches to compensate for localized appearance changes caused by plastic surgery.49 Using Singh et al.’s facial database, Bhatt et al. developed a multiobjective evolutionary granular algorithm with an overall accuracy of 87 percent,50 and Moeini et al. reconstructed a three-dimensional model from two-dimensional frontal images to extract facial depth vectors, combining these with two-dimensional texture vectors to achieve accuracy rates of 92 to 98 percent.51
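The intuition behind the patch-based approaches above can be shown with a toy example: when surgery alters only one facial region, holistic similarity drops, whereas fusing per-patch scores largely ignores the localized change. The vectors below are synthetic stand-ins for regional face descriptors, and the median fusion rule is an illustrative simplification of the more sophisticated fusion schemes in the cited papers.

```python
import numpy as np

rng = np.random.default_rng(2)

# A face embedding split into regional blocks; only one region (say, the
# nose after rhinoplasty) changes postoperatively.
d, n_patches = 64, 8
pre = rng.normal(size=d)
post = pre.copy()
post[0:8] += rng.normal(scale=3.0, size=8)        # heavy local alteration

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The holistic score is dragged down by the single altered region...
holistic = cosine(pre, post)

# ...while patch-wise scoring isolates the change, and median fusion
# effectively ignores the one outlying patch.
patch_scores = [cosine(p, q) for p, q in zip(np.split(pre, n_patches),
                                             np.split(post, n_patches))]
fused = float(np.median(patch_scores))
```

Here the fused score stays near 1.0 while the holistic score falls well below it, which is the essential mechanism by which part-wise matching tolerates “local” procedures.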
The accuracy of future facial recognition algorithms will depend largely on the quality and comprehensiveness of training data sets. Facial recognition training databases containing patients who have undergone facial procedures should be expanded and balanced with more photographs across ages, ethnicities, and procedures. Photographs of patients who have undergone multiple simultaneous procedures would help algorithms account for cumulative effects during training. It would also be beneficial to include facial photographs of patients who have undergone facial surgery for nonaesthetic reasons, such as head and neck reconstruction, craniomaxillofacial reconstruction, burn reconstruction, gender-affirmation surgery, and vascularized composite transplantation of the face. As facial recognition technologies become more sophisticated, they could one day be correlated with patient-reported outcome measures, used as a performance metric for plastic surgeons, or applied to predict the likelihood of complications following facial surgery.
We recommend that the discussion of facial biometric identification become a twenty-first-century addition to the routine consultation or consent process for patients seeking aesthetic facial surgery. Patients should be made aware that personal or commercial facial recognition technology may fail to recognize their biometric faceprint following aesthetic facial surgery, and that the probability of this failure may depend on the type of procedure they undergo. Plastic surgeons should be able to explain that patients may not be able to unlock their smartphones with their face immediately after surgery because of dressings, swelling, and the physical changes produced by the procedure. However, because smartphones such as the Apple iPhone X are configured to learn from erroneous recognition and continually update the user’s stored faceprint with every device unlock, the device should eventually learn the user’s new appearance.
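The continual-update behavior described above can be sketched generically as adaptive template matching. Apple’s actual Face ID implementation is proprietary, so the acceptance threshold, blend weight, and embedding model below are illustrative assumptions only, not real parameters of any device.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed parameters for the sketch (not real Face ID values).
d = 32             # embedding dimension
THRESHOLD = 0.80   # assumed cosine-similarity acceptance threshold
ALPHA = 0.2        # assumed blend weight for each accepted unlock

template = rng.normal(size=d)
template /= np.linalg.norm(template)
original = template.copy()

def try_unlock(template, sample):
    """Return the (possibly updated) template and whether unlock succeeded."""
    sample = sample / np.linalg.norm(sample)
    score = float(template @ sample)
    if score >= THRESHOLD:
        # Accepted: nudge the stored faceprint toward the new appearance.
        template = (1 - ALPHA) * template + ALPHA * sample
        template /= np.linalg.norm(template)
        return template, True
    return template, False

# Simulate an appearance drifting gradually toward a postoperative face.
target = original + 0.6 * rng.normal(size=d)
target /= np.linalg.norm(target)
ok = False
for step in range(50):
    t = (step + 1) / 50
    face = (1 - t) * original + t * target   # gradual healing/change
    template, ok = try_unlock(template, face)

final_similarity = float(template @ target)
```

Because each intermediate appearance stays close to the evolving template, every unlock succeeds and the stored faceprint ends up near the postoperative face; an abrupt change (bandages, large swelling) would fall below the threshold and block both unlocking and adaptation, which is consistent with the immediate postoperative failures described above.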
In contrast, patients should also be made aware that large-scale commercial applications of facial recognition technology, such as government biometric passports, rely on historical user photographs and may fail to match an individual’s postoperative face with a previously taken preoperative photograph. Sufficient advance warning should therefore be given to such organizations, and the patient may be required to provide new identification photographs after surgery to avoid unexpected delays or questioning by security bodies. We believe that plastic surgeons, rather than patients, should initiate this discussion, to avoid surprises or delays on the patient’s part that could lead to litigation. Plastic surgeons could offer to provide documentation certifying that the individual underwent cosmetic surgery that may alter their ability to be recognized by either a facial recognition system or a human border control officer. Of note, facial recognition algorithms may be more sensitive than humans to physical deviations, so erroneous or failed matches may occur at a higher rate in biometric border settings. In the Republic of Korea, which has a flourishing cosmetic surgery tourism industry, some clinics offer certificates to surgical tourists from abroad verifying the individual’s identity with passport number, duration of stay, name and location of the clinic or hospital, and the clinic or hospital’s official seal.52 In the United Kingdom, following orthognathic surgery, Her Majesty’s Passport Office advises:
If the patient’s facial profile alters, the patient would need to consider applying for a new passport, as they may encounter problems using their presurgery issued passport after the surgery. We would suggest that patients make that decision after the surgery. They would require a supporting letter from their consultant explaining the situation, and the application form would have to be fully countersigned.53
Ryan et al. recommend that all orthognathic patients be informed during the consent process that their biometric profiles may be affected.53 Surgeons should offer to write a supporting letter to the appropriate administrative body to confirm the patient underwent an elective facial procedure, so patients may obtain a new passport following surgery. It may be necessary to disclose before surgery that the plastic surgeon is not responsible for any financial or time resources associated with updating biometric profiles as a consequence of a cosmetic procedure.
Lastly, current facial recognition algorithms specifically developed to improve performance following plastic surgery are trained using facial databases constructed from publicly available photographs obtained before and after surgery. Plastic surgeons posting before-and-after photographs online with patient consent should be aware of the possibility that these photographs could be compiled into data training sets for developing future facial recognition technologies.
Facial recognition technology is an intriguing and powerful application of artificial intelligence, pattern recognition, and image analysis, with expanding implications for security, recreation, and privacy. As with many disruptive new technologies, market forces will lead to improved precision, efficiency, and adoption.54 In the coming years, facial recognition technology will become increasingly prevalent, and policy makers should prioritize the establishment of clear regulatory statutes. As plastic surgeons, we have the power to change faces and consequently identities. Plastic surgeons should have a conceptual understanding of how facial recognition technology works, the current landscape of facial recognition technology in society, and the relevance of facial recognition technology to patients following surgical procedures of the face.
2. Kanade T. Picture Processing System by Computer Complex and Recognition of Human Faces (dissertation). Kyoto, Japan: Kyoto University; 1973.
3. Sirovich L, Kirby M. Low-dimensional procedure for the characterization of human faces. J Opt Soc Am A 1987;4:519–524.
4. Garg D, Sharma AK. Face recognition. IOSR J Eng. 2012;2:128–133.
5. Li SZ, Jain AK. Introduction. In: Li SZ, Jain AK, eds. Handbook of Face Recognition. 2nd ed. London: Springer-Verlag; 2011.
6. Zhao W, Chellappa R, Phillips PJ, Rosenfeld A. Face recognition: A literature survey. ACM Comput Surv. 2003;35:399–458.
8. Jain AK, Prabhakar S, Chen S. Combining multiple matchers for a high security fingerprint verification system. Pattern Recognit Lett. 1999;20:1371–1379.
9. Jain AK, Ross A, Prabhakar S. An introduction to biometric recognition. IEEE Trans Circuits Syst Video Technol. 2004;14:4–20.
10. Turk M, Pentland A. Eigenfaces for recognition. J Cogn Neurosci. 1991;3:71–86.
11. Penev PS, Atick JJ. Local feature analysis: A general statistical theory for object representation. Network 1996;7:477–500.
12. Lawrence S, Giles CL, Tsoi AC, Back AD. Face recognition: A convolutional neural-network approach. IEEE Trans Neural Netw. 1997;8:98–113.
13. Belhumeur PN, Hespanha JP, Kriegman DJ. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell. 1997;19:711–720.
17. Taigman Y, Yang M, Ranzato M, Wolf L. DeepFace: Closing the gap to human-level performance in face verification. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, June 23–28, 2014: Columbus, Ohio; 1701–1708.
20. Buckley B, Hunter M. Say cheese! Privacy and facial recognition. Comput Law Secur Rev. 2011;27:637–640.
22. Bennett KA. Can facial recognition technology be used to fight the new war against terrorism? NC JOLT 2001;16.
23. Gaines S, Williams S; Georgetown Law Center on Privacy & Technology. The perpetual line-up. October 18, 2016. Available at: https://www.perpetuallineup.org/. Accessed April 6, 2018.
24. O’Toole AJ, An X, Dunlop JP, Natu VS, Phillips PJ. Comparing face recognition algorithms to humans on challenging tasks. ACM Trans Appl Percept. 2012;9:1–15.
25. Blanton A, Allen KC, Miller T, Kalka ND, Jain AK. A comparison of human and automated face verification accuracy on unconstrained image sets. Paper presented at: IEEE Conference on Computer Vision and Pattern Recognition Workshops; June 26–July 1, 2016; Las Vegas, Nev.
26. Klontz JC, Jain AK. A case study on unconstrained facial recognition using the Boston Marathon bombings suspects. Technical report. East Lansing, Mich.: Michigan State University; 2013.
27. Hirose M. Privacy in public spaces: The reasonable expectation of privacy against the dragnet use of facial recognition technology. Conn Law Rev. 2017;49:1591–1620.
28. Royakkers L, Timmer J, Kool L, van Est R. Societal and ethical issues of digitization. Ethics Inform Technol. 2018;20:127–142.
29. Nissenbaum H. Protecting privacy in an information age: The problem of privacy in public. Law Philos. 1998;17:559–596.
30. Brey P. Ethical aspects of facial recognition systems in public places. J Info Commun Ethics Soc. 2004;2:97–109.
34. Kruszka P, Addissie YA, McGinn DE, et al. 22q11.2 deletion syndrome in diverse populations. Am J Med Genet A 2017;173:879–888.
35. Valentine M, Bihm DCJ, Wolf L, et al. Computer-aided recognition of facial attributes for fetal alcohol spectrum disorders. Pediatrics 2017;140:e20162028.
36. Kong X, Gong S, Su L, Howard N, Kong Y. Automatic detection of acromegaly from facial photographs using machine learning methods. EBioMedicine 2018;27:94–102.
37. Ferry Q, Steinberg J, Webber C, et al. Diagnostically relevant facial gestalt information from ordinary photos. Elife 2014;3:e02020.
38. Wang Y, Kosinski M. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. J Pers Soc Psychol. 2018;114:246–257.
39. Parkhi OM, Vedaldi A, Zisserman A. Deep face recognition. Paper presented at: British Machine Vision Conference 2015; September 7–10, 2015; Swansea, United Kingdom.
40. Cao NT, Ton-That AH, Choi HI. An effective facial expression recognition approach for intelligent game systems. Int J Comput Vis Robot. 2016;6:223–234.
41. Buolamwini J, Gebru T. Gender shades: Intersectional accuracy disparities in commercial gender classification. Proc Mach Learn Res. 2018;81:77–91.
43. Nappi M, Ricciardi S, Tistarelli M. Deceiving faces: When plastic surgery challenges face recognition. Image Vision Comput. 2016;54:71–82.
45. Singh R, Vatsa M, Bhatt HS, Bharadwaj S, Noore A, Nooreyezdan SS. Plastic surgery: A new dimension to face recognition. IEEE Trans Inform Forens Security 2010;5:441–448.
46. Singh R, Vatsa M, Noore A. Effect of plastic surgery on face recognition: A preliminary study. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Miami, Fla.; June 20–25, 2009: 72–77.
47. Aggarwal G, Biswas S, Flynn PJ, Bowyer KW. A sparse representation approach to face matching across plastic surgery. In 2012 IEEE Workshop on the Applications of Computer Vision, January 9–12, 2012: Breckenridge, Colo.; 113–119.
48. Jillela R, Ross A. Mitigating effects of plastic surgery: Fusing face and ocular biometrics. In 2012 IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS), September 23–27, 2012: Arlington, Va.; 402–411.
49. Liu J, Harris A, Kanwisher N. Perception of face parts and face configurations: An FMRI study. J Cogn Neurosci. 2010;22:203–211.
50. Bhatt HS, Bharadwaj S, Singh R, Vatsa M. Recognizing surgically altered face images using multiobjective evolutionary algorithm. IEEE Trans Inform Forens Security 2013;8:89–100.
51. Moeini A, Faez K, Moeini H. Face recognition across makeup and plastic surgery from real-world images. J Electronic Imag. 2015;24:053028.
53. Ryan PJ, Turner MJ, Gibbons AJ, Ricanek K Jr. Orthognathic surgery and the biometric e-passport: A change in surgical practice. Br J Oral Maxillofac Surg. 2014;52:384.
54. Kemelmacher-Shlizerman I, Seitz SM, Miller D, Brossard E. The MegaFace benchmark: 1 million faces for recognition at scale. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, June 27–30, 2016: Las Vegas, Nev.; 4873–4882.