Facial Recognition Technology

A Primer for Plastic Surgeons

Zuo, Kevin J., M.D.; Saun, Tomas J., M.D.; Forrest, Christopher R., M.D., M.Sc.

Plastic and Reconstructive Surgery: June 2019 - Volume 143 - Issue 6 - p 1298e–1306e
doi: 10.1097/PRS.0000000000005673
Plastic Surgery Focus: Technology and Innovations

Summary: The face is arguably the most distinctive and defining feature of the human body. From birth, humans are conditioned to perceive, interpret, and react to information conveyed by faces both familiar and unfamiliar. Although face recognition is routine for humans, only recently has it become possible for a computer to accurately recognize a human face in an image or video. With advances in artificial intelligence, image capture technology, and Internet connectivity, facial recognition technology has entered the forefront of personal and commercial technology. Plastic surgeons should be prepared to answer questions from patients about the fundamentals of facial recognition technology and the potential effects of plastic surgery on its performance. This article provides an overview of facial recognition technology, describes its present applications, discusses its relevance within the field of plastic surgery, and provides recommendations for plastic surgeons to consider during preoperative discussions with patients.

Toronto, Ontario, Canada

From the Division of Plastic and Reconstructive Surgery, Department of Surgery, and the Institute of Biomaterials and Biomedical Engineering, Faculty of Applied Science and Engineering, University of Toronto; and the Division of Plastic and Reconstructive Surgery, The Hospital for Sick Children.

Received for publication June 27, 2018; accepted December 12, 2018.

Disclosure: The authors have no financial interest to declare in relation to the content of this article.

Supplemental digital content is available for this article. Direct URL citations appear in the text; simply type the URL address into any Web browser to access this content. Clickable links to the material are provided in the HTML text of this article on the Journal’s website (www.PRSJournal.com).

A “Hot Topic Video” by Editor-in-Chief Rod J. Rohrich, M.D., accompanies this article. Go to PRSJournal.com and click on “Plastic Surgery Hot Topics” in the “Digital Media” tab to watch.

Christopher R. Forrest, M.D., M.Sc., Division of Plastic and Reconstructive Surgery, The Hospital for Sick Children, 555 University Avenue, Suite 5430, Toronto, Ontario M5G 1X8, Canada, christopher.forrest@sickkids.ca, Instagram: @uoftprs

In 2017, Apple, Inc. (Cupertino, Calif.), unveiled the iPhone X with a revolutionary feature: biometric authentication using Face ID facial recognition.1 Although facial recognition software has existed for decades, commercial applications have become increasingly prevalent with recent advances in computational processing power, image capture technology, and Internet connectivity.2–6 Like other biometric security modalities already in wide use, such as fingerprinting or iris scanning, facial recognition technology verifies or identifies an individual based on a unique physiologic signature.7,8 Unique to facial recognition technology is its ability to be deployed from a distance (without physical contact) and unobtrusively (without user awareness).6,9 With the proliferation of facial recognition technology in society, both visibly in the hands of consumers and covertly on the screens of security organizations, individuals will undoubtedly have questions about the effects of plastic surgery on the recognition of their face. Plastic surgeons should be prepared to discuss this technology.

WHAT IS FACIAL RECOGNITION TECHNOLOGY?

Facial recognition technology is a form of biometric measurement that verifies or identifies a person by “who they are,” rather than “what they possess,” such as an identification badge, or “what they remember,” such as a password (Table 1).9 Biometric measurements cannot be lost or forgotten, are difficult to forge, and have no acquisition costs for users. Facial recognition technology represents the combined application of pattern recognition and image analysis using computers. Since the 1970s, a multitude of experimental and commercial facial recognition algorithms have been developed and refined in parallel with improved image capture quality, large data training sets, and computer processing power.2,3,10–14 Currently, 65 facial recognition technology algorithms are being evaluated by the National Institute of Standards and Technology in the 2018 Face Recognition Vendor Test, with a performance report due in September of 2018.15

Table 1

The fundamental workflow of an automated facial recognition system involves the following: identifying the presence of a human face in a photograph or video, segmenting or normalizing the face from the “nonface” background, extracting predetermined facial features into a unique mathematical representation, or “faceprint,” and then matching the faceprint to a profile in a facial recognition technology database.5,6 A faceprint is unique to an individual and is generated by converting facial anthropometric textures, shapes, or landmarks into a computer representation (Fig. 1). By comparing an individual’s captured faceprint to a database of reference faceprints, facial recognition technology can be used either to verify an individual’s identity (1:1 matching—“Is this individual who they claim to be?”) or to uniquely identify an individual (1:N matching—“Who is this individual?”) (Table 2).
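
For readers who would like a concrete sense of this workflow, the following minimal sketch implements the detect, encode, and match steps with the open-source face_recognition (dlib-based) library, which is unrelated to the commercial systems described in this article; the image file names, enrolled identities, and 0.6 distance threshold are illustrative assumptions.

```python
# Minimal sketch of the detect -> encode -> match pipeline described above,
# using the open-source face_recognition (dlib) library. File names, the
# reference database, and the 0.6 distance threshold are illustrative only.
import face_recognition
import numpy as np

# 1. Detect and encode reference faces ("enrollment") into 128-d faceprints.
gallery = {}
for name, path in [("alice", "alice.jpg"), ("bob", "bob.jpg")]:  # hypothetical files
    image = face_recognition.load_image_file(path)
    locations = face_recognition.face_locations(image)               # find the face
    encoding = face_recognition.face_encodings(image, locations)[0]  # faceprint (assumes one face found)
    gallery[name] = encoding

# 2. Detect and encode the face in a newly captured image (the "probe").
probe_image = face_recognition.load_image_file("unknown.jpg")  # hypothetical file
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# 3a. Verification (1:1): "Is this individual who they claim to be?"
claimed = "alice"
distance = face_recognition.face_distance([gallery[claimed]], probe_encoding)[0]
print(f"1:1 match with {claimed}: {distance < 0.6} (distance {distance:.3f})")

# 3b. Identification (1:N): "Who is this individual?" -> closest gallery faceprint.
names = list(gallery.keys())
distances = face_recognition.face_distance([gallery[n] for n in names], probe_encoding)
best = int(np.argmin(distances))
print(f"1:N best match: {names[best]} (distance {distances[best]:.3f})")
```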

Table 2

Fig. 1

CURRENT APPLICATIONS

Personal Device Biometric Authentication

The Apple iPhone X features Face ID biometric authentication, which uses facial depth image acquisition to unlock the smartphone. The TrueDepth camera projects 30,000 invisible infrared points onto the user’s face to form a depth map and a two-dimensional infrared image. These data are transformed and compared to the user’s enrolled facial data, which are captured across a variety of poses at the time of smartphone setup; if a match is attained, the phone unlocks. If the facial recognition technology fails to recognize the user but the correct passcode is entered, an additional photograph is captured and the system “learns from its mistake.” Apple’s Face ID adapts to changes in appearance such as aging, facial hair, makeup, or fashion accessories. With Face ID, the probability of an impostor unlocking the phone is one in 1 million, compared to one in 50,000 with a fingerprint, thus reducing the risk of fraud or exploitation.16 Apple acknowledges that the software is less effective for siblings (including twins) and for children younger than 13 years, whose facial features have not yet fully matured.
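
Apple has not published the internal details of Face ID, so the following is a deliberately simplified conceptual sketch of the behavior described above: a new face embedding is compared against enrolled templates, the device unlocks above a similarity threshold, and a new capture is added to the templates when the passcode fallback confirms the user. The cosine-similarity measure, threshold value, and class structure are assumptions for illustration only.

```python
# Conceptual sketch only: Apple's actual Face ID matching is proprietary.
# This illustrates the adaptive pattern described in the text.
import numpy as np

THRESHOLD = 0.8  # illustrative similarity cutoff; Apple's real criteria are not public

def cosine_similarity(a, b):
    """Similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class FaceUnlock:
    def __init__(self, enrolled_embeddings):
        # Embeddings captured across several poses at enrollment (setup) time.
        self.templates = list(enrolled_embeddings)

    def attempt_unlock(self, new_embedding):
        # Unlock if the new capture is similar enough to any stored template.
        best = max(cosine_similarity(t, new_embedding) for t in self.templates)
        return best >= THRESHOLD

    def passcode_fallback(self, new_embedding, passcode_correct):
        # "Learns from its mistake": if recognition failed but the correct
        # passcode was entered, keep the new capture as an additional template.
        if passcode_correct:
            self.templates.append(new_embedding)
```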

Social Media and Recreation

Since 2010, Facebook has used facial recognition technology to facilitate automated tagging of friends in uploaded photographs. Facebook’s DeepFace neural network features nine layers of interconnected artificial neurons with 120 million connection weights trained on 4.4 million face images uploaded by users.17 The system performs with 97.25 percent accuracy, just shy of the 97.50 percent accuracy of human face recognition.18 In Russia, an app called FindFace can identify individuals in photographs taken on the street and link them to profiles on a social network. Snapchat, a smartphone app, features augmented reality selfie filters that recognize one or more user faces in the camera field and change the facial appearance of users in real time during video chat. The Apple iPhone X smartphone also debuted a new Animoji feature that superimposes the user’s facial expressions in real time onto a three-dimensional animated emoji.

Security and Surveillance

One of the greatest potential uses of facial recognition technology is public security and surveillance. Facial biometric scanning is increasingly being implemented at border controls in train stations and airports around the world.19,20 In 2018, Orlando International Airport became the first U.S. airport to fully deploy the U.S. Customs and Border Protection Biometric Entry and Exit Program, which uses facial recognition technology to facilitate arrivals and departures of international travelers.21 These systems have improved passenger flow while screening for identity fraud and potential criminals or terrorists. In addition to travel hubs, closed-circuit video cameras equipped with automatic face scanning software have been deployed at large-scale events, such as music festivals, the Super Bowl, and the 2008 Beijing Summer Olympics. Potential suspects or wanted individuals flagged by closed-circuit video facial recognition technology are passed on to human operators, who decide whether to pursue further investigation.22 An estimated 117 million American adults are present in law enforcement face recognition networks, with 80 percent of photographs sourced from government identification documents.23 Because large-scale live video surveillance applications of facial recognition technology operate in dynamic real-world conditions with variable illumination, inconsistent pose, motion, and no active user participation, accuracy rates remain limited and inferior to those of human operators.24–26 Unsurprisingly, concerns over privacy infringement have been raised, and fears of a transition to an Orwellian society have been expressed.27–31

Marketing and Advertising

Automated facial analysis is being used to facilitate targeted marketing by predicting what a customer is likely to purchase. In China, fast food restaurants are testing “personalized” order prediction software based on age and facial expressions.32 Adidas, an athletic apparel company, is working with Intel, a technology company, to create digital display walls with targeted product advertisements based on automatic recognition of the age and gender of passing individuals.32 Face data can be aggregated with data from multiple digital media sources to create comprehensive profiles of customer personalities and shopping behaviors and to optimize marketing initiatives.33

FACIAL RECOGNITION IN MEDICINE

Advancements in computational facial image analysis have been adapted for medical applications, with impressive results.34–36 In 2014, researchers at Oxford University reported facial analysis software capable of detecting dysmorphic craniofacial features suggestive of syndromic genetic disorders.37 Using unsupervised machine learning, a subfield of artificial intelligence in which computers learn patterns from data without being explicitly programmed, the algorithm was trained on a database of 2878 frontal face images to learn diagnostically relevant phenotypic features and to generate “average faces” for each of eight developmental disorders: Angelman, Apert, Cornelia de Lange, Down, fragile X, progeria, Treacher-Collins, and Williams-Beuren (Fig. 2). By automatically annotating 36 facial feature nodes (Table 3), the system demonstrated 93.1 percent top-rank classification accuracy for these eight syndromes. The system was then fed 2754 patient faces associated with any of 90 syndromes, including Crouzon, Ehlers-Danlos, Marfan, Klippel-Trenaunay, Moebius, neurofibromatosis, and Saethre-Chotzen, and suggested the correct diagnosis 27.6 times more readily than would be expected by random chance. This software illustrates the potential of advanced facial analysis technology to aid primary care physicians in diagnosing ultrarare disorders and to facilitate appropriate referral to craniofacial and genetic specialists. The authors concluded that because any two-dimensional photograph can be analyzed, any clinician worldwide with a camera and computer could leverage this technology.
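
The published system is considerably more sophisticated, but its core idea, representing a face as a vector of annotated landmark coordinates and comparing it against syndrome “average faces,” can be sketched as follows; the nearest-class-mean classifier and the data structures are hypothetical simplifications rather than the authors’ method.

```python
# Hypothetical sketch of the general idea only (the published system is far
# more elaborate): represent each face as a vector of annotated landmark
# coordinates and assign the syndrome whose "average face" is closest.
import numpy as np

def landmark_vector(landmarks):
    """Flatten 36 annotated (x, y) facial feature nodes into a 72-d vector."""
    return np.asarray(landmarks, dtype=float).ravel()

def average_faces(training_vectors):
    """Mean landmark vector per syndrome, e.g. {"Apert": [vec, vec, ...], ...}."""
    return {syndrome: np.mean(vectors, axis=0)
            for syndrome, vectors in training_vectors.items()}

def classify(face_vector, prototypes):
    """Assign the syndrome whose average face is closest in landmark space."""
    return min(prototypes, key=lambda s: np.linalg.norm(face_vector - prototypes[s]))
```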

Table 3

Fig. 2

Researchers at Stanford have developed controversial facial analysis software that can accurately distinguish sexual orientation based on facial morphology.38 Using a deep neural network called VGG-Face, composed of multiple layers of interconnected artificial neurons loosely modeled on the mammalian brain, the researchers extracted facial features from self-taken photographs of 35,326 self-reported homosexual and heterosexual men and women obtained from public profiles on U.S. online dating websites.39 A logistic regression model was then developed to classify sexual orientation using 500 values derived from VGG-Face scores. Compared to human judges, who correctly predicted homosexuality with an accuracy of 61 percent and 54 percent for male and female images, respectively, the software correctly predicted homosexuality in 85 percent and 70 percent of male and female images, respectively. Computational facial analysis revealed subtle information about sexual orientation imperceptible to humans, with homosexual faces tending toward gender neutrality: homosexual men had narrower jaws, longer noses, and larger foreheads, whereas homosexual women had larger jaws and smaller foreheads. [See Figure, Supplemental Digital Content 1, which shows composite faces and the average facial landmarks built by averaging faces classified as most and least likely to be gay (image reprinted with permission of the American Psychological Association from Wang Y, Kosinski M. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. J Pers Soc Psychol. 2018;114:246–257), http://links.lww.com/PRS/D508.] This software foreshadows the privacy threats that accompany such tools: facial recognition technology of this kind could be stigmatizing or even dangerous for individuals who wish to keep their sexual orientation private, particularly in authoritarian societies where homosexuality may be shunned or criminalized.

Advancements in computer recognition of human facial emotions can be used to quantify reactions, reveal subtle thoughts, or detect signs of lying (Fig. 3). This information could be used during job interviews or to identify malingering, such as patients exaggerating pain to physicians to obtain narcotics.40 An Israeli start-up company, Faception, is marketing machine learning software that uses advanced facial analysis to predict a person’s personality and behavior, quantifying traits such as extroversion, intelligence quotient, and likelihood of committing a crime.41

Fig. 3

RELEVANCE OF FACIAL RECOGNITION TECHNOLOGY TO PLASTIC SURGEONS

The number of individuals who have undergone plastic surgery continues to grow. In 2017, the American Society of Plastic Surgeons reported a total of 17.5 million cosmetic procedures, including 15.7 million minimally invasive procedures, many involving the face, such as botulinum toxin injections, soft-tissue fillers, chemical peels, and laser hair removal. Of the 1.8 million cosmetic surgical procedures, facial procedures such as rhinoplasty and blepharoplasty were among the top five.42 Given the prevalence of plastic surgery and the rapid proliferation of facial recognition technology for personal biometric authentication and public security systems, patients may increasingly ask plastic surgeons whether a procedure will affect their ability to be recognized by facial recognition technology.

Plastic surgery poses a major challenge for current facial recognition technology and is recognized as a distinct categorical limitation of many facial recognition algorithms, alongside pose, illumination, expression, aging, and disguise with makeup.43 Plastic surgery procedures produce nonlinear alterations of facial landmarks that can make individuals difficult for facial biometric systems to identify. Singh et al. pioneered research in this field by developing the first plastic surgery database, which contains one preoperative and one postoperative frontal photograph of 900 patients in neutral expression (Table 4).44 These patients underwent procedures such as dermabrasion, brow lift, otoplasty, blepharoplasty, rhinoplasty, and rhytidectomy.45,46 The group evaluated the performance of six standard facial recognition algorithms by training each algorithm on preoperative and postoperative photographs of 360 subjects and then evaluating performance on the remaining 540 subjects. The recognition accuracy of the tested algorithms was poor, ranging from 27 to 54 percent, an absolute reduction of approximately 30 percent. In particular, “global” procedures such as skin resurfacing and rhytidectomy resulted in large facial node alterations that decreased the efficacy of facial recognition technology, with accuracy ranging from 18 to 54 percent. In contrast, “local” procedures involving only the nose, chin, eyelids, cheek, lips, ears, or forehead had a less detrimental effect. In this report, Singh et al. encouraged the research community to develop higher fidelity facial recognition algorithms that account for the nonlinear variations resulting from plastic surgery of the face.
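
To illustrate how such an evaluation might be scored (this is not the authors’ exact protocol), the sketch below enrolls each subject’s preoperative faceprint as the gallery, probes with the postoperative photograph, and reports rank-1 identification accuracy; the file layout, subject identifiers, and use of the open-source face_recognition library are assumptions.

```python
# Illustrative evaluation loop only (not the cited authors' protocol): enroll
# each subject's preoperative faceprint as the gallery, probe with the
# postoperative image, and report rank-1 identification accuracy.
import face_recognition
import numpy as np

def encode(path):
    """Return the 128-d faceprint of the first face found in an image file."""
    image = face_recognition.load_image_file(path)
    return face_recognition.face_encodings(image)[0]

# Hypothetical file layout: one preop and one postop photo per subject ID.
subject_ids = ["001", "002", "003"]
gallery = {sid: encode(f"preop/{sid}.jpg") for sid in subject_ids}
probes = {sid: encode(f"postop/{sid}.jpg") for sid in subject_ids}

gallery_ids = list(gallery.keys())
gallery_encodings = [gallery[g] for g in gallery_ids]
correct = 0
for sid, probe in probes.items():
    distances = face_recognition.face_distance(gallery_encodings, probe)
    predicted = gallery_ids[int(np.argmin(distances))]
    correct += int(predicted == sid)

print(f"Rank-1 accuracy across surgery: {correct / len(probes):.0%}")
```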

Table 4

Since the pioneering report by Singh et al., other groups have sought to improve or develop their own facial recognition algorithms specifically to improve performance for plastic surgery patients. In 2012, Aggarwal et al. used a part-wise, sparse representation approach that matches individual facial regions instead of performing holistic facial analysis, achieving a matching performance of 78 percent in patients who had undergone plastic surgery.47 Jillela and Ross combined independently processed ocular information with standard holistic face detection to achieve 87 percent recognition.48 In 2013, Liu et al. achieved an accuracy of 86 percent by dividing preoperative and postoperative photographs into face patches and then fusing the patches to compensate for localized appearance changes caused by plastic surgery.49 Using Singh et al.’s facial database, Bhatt et al. developed a multiobjective evolutionary granular algorithm with an overall accuracy of 87 percent,50 and Moeini et al. reconstructed a three-dimensional model from two-dimensional frontal images to extract facial depth vectors, which they combined with two-dimensional texture vectors to achieve accuracy rates of 92 to 98 percent.51
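
The common thread in these part-wise approaches is to score corresponding facial regions separately and then fuse the region scores so that a single surgically altered feature cannot dominate the match decision. The sketch below is a conceptual illustration of that fusion idea rather than a reimplementation of any cited algorithm; the region list, descriptors, and fusion rule are assumptions.

```python
# Conceptual sketch of region-wise matching with score fusion (not any
# published algorithm): compare corresponding facial regions separately,
# then fuse the scores robustly.
import numpy as np

REGIONS = ["periocular", "nose", "mouth", "chin", "forehead"]  # illustrative regions

def region_similarity(a, b):
    """Cosine similarity between two region descriptors (placeholder features)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_match_score(face_a, face_b):
    """face_a and face_b each map region name -> descriptor vector for one face."""
    scores = sorted(region_similarity(face_a[r], face_b[r]) for r in REGIONS)
    # Simple fusion rule: discard the single worst region and average the rest,
    # so one surgically altered region (e.g., the nose after rhinoplasty)
    # cannot veto an otherwise strong match.
    return float(np.mean(scores[1:]))
```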

The accuracy of future facial recognition algorithms will depend largely on the quality and comprehensiveness of training data sets. Facial recognition training databases containing patients who have undergone facial procedures should be expanded and balanced with more photographs of different ages, ethnicities, and procedures. Photographs of patients who have undergone multiple simultaneous procedures would also help algorithms account for cumulative effects during training. It would also be beneficial to include facial photographs of patients who have undergone facial surgery for nonaesthetic reasons, such as head and neck reconstruction, craniomaxillofacial reconstruction, burn reconstruction, gender-affirmation surgery, and vascularized composite transplantation of the face. As facial recognition technologies become more sophisticated, they could one day be correlated with patient-reported outcome measures, be used as a performance metric for plastic surgeons, or help predict the likelihood of complications following facial surgery.

RECOMMENDATIONS

We recommend that the discussion of facial biometric identification become a twenty-first century addition to the routine consultation or consent process for patients seeking aesthetic facial surgery. Patients should be made aware that personal or commercial facial recognition technology may fail to recognize their biometric faceprint following aesthetic facial surgery, and that the probability of this failure may be related to the type of procedure performed. Plastic surgeons should be able to explain to patients that they may not be able to unlock their personal smartphone with their face immediately after surgery because of dressings, swelling, and actual physical changes from the procedure. However, because smartphones such as the Apple iPhone X are configured to learn from erroneous recognition and continually update the user’s stored faceprint with every device unlock, the device should eventually learn the user’s new appearance.

In contrast, patients should also be made aware that large-scale applications of facial recognition technology, such as government biometric passports, are based on historical user photographs and may fail to match an individual’s postoperative face with a preoperative reference photograph. Thus, sufficient advance warning should be given to such organizations, and the patient may be required to provide new identification photographs after surgery to avoid unexpected delays or questioning by security bodies. We believe that plastic surgeons, rather than patients, should initiate this discussion to avoid surprises or delays on the patient’s part that could lead to litigation. Plastic surgeons could offer to provide documentation certifying that the individual underwent cosmetic surgery that may alter their ability to be recognized by either a facial recognition system or a human border controller. Of note, facial recognition algorithms may be more sensitive than humans to physical deviations, and there may therefore be a higher rate of erroneous or failed matches at biometric border controls. In the Republic of Korea, which has a flourishing cosmetic surgery tourism industry, some clinics offer certificates to surgical tourists from outside the country verifying the individual’s identity with passport number, duration of stay, name and location of the clinic or hospital, and the clinic or hospital’s official seal.52 In the United Kingdom, following orthognathic surgery, Her Majesty’s Passport Office advises:

If the patient’s facial profile alters, the patient would need to consider applying for a new passport, as they may encounter problems using their presurgery issued passport after the surgery. We would suggest that patients make that decision after the surgery. They would require a supporting letter from their consultant explaining the situation, and the application form would have to be fully countersigned.53

Ryan et al. recommend that all orthognathic surgery patients be informed during the consent process that their biometric profiles may be affected.53 Surgeons should offer to write a supporting letter to the appropriate administrative body confirming that the patient underwent an elective facial procedure, so that patients may obtain a new passport following surgery. It may also be necessary to disclose before surgery that the plastic surgeon is not responsible for any financial costs or time associated with updating biometric profiles as a consequence of a cosmetic procedure.

Lastly, current facial recognition algorithms specifically developed to improve performance following plastic surgery are trained using facial databases constructed from publicly available photographs obtained before and after surgery. Plastic surgeons posting before-and-after photographs online with patient consent should be aware of the possibility that these photographs could be compiled into data training sets for developing future facial recognition technologies.

CONCLUSIONS

Facial recognition technology is an intriguing and powerful application of artificial intelligence, pattern recognition, and image analysis, with expanding implications for security, recreation, and privacy. As with many disruptive new technologies, market forces will lead to improved precision, efficiency, and adoption.54 In the coming years, facial recognition technology will become increasingly prevalent, and policy makers should prioritize the establishment of clear regulatory statutes. As plastic surgeons, we have the power to change faces and consequently identities. Plastic surgeons should have a conceptual understanding of how facial recognition technology works, the current landscape of facial recognition technology in society, and the relevance of facial recognition technology to patients following surgical procedures of the face.

REFERENCES

1. Apple, Inc. Apple special event: September 12, 2017. Available at: https://www.apple.com/apple-events/september-2017/. Accessed May 2, 2018.
2. Kanade T. Picture Processing System by Computer Complex and Recognition of Human Faces (dissertation). Kyoto, Japan: Kyoto University; 1973.
3. Sirovich L, Kirby M. Low-dimensional procedure for the characterization of human faces. J Opt Soc Am A 1987;4:519–524.
4. Garg D, Sharma AK. Face recognition. IOSR J Eng. 2012;2:128–133.
5. Li SZ, Jain AK. Introduction. In: Li SZ, Jain AK, eds. Handbook of Face Recognition. 2nd ed. London: Springer-Verlag; 2011.
6. Zhao W, Chellappa R, Phillips PJ, Rosenfeld A. Face recognition: A literature survey. ACM Comput Surv. 2003;35:399–458.
7. Daugman JG. U.S. Patent number 5,291,560. Biometric personal identification system based on iris analysis. March 1, 1994. Available at: https://patentimages.storage.googleapis.com/7e/23/b8/11dc95d941b236/US5291560.pdf. Accessed August 5, 2018.
8. Jain AK, Prabhakar S, Chen S. Combining multiple matchers for a high security fingerprint verification system. Pattern Recognit Lett. 1999;20:1371–1379.
9. Jain AK, Ross A, Prabhakar S. An introduction to biometric recognition. IEEE Trans Circuits Syst Video Technol. 2004;14:4–20.
10. Turk M, Pentland A. Eigenfaces for recognition. J Cogn Neurosci. 1991;3:71–86.
11. Penev PS, Atick JJ. Local feature analysis: A general statistical theory for object representation. Network 1996;7:477–500.
12. Lawrence S, Giles CL, Tsoi AC, Back AD. Face recognition: A convolutional neural-network approach. IEEE Trans Neural Netw. 1997;8:98–113.
13. Belhumeur PN, Hespanha JP, Kriegman DJ. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell. 1997;19:711–720.
14. Markets Insider. The global facial recognition market is expected to grow from USD 4.05 billion in 2017 to USD 7.76 billion by 2022. Available at: http://markets.businessinsider.com/news/stocks/the-global-facial-recognition-market-is-expected-to-grow-from-usd-4-05-billion-in-2017-to-usd-7-76-billion-by-2022-1008840327. Accessed August 5, 2018.
15. Face Recognition Vendor Test (FRVT) 1:N 2018 Evaluation. NIST Projects/Programs. Available at: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt-1n-2018-evaluation. Accessed August 18, 2018.
16. Apple, Inc. Face ID Security. November 2017. Available at: https://images.apple.com/business/docs/FaceID_Security_Guide.pdf. Accessed April 23, 2018.
17. Taigman Y, Yang M, Ranzato M. DeepFace: Closing the gap to human-level performance in face verification. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, June 23–28, 2014: Columbus, Ohio; 1701–1708.
18. Wikipedia. DeepFace. August 30, 2017. Available at: https://en.wikipedia.org/wiki/DeepFace. Accessed April 2, 2018.
19. Jacob M; Euractiv. Facial recognition gains grounds in Europe, among big-brother fears. Available at: https://www.euractiv.com/section/data-protection/news/facial-recognition-gains-grounds-in-europe-among-big-brother-fears/. Accessed May 19, 2018.
20. Buckley B, Hunter M. Say cheese! Privacy and facial recognition. Comput Law Secur Rev. 2011;27:637–640.
21. Orlando International Airport (press release). Orlando International Airport will be first to utilize biometrics to expedite international travel. Available at: https://www.orlandoairports.net/press/2018/04/18/orlando-international-airport-will-be-first-to-utilize-biometrics-to-expedite-international-travel/. Accessed August 5, 2018.
22. Bennett KA. Can facial recognition technology be used to fight the new war against terrorism? NC JOLT 2001;16.
23. Gaines S, Williams S; Georgetown Law Center on Privacy & Technology. The perpetual line-up. October 18, 2016. Available at: https://www.perpetuallineup.org/. Accessed April 6, 2018.
24. O’Toole AJ, An X, Dunlop JP, Natu VS, Phillips PJ. Comparing face recognition algorithms to humans on challenging tasks. Trans Appl Percept. 2012;9:1–15.
25. Blanton A, Allen KC, Miller T, Kalka ND, Jain AK. A comparison of human and automated face verification accuracy on unconstrained image sets. Paper presented at: IEEE Conference on Computer Vision and Pattern Recognition Workshops; June 26–July 1, 2016; Las Vegas, Nev.
26. Klontz JC, Jain AK. A case study on unconstrained facial recognition using the Boston Marathon bombings suspects. Michigan State University Technical Report. East Lansing, Mich.: Michigan State University; 2013.
27. Hirose M. Privacy in public spaces: The reasonable expectation of privacy against the dragnet use of facial recognition technology. Conn Law Rev. 2017;49:1591–1620.
28. Royakkers L, Timmer J, Kool L, van Est R. Societal and ethical issues of digitization. Ethics Inform Technol. 2018;20:127–142.
29. Nissenbaum H. Protecting privacy in an information age: The problem of privacy in public. Law Philos. 1998;17:559–596.
30. Brey P. Ethical aspects of facial recognition systems in public places. J Info Commun Ethics Soc. 2004;2:97–109.
31. Federal Trade Commission. Facing facts: Best practices for common uses of facial recognition technologies. Available at: https://www.ftc.gov/sites/default/files/documents/reports/facing-facts-best-practices-common-uses-facial-recognition-technologies/121022facialtechrpt.pdf. Accessed April 16, 2018.
32. Los Angeles Times. Advertisers start using facial recognition to tailor pitches. August 21, 2011. Available at: http://articles.latimes.com/2011/aug/21/business/la-fi-facial-recognition-20110821. Accessed April 10, 2018.
33. FACEPTION. Our technology. Available at: https://www.faception.com/our-technology. Accessed April 18, 2018.
34. Kruszka P, Addissie YA, McGinn DE, et al. 22q11.2 deletion syndrome in diverse populations. Am J Med Genet A 2017;173:879–888.
35. Valentine M, Bihm DCJ, Wolf L, et al. Computer-aided recognition of facial attributes for fetal alcohol spectrum disorders. Pediatrics 2017;140:e20162028.
36. Kong X, Gong S, Su L, Howard N, Kong Y. Automatic detection of acromegaly from facial photographs using machine learning methods. EBioMedicine 2018;27:94–102.
37. Ferry Q, Steinberg J, Webber C, et al. Diagnostically relevant facial gestalt information from ordinary photos. Elife 2014;3:e02020.
38. Wang Y, Kosinski M. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. J Pers Soc Psychol. 2018;114:246–257.
39. Parkhi OM, Vedaldi A, Zisserman A. Deep face recognition. Paper presented at: British Machine Vision Conference 2015; September 7–10, 2015; Swansea, United Kingdom.
40. Cao NT, Ton-That AH, Choi HI. An effective facial expression recognition approach for intelligent game systems. Int J Comput Vis Robot. 2016;6:223–234.
41. Buolamwini J, Gebru T. Gender shades: Intersectional accuracy disparities in commercial gender classification. Proc Mach Learn Res. 2018;81:77–91.
42. American Society of Plastic Surgeons. 2017 cosmetic plastic surgery statistics. Available at: https://www.plasticsurgery.org/documents/News/Statistics/2017/plastic-surgery-statistics-report-2017.pdf. Accessed April 28, 2018.
43. Nappi M, Ricciardi S, Tistarelli M. Deceiving faces: When plastic surgery challenges face recognition. Image Vision Comput. 2016;54:71–82.
44. Image Analysis and Biometrics Lab @ IIIT Delhi. Available at: http://www.iab-rubric.org/resources.html. Accessed April 24, 2018.
45. Singh R, Vatsa M, Bhatt HS, Bharadwaj S, Noore A, Hooreyezdan SS. Plastic surgery: A new dimension to face recognition. IEEE Trans Inform Forens Security 2010;5:441–448.
46. Singh R, Vatsa M, Noore A. Effect of plastic surgery on face recognition: A preliminary study. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Miami, Fla.; June 20–25, 2009: 72–77.
47. Aggarwal G, Biswas S, Flynn PJ, Bowyer KW. A sparse representation approach to face matching across plastic surgery. In 2012 IEEE Workshop on the Applications of Computer Vision, January 9–12, 2012: Breckenridge, Colo.; 113–119.
48. Jillela R, Ross A. Mitigating effects of plastic surgery: Fusing face and ocular biometrics. In 2012 IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS), September 23–27, 2012: Arlington, Va.; 402–411.
49. Liu J, Harris A, Kanwisher N. Perception of face parts and face configurations: An FMRI study. J Cogn Neurosci. 2010;22:203–211.
50. Bhatt HS, Bharadwaj S, Singh R, Vatsa M. Recognizing surgically altered face images using multiobjective evolutionary algorithm. IEEE Trans Inform Forens Security 2013;8:89–100.
51. Moeini A, Faez K, Moeini H. Face recognition across makeup and plastic surgery from real-world images. J Electronic Imag. 2015;24:053028.
52. Ashcraft B. How South Korean plastic surgeons make passport photos worthless. Available at: https://kotaku.com/how-south-korean-plastic-surgeons-make-passport-photos-1563323919. Accessed April 19, 2018.
53. Ryan PJ, Turner MJ, Gibbons AJ, Ricanek K Jr. Orthognathic surgery and the biometric e-passport: A change in surgical practice. Br J Oral Maxillofac Surg. 2014;52:384.
54. Kemelmacher-Shlizerman I, Seitz SM, Miller D, Brossard E. The MegaFace benchmark: 1 million faces for recognition at scale. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, June 27–30, 2016: Las Vegas, Nev.; 4873–4882.

Supplemental Digital Content

©2019 American Society of Plastic Surgeons