Current facial palsy assessment is nonstandardized, and although novel grading systems have been frequently presented over the past six decades, no scale has earned universal acceptance.1 Although serious attempts at consensus development have been undertaken, clinician-graded scales are limited by subjectivity and observer bias. The House-Brackmann scale is the most widely used tool for facial palsy assessment in the United States, but is frequently critiqued for its gross nature and inability to chart regional changes after different treatments.2–6 The electronic facial paralysis assessment tool (eFACE) combines strengths of earlier clinician-graded scales into a digital interface.7 The eFACE scale assesses static and dynamic symmetry, regional changes, and synkinesis with high intraobserver, interobserver, and test-retest reliability.8 Furthermore, it correlates well with the Sunnybrook Facial Grading System and expert-graded disfigurement scores.9,10 However, like other clinician-graded facial palsy assessment scales, the eFACE remains subjective and has not gained universal acceptance. Quantitative computer-generated facial position and movement analysis would be desirable to achieve conformity in facial palsy assessment and to study comparative effectiveness among medical, surgical, and physical therapies. As noted by Dusseldorp et al., automated facial palsy assessment is important for comprehensive facial palsy outcomes analysis along with layperson assessments, clinician grading, and patient-reported outcome measures.11
Although objective facial palsy assessment scales have been developed for both research and clinical purposes, their use is limited because the systems are typically time-intensive and require complex and expensive software and equipment. Our group developed the Massachusetts Eye and Ear Infirmary Facegram software in 2012, but this tool has not been widely adopted because of the lack of automated facial landmark placement and the need for an individual to calculate different facial measurements.12 Other groups have used videography to develop an objective facial palsy assessment scale, but none of these systems has been integrated into clinical practice because of the complexity of the equipment, setup, and data analysis.13–16
Recent advances in machine learning have enabled automated facial landmark localization, quantification of facial movements, and assessment of emotive expressions.17–20 These advances were leveraged to create Emotrics, a high-throughput software platform with automated facial landmark localization and computation of facial measurements. Emotrics evaluates frontal-view patient photographs, performing a complete analysis in less than 5 seconds. Furthermore, it can analyze multiple photographs, permitting automated calculation of facial movements such as oral commissure excursion and brow elevation (Fig. 1).21 Although Emotrics was initially trained using a database of normal photographs, it has subsequently been trained on patients with facial palsy to significantly improve facial landmark localization accuracy in this population.22–24 When performing analyses with Emotrics, users can choose the landmark model trained on patients with facial palsy or the model trained on the iBUG database of normal faces.
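Landmark-based movement measurements of this kind reduce to simple coordinate geometry. The snippet below is a hypothetical illustration, not Emotrics source code: the excursion of a landmark such as the oral commissure is the Euclidean distance between its positions in a rest photograph and a smile photograph.

```python
import math

def excursion(rest_point, smile_point):
    """Euclidean distance a landmark travels between two photographs.

    rest_point, smile_point: (x, y) pixel coordinates of the same facial
    landmark (e.g., the oral commissure) at rest and with smile.
    """
    dx = smile_point[0] - rest_point[0]
    dy = smile_point[1] - rest_point[1]
    return math.hypot(dx, dy)

# Hypothetical coordinates: the commissure moves 30 px laterally and
# 40 px superiorly with smile, i.e., 50 px of excursion.
print(excursion((520, 610), (550, 570)))  # 50.0
```

The same calculation applies to brow elevation or any other paired-photograph motion, given consistent landmark indices across images.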
The goal of this project was to leverage our significant experience in examining and treating patients with facial movement disorders to develop an automated facial palsy grading tool, capable of providing rapid assessment of facial function and symmetry. We compared facial palsy assessment using nine parameters of the clinician-graded eFACE scale to nine comparable measures determined by the machine learning–derived facial analysis program, Emotrics, under the hypothesis that the automated algorithm would yield rapid, automated, and accurate facial assessments. A universal and objective facial function assessment tool is necessary to allow practitioners to further understand the progression, prognosis, and treatment outcomes of facial palsy.
PATIENTS AND METHODS
Approval was obtained from the Massachusetts Eye and Ear Infirmary Institutional Review Board before beginning this study. All patients provided written consent to have their data and photographs included in this project.
The Emotrics software was trained to recognize the positions of 68 facial landmarks and the irides in patients with and without facial palsy.21 The software computes a set of facial measurements from the landmarks and the facial midline, using the difference between two points in a single photograph, and can analyze differences between two photographs to calculate facial motions such as brow elevation, eye closure, and oral commissure movement with smile. The measurements are provided in pixels, but the software converts them into real-world values (i.e., millimeters) by assuming an iris diameter of 11.77 mm, the population mean in humans.
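The iris-based pixel-to-millimeter conversion can be written in a few lines. The snippet below is an illustrative sketch rather than Emotrics source code; the only value taken from the text is the assumed 11.77-mm iris diameter.

```python
IRIS_DIAMETER_MM = 11.77  # assumed population mean iris diameter

def pixels_to_mm(measurement_px, iris_diameter_px):
    """Convert a pixel measurement to millimeters, using the iris as a ruler.

    The scale factor (mm per pixel) is derived from the measured iris
    diameter in pixels and the assumed 11.77-mm true diameter.
    """
    mm_per_px = IRIS_DIAMETER_MM / iris_diameter_px
    return measurement_px * mm_per_px

# A 58.85-px iris implies 0.2 mm/px, so a 250-px measurement is ~50 mm.
print(round(pixels_to_mm(250, 58.85), 6))  # 50.0
```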
Using the Emotrics software platform, the auto-eFACE software platform was constructed to assign total, static, dynamic, and synkinesis scores from eight frontal view clinical photographs (i.e., patient at rest, brow elevation, gentle eye closure, tight eye closure, patient instructed to give “best smile,” patient instructed to give “biggest smile,” puckered lips saying “oooo” and showing bottom teeth saying “eeee”). In contrast to Emotrics, auto-eFACE does not compute real-world measurements but assigns scores from ratios of the facial landmark positions on the healthy and affected sides. This eliminates variability based on interocular symmetry. For static auto-eFACE measures, the software computes the scores from ratios of facial measurements in the “rest” photograph. For the dynamic and synkinesis metrics, auto-eFACE uses ratios of facial measurements taken at rest and from another photograph (e.g., rest and biggest smile photographs for oral commissure movement with smile) to calculate the scores. (See Figure, Supplemental Digital Content 1, which shows examples of the auto-eFACE software: the home screen of the auto-eFACE software where the frontal view photographs demonstrating the eight standard facial expressions for facial analysis are uploaded to calculate a patient’s score. Photographs can be added from files or by means of drag-and-drop (a); the eight standard photographs uploaded for a patient without facial palsy (b); the auto-eFACE score report, which is generated in less than 5 seconds (c). The only manual input required is the healthy side of the face, https://links.lww.com/PRS/E323.) 
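The ratio-based scoring just described can be illustrated with a minimal sketch. The rule below is a simplified, hypothetical stand-in, not the published auto-eFACE formula: the smaller of the healthy-side and affected-side measurements is divided by the larger, mapping perfect symmetry to 100 and absent movement to 0.

```python
def symmetry_score(healthy_mm, affected_mm):
    """Map a healthy/affected measurement pair to a 0-to-100 symmetry score.

    Simplified, hypothetical rule (not the published auto-eFACE formula):
    the smaller measurement divided by the larger, so perfect symmetry
    scores 100 and an absent movement scores 0.
    """
    if healthy_mm == 0 and affected_mm == 0:
        return 100.0  # nothing to compare; treat as symmetric
    ratio = min(healthy_mm, affected_mm) / max(healthy_mm, affected_mm)
    return round(100.0 * ratio, 1)

# Oral commissure excursion with smile: 10 mm on the healthy side,
# 4 mm on the paralyzed side.
print(symmetry_score(10.0, 4.0))  # 40.0
```

Because the score depends only on the ratio of the two sides, it is insensitive to camera distance and image resolution, which is the property that eliminates variability based on interocular symmetry.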
The auto-eFACE program provides scores for nine facial metrics that are routinely assessed during the clinical eFACE evaluation, as this assessment tool is well validated and practitioners worldwide are familiar with it.7,25,26 Six variables in the clinician-graded eFACE scale could not be measured by Emotrics because their determination does not rely on simple dot landmarks: nasolabial fold depth at rest; nasolabial fold depth and orientation with smile; and midfacial, mentalis, and platysmal synkinesis. These variables were not included in the auto-eFACE program.
The Massachusetts Eye and Ear Infirmary Standard Facial Palsy Dataset, an open-source database that categorizes patients by facial palsy severity according to their clinician-graded eFACE score, was used to test the auto-eFACE software.27 The eight-photograph series of 10 normal faces (eFACE score 96 to 100), five completely flaccid faces (eFACE score <60), and five severely synkinetic faces (eFACE score <60) were analyzed. The 160 photographs—all frontal-view clinical photographs taken with the head in a neutral position against a blue background using a Nikon D7100 (Nikon Corp., Tokyo, Japan) camera—underwent automatic landmark tracking using the machine learning–derived facial analysis software Emotrics, and each marked photograph was inspected for landmark accuracy by a clinician. This single step is not yet fully automated; it involves verification and, if necessary, adjustment of the landmark dots placed by Emotrics, and takes less than 30 seconds on average. The auto-eFACE software then automatically assigned each patient total, static, dynamic, and synkinesis facial palsy scores.
Clinician-Graded eFACE, Modified-eFACE, and Statistical Analysis
Descriptive measurements were calculated for the auto-eFACE scores, and the completely flaccid and severe synkinesis groups were compared to normal patients using a nonparametric test of the medians. Although normality of the data was confirmed using the Shapiro-Wilk test, a nonparametric test was chosen because of the small sample size.
The clinician-graded eFACE is a reliable, repeatable, and validated tool for assessing facial function and symmetry.7–9,26 To compare the auto-eFACE score to this validated clinician-graded scale, we compared the auto-eFACE scores of the 20 patients to previously documented eFACE scores. The photographs used to calculate the auto-eFACE scores were taken on the same day that the clinician-graded eFACE evaluation was performed. The clinician-graded eFACE scores were adjusted by discarding the six eFACE variables unable to be measured by Emotrics, and comparisons were made between the modified-eFACE and auto-eFACE scores. The Wilcoxon signed rank test was used to compare the patients’ eFACE, modified-eFACE, and auto-eFACE scores. All statistical tests were performed using IBM SPSS Version 26 (IBM Corp., Armonk, N.Y.).
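As stated above, the paired comparisons were performed with the Wilcoxon signed rank test in SPSS. For readers interested in the mechanics, the stdlib-only sketch below computes the Wilcoxon test statistic W for paired scores; the data are invented for illustration, and deriving a p value would still require a W table or a statistics package.

```python
def wilcoxon_w(scores_a, scores_b):
    """Wilcoxon signed-rank statistic W for paired scores (stdlib only).

    Zero differences are dropped; tied absolute differences receive the
    average of the ranks they span. W is the smaller of the positive and
    negative rank sums; a small W suggests systematic disagreement.
    """
    diffs = [b - a for a, b in zip(scores_a, scores_b) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(float(w_pos), float(w_neg))

# Invented paired scores for five patients (not study data):
modified_eface = [53, 50, 56, 52, 54]
auto_eface = [60, 55, 62, 58, 66]
print(wilcoxon_w(modified_eface, auto_eface))  # 0.0 (auto higher in every pair)
```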
RESULTS

Ten patients without facial palsy were included in our analysis (five men; age range, 24 to 92 years). The median clinician-graded eFACE score for the normal group was 100.00 ± 1.16 and the median modified-eFACE score was 100.00 ± 1.58. The difference between the eFACE and modified-eFACE scores was not significant (95 percent CI, −0.697 to 0.963; p = 0.725). For the patients with complete flaccid facial palsy and severe synkinesis, the eFACE and modified-eFACE scores reflected the decreased facial function, as expected. The median eFACE score for patients with flaccid facial palsy was 55.20 ± 3.34 and the median modified-eFACE score was 52.20 ± 3.39. For patients with severe synkinesis, the median eFACE score was 56.27 ± 4.65 and the median modified-eFACE score was 54.22 ± 5.35. Both the flaccid facial palsy and severe synkinesis groups had significantly worse eFACE and modified-eFACE scores than the patients without facial palsy [severe synkinesis eFACE, 95 percent CI, −51.73 to −40.19 (p = 0.000); severe synkinesis modified-eFACE, 95 percent CI, −52.33 to −39.03 (p = 0.000); complete flaccid facial palsy eFACE, 95 percent CI, −48.38 to −40.08 (p = 0.000); complete flaccid facial palsy modified-eFACE, 95 percent CI, −50.02 to −41.60 (p = 0.000)].
The median auto-eFACE score for normal faces was 93.83 ± 4.37, significantly worse than the modified-eFACE scores for the same group [auto-eFACE, 92.81 (interquartile range, 3.23); modified-eFACE, 99.50 (interquartile range, 0); p = 0.005]. Careful review of patient photographs revealed minor facial asymmetries that the clinician tended to disregard when performing eFACE grading on normal faces; these asymmetries were confirmed by the Emotrics software’s landmark measurements (Fig. 2). The median auto-eFACE scores for patients with complete flaccid facial palsy and severe synkinesis were 59.96 ± 5.80 and 62.35 ± 9.35, respectively. The auto-eFACE reflected significantly worse facial function in patients with flaccid facial palsy and severe synkinesis than in the group of normal faces [severe synkinesis, 95 percent CI, −40.76 to −17.53 (p = 0.01); complete flaccid facial palsy, 95 percent CI, −39.81 to −25.40 (p = 0.01)]. The auto-eFACE reported better facial symmetry in patients with both flaccid paralysis and severe synkinesis than the modified clinician-graded eFACE; this result trended toward significance [complete flaccid facial palsy: auto-eFACE, 60.20 (interquartile range, 10.11); modified-eFACE, 53.69 (interquartile range, 5.03); p = 0.080; severe synkinesis: auto-eFACE, 63.66 (interquartile range, 18.36); modified-eFACE, 53.82 (interquartile range, 10.00); p = 0.080] (Table 1).
Table 1. Modified Clinician-Graded eFACE and Auto-eFACE Scores*

| Group | Modified-eFACE† (IQR) | Auto-eFACE (IQR) | p‡ |
| --- | --- | --- | --- |
| Normal | 99.50 (0) | 92.81 (3.23) | 0.005 |
| Complete flaccid facial palsy | 53.69 (5.03) | 60.20 (10.11) | 0.080 |
| Severe synkinesis | 53.82 (10.00) | 63.66 (18.36) | 0.080 |
IQR, interquartile range.
*A score of 100 represents perfect facial symmetry.
†Clinician-graded eFACE modified to discount six variables unable to be measured by Emotrics.
‡Calculated using the Wilcoxon signed rank test.
DISCUSSION
Since Botman and Jongkees first reported a facial nerve grading scale in 1955, facial palsy has been an exceedingly difficult problem to characterize, quantify, and follow longitudinally.28 A recent review of the literature found 19 unique facial nerve grading scales, yet none enjoys universal acceptance.1,29 Although the American Academy of Otolaryngology–Head and Neck Surgery endorses the House-Brackmann scale as the standard facial palsy grading system, this scale was developed to characterize facial palsy after ablative vestibular schwannoma surgery using six general categories.30 Furthermore, the interrater and intrarater reliability of the scale is only fair.3,31,32 Despite this important weakness, the House-Brackmann scale remains the most widely used scale by providers and is erroneously applied to a wide range of facial palsy causes.29
The ideal facial palsy outcome panel should include patient-reported measures, layperson assessment, spontaneous smile analysis, and clinician-graded and automated scales.11 Automated analysis of facial palsy patients is only one component of an ideal facial palsy outcome panel. Layperson assessments help clinicians understand how people view faces of patients with facial palsy and which features are the most important to correct.33 Clinician-graded facial palsy evaluations will always be subject to observer bias and human error, but provide facial palsy assessment by clinical experts familiar with the spectrum, natural evolution, and treatment effects of the disorder. Furthermore, many clinician-graded scales have already been shown to have excellent reliability, interobserver and intraobserver repeatability, and correlation with facial disfigurement ratings.8,10,32 Patient-reported outcome measures are critical to understand the implications of facial palsy and the effects of treatment from the patient’s perspective, and expert panels are beginning to realize the need to incorporate these measures into facial palsy outcomes panels.25,34
Although the need for an automated and objective facial palsy scale has been consistently recognized, such a grading system remains elusive.1,12 Prior attempts using photographs, videos, or three-dimensional stereophotogrammetry have fallen short of gaining widespread acceptance, as they have been time consuming, complex, and costly.12–16,35,36 The clinical audiogram for otologists and the electrocardiogram for cardiologists serve as excellent examples of how an automated and objective assessment tool permits improved understanding of disease processes, treatments, and outcomes, and improves communication between providers.37–39 An automated facial palsy assessment tool that could be used alongside clinician-graded scales and other metrics would similarly increase our understanding of the disorder and its treatments.
In the present work, we developed and tested an easy-to-use, rapid, and automated facial assessment tool using Emotrics, a high-throughput software platform with automated facial landmark localization and computation of facial measurements.21 Analysis of facial landmarks and movements using this software is not subject to the same degree of observer bias inherent in clinician-graded scales, although formal establishment of this decreased bias must be validated in future studies. Emotrics is freely available for download along with tutorial videos on the Sir Charles Bell Society website (http://www.sircharlesbell.com/); the auto-eFACE program will also be made freely available to providers.
Our results demonstrate that the auto-eFACE software easily differentiates normal faces from those with flaccid facial palsy and severe synkinesis. When we compared the auto-eFACE scores to clinician-graded assessments using the eFACE and a modified-eFACE—to reflect only the nine measures graded by the auto-eFACE software—we found that the automated system predicted more facial landmark asymmetry in normal patients, and less landmark asymmetry in patients with severe synkinesis and complete flaccid paralysis. When we rescrutinized the normal patient photographs where asymmetry was detected by auto-eFACE, minor facial asymmetries were identified on this second look. This phenomenon is consistent with the findings of many groups that demonstrate that adults and children prefer attractive faces and assign desirable personality traits and characteristics to attractive individuals.40–44 Furthermore, these judgments—normal, attractive, and disfigured—are made quickly by observers and are consistent between different cultures; longer exposure to a face only strengthens initial judgments made by the observer.45,46 By contrast, people with facial disfigurement are stigmatized, seen as less attractive, and perceived as having negative personality traits.44,47,48 Thus, the auto-eFACE provides an automated, objective, and potentially unbiased facial assessment compared to clinician-graded scales. Although this study demonstrates that the auto-eFACE software is able to pick up subtle asymmetries overlooked by clinicians during facial palsy evaluations, it does not study whether these asymmetries are clinically significant; more attention to this finding is merited in future studies.
The auto-eFACE facial palsy grading software is an excellent new tool that provides automated and objective assessments of facial function in patients with facial movement disorders; however, there are limitations to this software. Emotrics is not able to measure nasolabial fold depth at rest; nasolabial fold depth and orientation with smile; or midfacial, mentalis, and platysmal synkinesis, as they are not landmark dot-based variables, and thus these variables were not included in the auto-eFACE score. Two of the excluded variables—nasolabial fold depth at rest and nasolabial fold orientation with smiling—have been shown to correlate well with overall facial disfigurement.10 However, the modified-eFACE discounting the above six variables was virtually identical to the overall clinician-graded eFACE score, suggesting that other variables in the scale may convey the disfigurement associated with nasolabial fold depth at rest and orientation with smile. It is necessary to further investigate whether other variables, such as oral commissure position at rest, convey the disfigurement associated with nasolabial fold depth and orientation abnormalities. A second limitation is that we assessed only facial palsy patients with complete flaccid facial palsy and severe synkinesis with the auto-eFACE program. However, given the minor asymmetries detected in normal faces by the auto-eFACE software, we expect the program will reliably evaluate and differentiate faces with mild and moderate palsy from normal faces. Further studies are planned to determine the software’s ability to accurately evaluate mild and moderate facial palsy; to detect changes after facial reanimation surgery, physical therapy, and spontaneous recovery; and to correlate the auto-eFACE score to overall facial disfigurement.
CONCLUSIONS
Artificial intelligence was used to generate an automated facial function report, the auto-eFACE. Scores were obtained through machine learning–derived facial landmark tracking and algorithms using the ratios of facial measurements on the healthy and affected sides. The automated system predicted more facial landmark asymmetry in normal patients, and less landmark asymmetry in patients with severe synkinesis and complete flaccid paralysis, compared to clinician grading. This rapid and easy-to-use automated assessment tool holds promise for the standardization of facial palsy outcome measures, and works to complement clinician-graded scales, layperson assessments, and patient-reported outcome measures in facial palsy outcomes panels.
Patients provided written consent for the use of their images.
REFERENCES
1. Fattah AY, Gurusinghe AD, Gavilan J, et al. Facial nerve grading instruments: Systematic review of the literature and suggestion for uniformity. Plast Reconstr Surg. 2015;135:569–579.
2. House JW, Brackmann DE. Facial nerve grading system. Otolaryngol Head Neck Surg. 1985;93:146–147.
3. Kanerva M, Poussa T, Pitkäranta A. Sunnybrook and House-Brackmann facial grading systems: Intrarater repeatability and interrater agreement. Otolaryngol Head Neck Surg. 2006;135:865–871.
4. Croxson G, May M, Mester SJ. Grading facial nerve function: House-Brackmann versus Burres-Fisch methods. Am J Otol. 1990;11:240–246.
5. Ross BG, Fradet G, Nedzelski JM. Development of a sensitive clinical facial grading system. Otolaryngol Head Neck Surg. 1996;114:380–386.
6. Henstrom DK, Skilbeck CJ, Weinberg J, Knox C, Cheney ML, Hadlock TA. Good correlation between original and modified House Brackmann facial grading systems. Laryngoscope. 2011;121:47–50.
7. Banks CA, Bhama PK, Park J, Hadlock CR, Hadlock TA. Clinician-graded electronic facial paralysis assessment: The eFACE. Plast Reconstr Surg. 2015;136:223e–230e.
8. Banks CA, Jowett N, Hadlock TA. Test-retest reliability and agreement between in-person and video assessment of facial mimetic function using the eFACE facial grading system. JAMA Facial Plast Surg. 2017;19:206–211.
9. Gaudin RA, Robinson M, Banks CA, Baiungo J, Jowett N, Hadlock TA. Emerging vs time-tested methods of facial grading among patients with facial paralysis. JAMA Facial Plast Surg. 2016;18:251–257.
10. Banks CA, Jowett N, Hadlock CR, Hadlock TA. Weighting of facial grading variables to disfigurement in facial palsy. JAMA Facial Plast Surg. 2016;18:292–298.
11. Dusseldorp JR, van Veen MM, Mohan S, Hadlock TA. Outcome tracking in facial palsy. Otolaryngol Clin North Am. 2018;51:1033–1050.
12. Hadlock TA, Urban LS. Toward a universal, automated facial measurement tool in facial reanimation. Arch Facial Plast Surg. 2012;14:277–282.
13. Frey M, Jenny A, Giovanoli P, Stüssi E. Development of a new documentation system for facial movements as a basis for the international registry for neuromuscular reconstruction in the face. Plast Reconstr Surg. 1994;93:1334–1349.
14. Frey M, Giovanoli P, Gerber H, Slameczka M, Stüssi E. Three-dimensional video analysis of facial movements: A new method to assess the quantity and quality of the smile. Plast Reconstr Surg. 1999;104:2032–2039.
15. Hontanilla B, Aubá C. Automatic three-dimensional quantitative analysis for evaluation of facial movement. J Plast Reconstr Aesthet Surg. 2008;61:18–30.
16. Sforza C, Galante D, Shirai YF, Ferrario VF. A three-dimensional study of facial mimicry in healthy young adults. J Craniomaxillofac Surg. 2010;38:409–415.
17. Kazemi V, Sullivan J. One millisecond face alignment with an ensemble of regression trees. In: Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition. Columbus, OH: IEEE; 2014:1867–1874.
18. King DE. Dlib-ml: A machine learning toolkit. JMLR. 2009;10:1755–1758.
19. Sagonas C, Tzimiropoulos G, Zafeiriou S, Pantic M. A semi-automatic methodology for facial landmark annotation. In: Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE; 2013:896–903.
20. Pantic M, Rothkrantz LJM. Automatic analysis of facial expressions: The state of the art. IEEE Trans Pattern Anal Mach Intell. 2000;22:1424–1445.
21. Guarin DL, Dusseldorp J, Hadlock TA, Jowett N. A machine learning approach for automated facial measurements in facial palsy. JAMA Facial Plast Surg. 2018;20:335–337.
22. Greene JJ, Tavares J, Guarin DL, Hadlock T. Clinician and automated assessments of facial function following eyelid weight placement. JAMA Facial Plast Surg. 2019;21:387–392.
23. Greene JJ, Tavares J, Guarin DL, Jowett N, Hadlock T. Surgical refinement following free gracilis transfer for smile reanimation. Ann Plast Surg. 2018;81:329–334.
24. Guarin DL, Yunusova Y, Taati B, et al. Toward an automatic system for computer-aided assessment in facial palsy. Facial Plast Surg Aesthet Med. 2020;22:42–49.
25. Butler DP, De la Torre A, Borschel GH, et al. An international collaborative standardizing patient-centered outcome measures in pediatric facial palsy. JAMA Facial Plast Surg. 2019;21:351–358.
26. Banks CA, Jowett N, Azizzadeh B, et al. Worldwide testing of the eFACE facial nerve clinician-graded scale. Plast Reconstr Surg. 2017;139:491e–498e.
27. Greene JJ, Guarin DL, Tavares J, et al. The spectrum of facial palsy: The MEEI facial palsy photo and video standard set. Laryngoscope. 2020;130:32–37.
28. Botman JW, Jongkees LB. The result of intratemporal treatment of facial palsy. Pract Otorhinolaryngol (Basel). 1955;17:80–100.
29. Fattah AY, Gavilan J, Hadlock TA, et al. Survey of methods of facial palsy documentation in use by members of the Sir Charles Bell Society. Laryngoscope. 2014;124:2247–2251.
30. House JW. Facial nerve grading systems. Laryngoscope. 1983;93:1056–1069.
31. Reitzen SD, Babb JS, Lalwani AK. Significance and reliability of the House-Brackmann grading system for regional facial nerve function. Otolaryngol Head Neck Surg. 2009;140:154–158.
32. Lee LN, Susarla SM, Hohman MH, Henstrom DK, Cheney ML, Hadlock TA. A comparison of facial nerve grading systems. Ann Plast Surg. 2013;70:313–316.
33. Ishii L, Dey J, Boahene KD, Byrne PJ, Ishii M. The social distraction of facial paralysis: Objective measurement of social attention using eye-tracking. Laryngoscope. 2016;126:334–339.
34. Berner JE, Kamalathevan P, Kyriazidis I, Nduka C. Facial synkinesis outcome measures: A systematic review of the available grading systems and a Delphi study to identify the steps towards a consensus. J Plast Reconstr Aesthet Surg. 2019;72:946–963.
35. Verhoeven T, Xi T, Schreurs R, Bergé S, Maal T. Quantification of facial asymmetry: A comparative study of landmark-based and surface-based registrations. J Craniomaxillofac Surg. 2016;44:1131–1136.
36. Codari M, Pucciarelli V, Stangoni F, et al. Facial thirds-based evaluation of facial asymmetry using stereophotogrammetric devices: Application to facial palsy subjects. J Craniomaxillofac Surg. 2017;45:76–81.
37. McWilliams W, Trombetta M, Werts ED, Fuhrer R, Hillman T. Audiometric outcomes for acoustic neuroma patients after single versus multiple fraction stereotactic irradiation. Otol Neurotol. 2011;32:297–300.
38. Steinhubl SR, Waalen J, Edwards AM, et al. Effect of a home-based wearable continuous ECG monitoring patch on detection of undiagnosed atrial fibrillation: The mSToPS randomized clinical trial. JAMA. 2018;320:146–155.
39. Freedman B, Camm J, Calkins H, et al.; AF-Screen Collaborators. Screening for atrial fibrillation: A report of the AF-SCREEN International Collaboration. Circulation. 2017;135:1851–1867.
40. Todorov A, Said CP, Oosterhof NN, Engell AD. Task-invariant brain responses to the social value of faces. J Cogn Neurosci. 2011;23:2766–2781.
41. Todorov A, Baron SG, Oosterhof NN. Evaluating face trustworthiness: A model based approach. Soc Cogn Affect Neurosci. 2008;3:119–127.
42. Stewart LH, Ajina S, Getov S, Bahrami B, Todorov A, Rees G. Unconscious evaluation of faces on social dimensions. J Exp Psychol Gen. 2012;141:715–727.
43. Langlois JH, Kalakanis L, Rubenstein AJ, Larson A, Hallam M, Smoot M. Maxims or myths of beauty? A meta-analytic and theoretical review. Psychol Bull. 2000;126:390–423.
44. Hartung F, Jamrozik A, Rosen ME, Aguirre G, Sarwer DB, Chatterjee A. Behavioural and neural responses to facial disfigurement. Sci Rep. 2019;9:8021.
45. Todorov A, Pakrashi M, Oosterhof NN. Evaluating faces on trustworthiness after minimal time exposure. Social Cogn. 2009;27:813–833.
46. Olson IR, Marshuetz C. Facial attractiveness is appraised in a glance. Emotion. 2005;5:498–502.
47. Rumsey N, Harcourt D. Body image and disfigurement: Issues and interventions. Body Image. 2004;1:83–97.
48. Broder HL, Smith FB, Strauss RP. Developing a behavior rating scale for comparing teachers’ ratings of children with and without craniofacial anomalies. Cleft Palate Craniofac J. 2001;38:560–565.