Patients with facial paralysis manifest different or changing facial states that may vary from flaccid to hypertonic/synkinetic or a mixture of the 2, and depending on the state, they require tailored and often disparate management.1 Traditionally, assessment of the extent of the facial paralysis has been based on an examination of different areas or zones of a patient’s face. These zones include the upper (brow, upper, and lower eyelids), middle (nose, nasolabial folds, lips, and commissures), and lower (lower lip and chin) facial areas.1 In each zone, many different treatments may be performed that range from supportive measures such as physical therapy to more invasive reanimation measures such as nerve grafting and muscle transfer. Unfortunately, across the range of these treatments, there is limited consensus on treatment effectiveness.2,3 In fact, functional outcomes following treatment tend to fall short of ideal due to a lack of synchronous symmetrical movements across the face.4
Current methods of assessing facial disability in these patients rely on subjective scales5–9 and standard 2D photography and videography from which isolated measures are made.10–13 Surgeons’ subjective evaluations of patients are obviously very important for clinical diagnosis; however, subjective evaluations tend to introduce bias.14,15 In this study, we propose a set of objective measures of facial disability that would supplement surgeons’ subjective assessments to more clearly define the extent of patients’ problems and more precisely assess treatment outcomes. Another problem when measuring the face is that 2D methods by their very nature may not capture the full range of facial soft-tissue movements.16 To overcome this problem, we previously validated novel landmark-based, 3D quantitative (objective) measures and dynamic modeling of facial soft tissues for use in treatment planning and the assessment of outcomes in patients with cleft lip and palate.17–22 These measures have direct applicability for assessing facial impairment and disfigurement in patients with facial soft-tissue paralysis. Thus, the aims of this observational study were 2-fold: (1) to demonstrate a method and measures specific for the quantification of impaired facial soft-tissue movements in patients with facial paralysis; and (2) to quantify the differences in magnitude and velocity of facial soft-tissue movements between patients with facial paralysis (who present at the onset of their paralysis) and control participants. It was hypothesized that the overall methodological approach would provide a more comprehensive, sensitive, and objective analysis of the severity of facial paralysis when compared with current methods and analyses.
MATERIALS AND METHODS
The study sample consisted of 2 groups of participants who were part of a prospective, ongoing, observational study (NIH Grant DE025295) designed to track the recovery of facial paralysis over time. The groups were patients with acute, unilateral, flaccid facial paralysis (Bell’s palsy; n = 20; mean age = 46.6 y, SD = 11.4; 8 males and 12 females) and “normal” control participants (n = 20; mean age = 41.2 y, SD = 18.8; 5 males and 15 females). The patients were recruited from the Facial Nerve Center at Massachusetts Eye and Ear Infirmary, and they were invited to participate in the study by their treating surgeon. Participants in the control group were invited to participate either by personal contact or as respondents to a posted flyer/advertisement and included patients being treated at Tufts University School of Dental Medicine. The participants were the first 20 subjects recruited in each group. All eligible participants were screened and recruited by telephone based on the selection criteria described in Table 1, and those who agreed to participate attended the Tufts University School of Dental Medicine Facial Animation laboratory for testing and data collection. Study consent and Health Insurance Portability and Accountability Act documents were approved by the Tufts Health Sciences Institutional Review Board.
Before testing, the research assistant explained the purpose of the study, and informed consent was obtained from each participant. As part of the testing, dynamic 3D facial movement data were collected from each participant. The patients were followed longitudinally, and their movements were recorded on 3 separate visits: within 6 weeks of onset of their symptoms (baseline visit), and then at 3 and 12 weeks after baseline. The control participants had their movements recorded at a single visit since there was little expectation that their movements would change substantially over a 12-week period. In this study, the baseline facial movement data for the patients (within 6 weeks of onset of paralysis) and the control participants were analyzed.
Data Collection and Processing
A Motion Analysis motion tracking system (Fig. 1) was used to measure the facial movements of each participant based on the methods of Trotman et al.21,22 The system had 8 Kestrel cameras positioned around the face (Fig. 1). Sixty-four retro-reflective markers were secured to specific facial soft-tissue landmarks of the participants. The cameras, together with the Motion Analysis Cortex software, captured the movements of the landmarks during different facial animations at 60 Hz for 4 seconds (Fig. 2). To capture 3D movement data, a minimum of 2 cameras is needed; however, because we were using the system for research purposes, we used a larger number of cameras. The participants were instructed to make 10 replications of each of 11 facial animations: brow raise (br), gentle eye closure (gec), tight eye closure (tec), “ee” sound (ee), “oo” sound (oo), natural smile (nsm), maximum smile (msm), maximum grimace (mgr), maximum lip purse (mlp), maximum cheek puff (mcp), and maximum mouth opening (mmo). The movement data then were stored for later off-line tracking by a research associate using the Cortex software. During the tracking, for each movement and landmark, a time series of 3D vectors (x, y, z) was recorded, where x, y, and z represented the landmark’s position in space at 1/60-second (60 Hz) intervals over the 4 seconds.
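Under these acquisition settings, each tracked replicate can be held as a simple array. The sketch below (Python with NumPy, an illustrative layout and not the Cortex software’s own file format) shows the assumed structure of one replicate:

```python
import numpy as np

# Assumed layout of one tracked replicate: 4 seconds at 60 Hz yields
# 240 frames, each holding (x, y, z) positions in mm for 64 landmarks.
FRAME_RATE_HZ = 60
DURATION_S = 4
N_LANDMARKS = 64

n_frames = FRAME_RATE_HZ * DURATION_S  # 240 frames per replicate

# One replicate of one animation (placeholder zeros instead of real data).
replicate = np.zeros((n_frames, N_LANDMARKS, 3))

# Time stamp of each frame in seconds (frame i occurs at i/60 s).
timestamps = np.arange(n_frames) / FRAME_RATE_HZ
```

With 11 animations and 10 replications each, a subject’s session would then consist of 110 such arrays.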
Post-tracking Computations and Analyses
Post-tracking computations were developed by the statistician on the project, Dr. Julian Faraway, to generate mean dynamic movements and vector plots of these movements. The methodology was developed previously,23 and software is available from https://github.com/julianfaraway/facer. Also, measures of maximum displacement, velocity/kinematics, and asymmetry of the landmark data were computed as described below.
The facial “rest” position was calculated for each participant by averaging the initial rest frames (captured before the participant made any movement) over the 11 animations. Then, for each animation, the frame with the maximum Procrustes distance from the mean initial rest frame was identified, and this maximum distance was recorded as the displacement for that animation, measured as the average distance moved by each landmark in millimeters (mm). Because larger faces might be expected to have greater movement, the facial movements for each animation were scaled to the averaged face for the entire sample. The maximum displacements of the facial landmarks then were averaged over the 10 replicates of each animation.
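A minimal sketch of this displacement computation is given below, assuming landmark arrays of shape (landmarks, 3) and substituting a Kabsch rigid alignment (translation + rotation) for the full Procrustes machinery of the facer software; the scaling to an averaged face described above is omitted, and the function names are illustrative rather than the authors’ implementation:

```python
import numpy as np

def rigid_align(source, target):
    """Kabsch alignment of source (n, 3) to target (n, 3): translation + rotation."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(u @ vt))  # guard against improper rotations
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    return src_c @ rot + target.mean(axis=0)

def mean_landmark_distance(frame, rest):
    """Average per-landmark distance (mm) of a frame from rest, after alignment."""
    aligned = rigid_align(frame, rest)
    return float(np.linalg.norm(aligned - rest, axis=1).mean())

def max_displacement(replicate, rest):
    """Maximum over frames of the mean landmark distance from the rest frame.

    `replicate` has shape (frames, landmarks, 3); returns the maximum
    displacement (mm) and the index of the frame at which it occurred.
    """
    dists = [mean_landmark_distance(frame, rest) for frame in replicate]
    return max(dists), int(np.argmax(dists))
```

The per-animation value would then be averaged over the 10 replicates, as described above.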
For each replicate of an animation, the unscaled Procrustes distances between successive frames of data were measured in millimeters (mm). Then, the 99th percentile of these distances was computed, with the velocity expressed as millimeters per second (mm/sec). The 99th percentile was chosen to control for possible outliers. The velocity measurement was averaged over the 10 replicates of each animation.
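The velocity measure can be sketched as follows; for simplicity, this version uses raw per-landmark distances between successive frames in place of the unscaled Procrustes distances of the study, so it is an approximation under that assumption:

```python
import numpy as np

FRAME_RATE_HZ = 60  # capture rate described in the text

def movement_velocity(replicate):
    """99th-percentile inter-frame displacement converted to mm/sec.

    `replicate` has shape (frames, landmarks, 3) in mm. The 99th-percentile
    cutoff controls for possible tracking outliers, as in the text.
    """
    # Per-landmark Euclidean distance between successive frames (mm per frame).
    step = np.linalg.norm(np.diff(replicate, axis=0), axis=2)
    # 99th percentile of all inter-frame distances, converted to mm/sec.
    return float(np.percentile(step, 99) * FRAME_RATE_HZ)
```

A uniform motion of 1 mm per frame at 60 Hz would, for example, yield a velocity of 60 mm/sec.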
For both the displacement and velocity measurements, the respective averages were calculated for the following facial regions.
- (1) For the entire facial area of the patients and controls.
- (2) For only the paralyzed side of the face of the patients. For this, the patients were standardized so the paralyzed side was always on the right and was compared with the corresponding right side of the face of the controls.
- (3) For the nonparalyzed side of the face of the patients. For this calculation, because of the standardization in (2), the nonparalyzed side was always the left and was compared with the corresponding left side of the face of the controls.
The patients’ faces were measured at the position of maximum displacement for each animation. The face of each patient was reflected left to right, and the right landmarks were relabeled as the left and the left landmarks as the right. Ordinary Procrustes analysis was used to match the original and reflected facial configurations, and the distance in millimeters (mm) was calculated. A difference of zero for this distance would represent complete symmetry, whereas distances increasing from zero would represent increasing asymmetry. The mean for this measurement was calculated for each patient and animation.
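This reflect-and-relabel asymmetry score can be sketched as below. The `pair_index` permutation (which swaps each left landmark with its right counterpart, midline landmarks mapping to themselves) is a hypothetical input, and a Kabsch rigid alignment again stands in for the ordinary Procrustes matching:

```python
import numpy as np

def rigid_align(source, target):
    """Kabsch alignment of source to target (translation + rotation)."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(u @ vt))
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    return src_c @ rot + target.mean(axis=0)

def asymmetry_score(frame, pair_index):
    """Mean distance (mm) between a face and its left-right reflection.

    Zero represents complete symmetry; larger values represent greater
    asymmetry, as described in the text. `pair_index` is a hypothetical
    permutation swapping left and right landmark labels.
    """
    # Mirror across the x = 0 plane (assumes x is the left-right axis).
    reflected = (frame * np.array([-1.0, 1.0, 1.0]))[pair_index]
    # Match the reflected configuration to the original, then average distances.
    aligned = rigid_align(reflected, frame)
    return float(np.linalg.norm(aligned - frame, axis=1).mean())
```

For a perfectly symmetric configuration, the reflected and relabeled landmarks coincide with the originals and the score is zero.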
General Statistical and Dynamic Statistical Modeling of Facial Movements
For each subject and each mean measurement of movement (displacement, velocity, and asymmetry), the differences between the patients with facial paralysis and the controls were assessed using 2-sample t tests. In addition, a statistical visual modeling comparison was generated of each patient’s mean movements compared with the mean control movements during each animation. Videos 1 and 2 are examples of the movements of 2 patients (patients “a” and “b”) during smiling (see video, Supplemental Digital Content 1, which displays patient “a” with left unilateral facial paralysis during the maximum smile animation, http://links.lww.com/PRSGO/A876; see video, Supplemental Digital Content 2, which displays patient “b” with less severe right unilateral facial paralysis during the maximum smile animation, http://links.lww.com/PRSGO/A877). Patient “a” presented with severe paralysis of the entire left side of the face, whereas patient “b” had a less severe paralysis of the right side of the face. Videos 3 and 4 show the statistical modeling comparison for each patient’s mean smile over the 10 smile replicates (red dots) compared with the mean smile of the 20 control participants over the 10 smile replicates (black dots), respectively (see video, Supplemental Digital Content 3, which displays dynamic statistical modeling for patient “a” during the smile animation, http://links.lww.com/PRSGO/A878; see video, Supplemental Digital Content 4, which displays dynamic statistical modeling for patient “b” during the smile animation, http://links.lww.com/PRSGO/A879). Similar dynamic comparisons can be generated for each animation.
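The group comparison reduces to a standard 2-sample t test on the per-subject means. A minimal NumPy-only version is sketched below, using the pooled (equal-variance) form; whether the authors used the pooled or Welch variant is not stated, so this is an assumption:

```python
import numpy as np

def two_sample_t(x, y):
    """Pooled-variance (equal-variance) 2-sample t statistic.

    x and y are 1-D arrays of per-subject mean measurements, e.g. the mean
    maximum displacement during one animation for patients vs controls.
    """
    nx, ny = len(x), len(y)
    # Pooled estimate of the common variance.
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    # Difference in group means divided by its standard error.
    return float((np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1.0 / nx + 1.0 / ny)))
```

In practice, `scipy.stats.ttest_ind` returns the same statistic together with its p value.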
RESULTS

The results comparing the maximum displacement of the entire face between the patients and controls (Table 2) demonstrated that the controls had significantly greater excursive facial movements during the maximum grimace (mgr) and maximum smile (msm) animations, whereas the gentle eye closure (gec) movements were greater for the patients. The comparison of the patients’ paralyzed side of the face with the corresponding side of the controls (Table 3) demonstrated that the controls had significantly greater excursive movements during the “ee” sound, tight eye closure (tec), maximum grimace (mgr), maximum lip purse (mlp), and smile (msm, nsm) animations. Only gentle eye closure (gec) was greater for the patients. There were no significant differences between the patients and controls for the nonparalyzed side of the face.
The results comparing the movement velocity between the patients and controls for the entire face (Table 4) demonstrated that the controls had significantly faster movements during the brow raise (br), “ee” sound, tight eye closure (tec), maximum grimace (mgr), maximum lip purse (mlp), and smile (msm, nsm) animations. The comparison of the paralyzed side of the patients’ faces with the corresponding side of the controls (Table 5) demonstrated that the controls had significantly faster movements for the same animations as for the entire face. There were no significant differences in movement velocity between the patients and controls for the nonparalyzed side of the face.
The results of the comparison between the patients’ and the control participants’ asymmetry scores at the maximum of the movement for each animation showed that the patients had significantly greater asymmetry than the controls for all the animations (Table 6). Figure 3 is a plot of the asymmetry scores for each patient at rest and at the maximum excursion for the smile, lip purse, grimace, and cheek puff animations, and demonstrates that the asymmetry scores for these animations were greater than the scores when the face was at rest. In addition, for each patient, the greatest asymmetry was always at the maximum of the smile animation, followed by the lip purse, grimace, and then the cheek puff animation. This pattern was evident even for those patients who had the lowest asymmetry scores (patients 12 and 17 in the plot). Of particular interest was the statistical modeling comparison. Consider the smiles of patients “a” and “b” in Videos 1 and 2, respectively. The statistical modeling comparisons for each patient are seen in Videos 3 and 4, where the mean of the 10 replicates of each patient’s smile (red dots) is superimposed on the mean smile of the 10 replicates of all 20 control participants (black dots). During the movement, the greater the differences between the corresponding red and black dots, the greater the paralysis for the patient.
Plots of the mean vectors of movement for the landmarks also were generated. Figure 4 shows the mean vectors of movement for the 64 facial landmarks of the control participants during the smile. The arrows give the direction of the landmark movement, and the length of the lines gives the amount of displacement. Similar normative plots can be generated for all the animations. Figures 5 and 7 show the plots of the mean vectors of landmark movement for the smiles of patients “a” and “b,” respectively. These plots emphasize that both the direction and magnitude of movements for the patients were altered. The plots in Figures 6 and 8 show the difference in the mean landmark positions between each patient and the controls at the maximum of the smile. The open circles represent the control mean maximum landmark positions, and the lines connect each landmark to its respective mean maximum position in the patient. The longer the line, the more the patient’s maximum movement differs from the control maximum movement, and the greater the patient’s paralysis in specific regions of the face.
DISCUSSION

The aims of this study were (1) to demonstrate a comprehensive dynamic set of analyses specific for patients with facial paralysis, especially as it relates to facial soft-tissue asymmetry; and (2) to evaluate facial soft-tissue movements in patients with unilateral facial paralysis using the analyses. Our previous dynamic analyses20–22 were used to analyze facial soft-tissue movements in patients with cleft lip/palate and were limited to the area of the lower face, specifically the upper and lower lips and chin, where the effects of scarring as a result of the cleft lip repair were most apparent. A dynamic analysis for patients with facial paralysis, as presented here, needed to be more comprehensive to detect the full range of paralysis. In addition, supplemental analyses beyond the dynamic “visual” modeling used for the patients with cleft lip/palate were developed, such as the facial plots of movement vectors, to more specifically identify regional paralysis. The clinical utility of this approach is the ability to measure the extent of facial paralysis across the entire face and determine the precise limits of paralyzed regions. The measures can be used to determine and compare the impact of, for example, techniques for facial reanimation. The information provided on movement velocity was exploratory, and although treatment and/or surgery may not be able to directly affect the velocity of movement, the velocity together with the plots of the movement vectors provided a more complete picture of patients’ paralysis.
In this regard, the dynamic modeling comparison of the mean smile of each patient with the mean smile of the control subjects proved a sensitive assay and clearly demonstrated the areas of facial paralysis. For example, patient “a” had little or no movement of the entire left side of the face, whereas patient “b” had less severe paralysis focused on the right forehead and cheek regions of the face. The dynamic modeling demonstrated the particular areas or regions of the face that were affected and the effects of synkinesis. This site-specific identification of differences was most evident for patient “a” around the circumoral soft tissues of the upper lip, which were distorted and pulled to the nonparalyzed side of the face. Thus, to the observer, even the nonparalyzed side of the face was functioning abnormally, with compensatory movements that further compounded the patient’s esthetic and functional problems. This finding was similar to that observed for soft-tissue impairment in patients with repaired cleft lip/palate, who also demonstrated compensatory movements in regions of the face that were assumed to be unaffected by the cleft muscle defect.22
Plots of the facial movement in terms of vectors were generated to complement the dynamic modeling comparisons. The plot for the normative (control) smile (Fig. 4) has a characteristic pattern. A similar plot of the smile of patient “a” (Fig. 5) demonstrated that the vectors for the landmarks on the right, nonparalyzed side of the face had directions that were very different from those of the mean control vectors in Figure 4, with reduced displacements and obvious synkinesis of the upper lip and part of the lower lip and chin. For patient “b” with right facial paralysis (Fig. 7), the directions of the vectors (arrows) were close to normal, but the excursive movement (length of the lines) on the right side of the face was reduced when compared with the control smile. The plots in Figures 6 and 8 allow the treating physician to isolate the specific facial regions that were paralyzed; each plot is a comparison of the mean facial landmark positions between the patient and the controls at the maximum of the smile. These plots are very instructive for the clinician to track outcomes of treatments, especially for those patients in need of reanimation surgery. Also, similar plots can be produced for each animation.
The numerical findings of greater excursive movements and greater velocity of movement for the control participants versus the patients were not surprising. It was interesting, however, that when comparing the paralyzed side of the patients’ faces with the corresponding side of the controls, the findings for displacement or excursive movement were much more pronounced, indicating greater sensitivity of the analysis with this approach versus comparing the entire face. Only for the gentle eye closure animation were the movements greater in the patients when compared with the controls. Close observation of the patients and controls demonstrated that although the patients had varying degrees of paralysis, they recruited more facial muscles on the nonparalyzed side of the face and had greater magnitude of muscle movement to perform gentle eye closure, whereas the controls had an effortless movement. Facial asymmetry also was greater for the patients. This asymmetry was greatest at the maximum of the different animations versus when the face was in repose. What was not expected was that the asymmetry followed a hierarchical order for each patient, with the greatest asymmetry for the smile followed by the lip purse, the grimace, and then the cheek puff animation. This finding may reflect the complexity of muscle movements during these different animations, with a greater number of muscles over the face recruited during the smile and the least during the cheek puff.
Concerning the methodology used in this study, 2 additional issues are addressed here. The first is the complexity and cost of the motion capture system. In this regard, the landmark-based system used in this study to capture the movement data is not new; Frey et al.24 used a similar tracking system to study facial paralysis in 1994. Since then, the technology has improved greatly, and several companies produce these systems for many applications.25 In this study, we used an 8-camera system to accommodate our research needs for multiple applications and different patient populations. Our system uses updated hardware and technology, and we have advanced our approach for the analysis of the dynamic data. As stated previously, a minimum of 2 cameras is needed to capture 3D motion data; however, to ensure that the face is fully captured, we recommend that at least 3 cameras be used. Such a setup would be easily reproducible and cost-effective. Moreover, this technology is becoming less expensive, and the entry of additional manufacturers and different systems is tending to drive costs lower. A newer alternative to landmark-based motion capture is systems that capture the movement of surfaces, either of the face or of other regions of the body; these systems are more expensive and still in development.25 The second issue is the time required for data collection. Data collection with the system we used took approximately 40 minutes per subject, which included the identification and placement of the 64 landmarks. No participants complained about the time that data collection took. In addition, the post-collection tracking of each patient’s data took approximately 60–90 minutes.
These times, however, were based on the large amount of data that we chose to collect for our research: the tracking of landmarks during 12 animations, with each animation repeated 10 times per subject (120 tracked files per subject). The clinician may choose to collect fewer repetitions of fewer animations; for example, collecting half the number of repetitions of the animations would reduce data collection and tracking time by half.
CONCLUSIONS

Dynamic 3D modeling of critical facial landmarks appears to be an effective tool for assessing facial paralysis. Not only does it provide precise profiles of zone-specific asymmetries, but it also yields customized reporting that highlights areas of importance for individual patients. Additionally, applying the tool to patients during the recovery phase shows its sensitivity to changes over time and will yield quantitative data on the rate and completeness of recovery that have been elusive with simpler modalities.
REFERENCES

1. Hohman MH, Hadlock TA. Etiology, diagnosis, and management of facial palsy: 2000 patients at a facial nerve center. Laryngoscope. 2014;124:e283–e293.
2. Hadlock T. Facial paralysis: research and future directions. Facial Plast Surg. 2008;24:260–267.
3. Boahene K. Reanimating the paralyzed face. F1000Prime Rep. 2013;5:49.
4. Kim SW, Heller ES, Hohman MH, et al. Detection and perceptual impact of side-to-side facial movement asymmetry. JAMA Facial Plast Surg. 2013;15:411–416.
5. Coulson SE, Croxson GR, Adams RD, et al. Reliability of the “Sydney,” “Sunnybrook,” and “House Brackmann” facial grading systems to assess voluntary movement and synkinesis after facial nerve paralysis. Otolaryngol Head Neck Surg. 2005;132:543–549.
6. Berg T, Jonsson L, Engström M. Agreement between the Sunnybrook, House-Brackmann, and Yanagihara facial nerve grading systems in Bell’s palsy. Otol Neurotol. 2004;25:1020–1026.
7. House JW. Facial nerve grading systems. Laryngoscope. 1983;93:1056–1069.
8. House JW, Brackmann DE. Facial nerve grading system. Otolaryngol Head Neck Surg. 1985;93:146–147.
9. Denlinger RL, VanSwearingen JM, Cohn JF, et al. Puckering and blowing facial expressions in people with facial movement disorders. Phys Ther. 2008;88:909–915.
10. Linstrom CJ. Objective facial motion analysis in patients with facial nerve dysfunction. Laryngoscope. 2002;112:1129–1147.
11. Linstrom CJ, Silverman CA, Colson D. Facial motion analysis with a video and computer system after treatment of acoustic neuroma. Otol Neurotol. 2002;23:572–579.
12. Schmidt KL, Cohn JF, Tian Y. Signal characteristics of spontaneous facial expressions: automatic movement in solitary and social smiles. Biol Psychol. 2003;65:49–66.
13. Schmidt KL, VanSwearingen JM, Levenstein RM. Speed, amplitude, and asymmetry of lip movement in voluntary puckering and blowing expressions: implications for facial assessment. Motor Control. 2005;9:270–280.
14. Trotman C-A, Phillips C, Essick GK, et al. Functional outcomes of cleft lip surgery. Part I: Study design and surgeon ratings of lip disability and the need for lip revision. Cleft Palate Craniofac J. 2007;44:598–606.
15. Trotman CA, Phillips C, Essick GK, et al. Functional outcomes of cleft lip surgery. Part I: Study design and surgeon ratings of lip disability and need for lip revision. Cleft Palate Craniofac J. 2007;44:598–606.
16. Trotman CA, Gross MM, Moffatt K. Reliability of a three-dimensional method for measuring facial animation: a case report. Angle Orthod. 1996;66:195–198.
17. Gross MM, Trotman CA, Moffatt KS. A comparison of three-dimensional and two-dimensional analyses of facial motion. Angle Orthod. 1996;66:189–194.
18. Ritter K, Trotman CA, Phillips C. Validity of subjective evaluations for the assessment of lip scarring and impairment. Cleft Palate Craniofac J. 2002;39:587–596.
19. Trotman CA, Phillips C, Faraway JJ, et al. Association between subjective and objective measures of lip form and function: an exploratory analysis. Cleft Palate Craniofac J. 2003;40:241–248.
20. Trotman C-A, Faraway JJ, Phillips C. Visual and statistical modeling of facial movement in patients with cleft lip. Cleft Palate Craniofac J. 2005;42:245–254.
21. Trotman C-A, Faraway JJ, Losken HW, et al. Functional outcomes of cleft lip surgery. Part II: Quantification of nasolabial movement. Cleft Palate Craniofac J. 2007;44:607–616.
22. Morrant DG, Shaw WC. Use of standardized video recordings to assess cleft surgery outcome. Cleft Palate Craniofac J. 1996;33:134–142.
23. Faraway JJ, Trotman CA. Shape change along geodesics with application to cleft lip surgery. J R Stat Soc Ser C Appl Stat. 2011;60:743–755.
24. Frey M, Jenny A, Giovanoli P, et al. Development of a new documentation system for facial movements as a basis for the international registry for neuromuscular reconstruction in the face. Plast Reconstr Surg. 1994;93:1334–1349.
25. Tzou CH, Frey M. Evolution of 3D surface imaging systems in facial plastic surgery. Facial Plast Surg Clin North Am. 2011;19:591–602, vii.
Supplemental Digital Content
Copyright © 2018 The Authors. Published by Wolters Kluwer Health, Inc. on behalf of the American Society of Plastic Surgeons. All rights reserved.