The Effect of High-Fidelity Simulation on Educational Outcomes in an Advanced Cardiovascular Life Support Course

Rodgers, David L., EdD; Securro, Samuel Jr, EdD; Pauley, Rudy D., EdD

doi: 10.1097/SIH.0b013e3181b1b877
Empirical Investigations

Introduction: The use of high-fidelity simulation has been studied in many healthcare education areas. However, the use of this instructional technology in the American Heart Association (AHA) Advanced Cardiovascular Life Support (ACLS) course has not been extensively reported, despite this program being one of the most widely taught standardized medical courses in the United States.

Methods: This study examined high-fidelity versus low-fidelity simulation in the context of an AHA ACLS course to determine subjects’ educational outcomes as judged by expert raters reviewing videos of subjects performing a simulated cardiac arrest event immediately after the conclusion of the course. A purposeful sample of 34 subjects was enrolled in one of two ACLS classes. One class used high-fidelity simulation (n = 16), whereas the other used low-fidelity simulation (n = 18).

Results: The high-fidelity simulation group had a higher overall mean rank score on expert rater video review (M = 59.55 versus M = 44.34). The difference was statistically significant (P = 0.010, z = −2.592). On item-level analysis of the instrument, 9 of 14 items reached significance (P < 0.05).

Conclusions: Expert raters judged students in a high-fidelity simulation-based AHA ACLS course as more competent than students in a low-fidelity course. On item-level analysis, items focused on manual tasks or actions in the first 1 to 2 minutes of the cardiac arrest event were more likely to be nonsignificant. As the scenario grew longer and more complex, expert rater scores of the high-fidelity trained team leaders’ confidence, knowledge, and treatment decisions were significantly higher than those of the low-fidelity team leaders.

From the Marshall University Graduate School of Education and Professional Development, South Charleston, West Virginia.

Reprints: David L. Rodgers, EdD, Marshall University Graduate School of Education and Professional Development, 100 Angus E. Peyton Drive, South Charleston, WV, 25303 (e-mail: rodgers1@marshall.edu).

Author Rodgers discloses consultant relationship with Laerdal Medical.

The American Heart Association (AHA) Advanced Cardiovascular Life Support (ACLS) training programs have been suggested as a potential use for high-fidelity patient simulation.1–3 However, despite these suggestions, the application of high-fidelity patient simulation into one of the most common and long-standing multidisciplinary medical training programs in the United States has not been extensively reported.

The AHA ACLS training program, first conducted in 1974,4,5 is now a common training program used to teach advanced level healthcare providers the skills and knowledge needed to respond to critical cardiopulmonary emergencies. Since its development, the ACLS course has served as the template for other short courses in resuscitation, including the American College of Surgeons Advanced Trauma Life Support course.6

Franklin1 reported that of all the short-course certification programs such as ACLS, ATLS, PALS, and other similar programs, ACLS has the greatest potential for high-fidelity patient simulation use. He stated, “As sites gain access to an advanced patient simulator, the training and testing phases of ACLS are becoming more realistic. There are several advantages to the use of this technology, particularly with regard to a course such as ACLS (p. 399).” Schumacher3 also reported that simulation could be an “effective strategy and tool for teaching advanced cardiac life support (p. 174).” Although descriptive reports of using simulation in an ACLS course have been published,7,8 no experimental studies on the use of high-fidelity simulation in an AHA ACLS course have been reported.

Several studies have been conducted using high-fidelity simulation in ACLS-like courses or using ACLS-like scenarios.9–15 Three of these published reports covered ACLS course material in a manner that closely resembled an AHA ACLS course. These investigations found that high-fidelity simulation in ACLS-like training produced better educational outcomes or higher compliance with ACLS guidelines than lower fidelity simulation.13–15 However, none was a complete AHA ACLS course including all the elements required for issuance of an AHA course completion card (D. Wayne, personal communication, 2008).

High-fidelity patient simulators are expensive educational tools that must be used appropriately to achieve their full utility. The AHA ACLS course represents the fundamental educational foundation of advanced emergency cardiovascular care. Determining whether the use of high-fidelity simulators in ACLS is efficacious in regard to improved learning outcomes is significant for training centers as they determine the best use of their often limited resources.

This study is unique in that it was conducted in an AHA ACLS course with the intent of issuing course completion certification to students who qualified. The study proposed the following hypothesis:

Students who use high-fidelity patient simulators will demonstrate greater competence on postintervention skills assessments graded by expert raters than students who use low-fidelity mannequins in an AHA ACLS program.

METHODS

A causal-comparative design was used to compare educational outcomes between the groups. Random assignment of subjects to groups was not possible because of limited scheduling options for subjects on the two weekends when the study was conducted. Subjects self-selected into groups but were blind to the differences in treatment between the groups.

Sample

The sample for this study was a purposeful sample of senior nursing students enrolled at four collegiate nursing education programs in central West Virginia. Subjects were recruited through communications with nursing program directors and faculty at the participating institutions. Nursing students were purposefully selected because they were naive to both ACLS content and simulation technology.

Each participant was informed that they would be participating in the course as part of an experimental study evaluating teaching methods. Subjects were not told which specific methods were being investigated. The appropriate institutional review board approvals were obtained before the study, and informed consent was obtained from each participant. Because pretest and posttest scores on the ACLS written evaluation had to be matched, subject anonymity was not possible in all areas of this study; however, subject confidentiality was maintained. Subjects who successfully completed the ACLS course, including the posttest, received ACLS certification.

Initial participation included 37 subjects with 20 in the low-fidelity mannequin group and 17 in the high-fidelity simulation group. Distribution of the subjects according to nursing school for the low-fidelity mannequin group was six from school A, six from school B, seven from school C, and one from school D. Distribution of the subjects according to nursing school for high-fidelity simulation group was nine from school A, two from school B, and six from school C. Both groups presented with similar demographic data (Table 1).

Table 1

The low-fidelity mannequin group had two subjects who did not complete the program. One subject became ill during the course and withdrew, and another subject withdrew due to a family issue. The final n for the low-fidelity mannequin group was 18. The high-fidelity simulation group had one student withdraw due to illness on the morning of the second course day. The final n for the high-fidelity simulation group was 16. The combined n for the study was 34.

A pretest was administered to both groups to determine knowledge equivalency before the start of the study courses. The pretest was one of two versions of the ACLS Written Evaluation. The mean score for the low-fidelity mannequin group was 72.0 (SD = 9.60), and the mean score for the high-fidelity simulation group was 61.5 (SD = 10.82). There was a statistically significant difference in subjects’ ACLS knowledge before the study courses that favored the low-fidelity mannequin group, t(32) = 3.00, P = 0.005.

Intervention

The intervention for this study was the AHA ACLS course (Appendix A). The independent variable was the type of educational technology used, with one group receiving ACLS training using high-fidelity simulators and the other group using low-fidelity simulation. The course was conducted in accordance with the rules and requirements of the AHA Program Administration Manual16 and in adherence to additional requirements found in the AHA ACLS Instructor Manual.17 Experienced ACLS Instructors taught the courses.

All components of the two intervention courses were identical with one exception. In one course, high-fidelity patient simulators (SimMan, Laerdal Medical, Stavanger, Norway) were used with all features of the simulators activated and accessible to the subjects, including palpable pulses, chest excursion on breathing, and mannequin-generated voice. Subjects had to acquire all clinical information needed for completion of scenarios from the simulator. In the other course, the high-fidelity patient simulators were not activated, with the exception of the electrocardiograph (ECG) function, which emulated a basic rhythm generator commonly used in ACLS courses. In this state, the devices did not function in a high-fidelity manner; they served as the static, low-fidelity mannequins traditionally used in ACLS courses. Subjects had to obtain a significant portion of the clinical information needed for the scenario by asking questions of the ACLS Instructor. In both groups, all activities such as cardiopulmonary resuscitation (CPR), defibrillation, and medication administration were performed.

One key element in the successful use of high-fidelity patient simulation is the debriefing process that follows the practice scenario. The simulator used in this study had audio/video playback of the scenario linked to the key performance objectives. Because this investigation focused on the impact of simulation technology on learning outcomes, both groups participated in instructor-facilitated debriefings at the conclusion of teaching scenarios. The format of the debriefings was the same except for the source of subjects’ performance information. For the high-fidelity simulation group, the debriefing used all the information resources available from the simulator, including simulator-generated logs and video. For the low-fidelity mannequin group, the debriefing information resource was a written log of the scenario. Providing a debriefing opportunity to both groups further limited the differences between the groups to the use of technology.

Subjects attended both days of a 2-day ACLS Provider Course. After the final evaluation scenarios, all subjects had an additional skills performance recorded on video. These scenarios were performed with the simulator fully activated for both groups to allow expert raters to judge the ability of subjects to transfer resuscitation skills and knowledge to simulated life-like situations. Because the low-fidelity mannequin group did not have experience with the simulator in the activated mode, an orientation to the simulator was conducted before this evaluation and included an opportunity to interact with the simulator in a noncardiac arrest scenario. A panel of three expert raters scored each subject using a modified ACLS Mega Code Performance Score Sheet. Each expert rater was an experienced ACLS instructor with either AHA ACLS Regional Faculty or Training Center Faculty status and did not participate as an instructor in either course. Results from the three expert raters were combined to create mean scores for each subject on each item of the ACLS Mega Code Performance Score Sheet, so that each subject had one set of scores to be used in the comparisons. These mean scores were then compiled to produce an overall mean rank score for each group.
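
The score aggregation described above can be sketched as follows; this is an illustrative reconstruction rather than the authors’ procedure in code, and the array shapes, values, and variable names are hypothetical assumptions.

```python
# Illustrative sketch of the scoring pipeline described above; the ratings
# below are hypothetical placeholders, not study data.
import numpy as np

# scores[rater, subject, item]: three raters each score every subject on the
# modified Mega Code items (1 = not competent ... 7 = highly competent)
rng = np.random.default_rng(0)
scores = rng.integers(1, 8, size=(3, 5, 14)).astype(float)  # hypothetical

# Step 1: average the three raters so each subject has one score per item
per_subject_item = scores.mean(axis=0)        # shape: (subjects, items)

# Step 2: collapse the items to one overall mean score per subject; these
# subject-level means feed the group-level rank comparison reported later
per_subject_overall = per_subject_item.mean(axis=1)
print(per_subject_overall)
```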

The evaluation instrument used in this study was a modified version of the AHA ACLS Mega Code Performance Score Sheet (Appendix B). The ACLS Mega Code Performance Score Sheet is the standard skill evaluation instrument used in all ACLS courses. It was modified for use in this study by changing the item responses from yes/no responses to a 7-point scale ranging from 1 (not competent) to 7 (highly competent). Additional modifications consolidated common objectives from four different ACLS Mega Code Performance Score Sheets into a single document. Expert rater responses were solicited on two additional items: overall team leader performance and team functioning. Pilot testing of this modified form was conducted with ACLS course instructors before the intervention courses to determine ease of use and clarity.

The ACLS Written Examination was used as a written pretest and posttest instrument. There are two versions of the ACLS Written Examination. One version was used as a pretest, and the other version was used as a posttest. The ACLS Written Examination is provided by the AHA and is the written evaluation instrument used in all ACLS courses.

Statistical Analysis

The ACLS Mega Code Performance Score Sheet instrument produced an overall mean score for each group. Because the data were ordinal and the samples were small and nonrandomized, parametric assumptions were not met; therefore, the Mann-Whitney U test was chosen to compare differences in overall ranks between the groups, with significance set at P < 0.05. In addition to analyzing the overall mean scores for the ACLS Mega Code Performance Score Sheet, item-level analysis was performed to determine whether there were differences in individual skills between the two groups, with significance also set at P < 0.05.
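
A minimal sketch of this group comparison in Python, assuming hypothetical subject-level mean scores (the published report gives only group-level results):

```python
# Sketch only: the score vectors are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

low_fidelity = np.array([4.1, 4.5, 3.9, 5.0, 4.2, 4.8])   # per-subject mean scores (hypothetical)
high_fidelity = np.array([5.2, 5.6, 4.9, 6.0, 5.1, 5.8])  # per-subject mean scores (hypothetical)

# Two-sided Mann-Whitney U test comparing the two independent groups
u_stat, p_value = stats.mannwhitneyu(low_fidelity, high_fidelity, alternative="two-sided")
print(f"U = {u_stat}, P = {p_value:.3f}")
```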

Internal consistency of the evaluation instrument was determined with Cronbach’s α. Pearson r² correlations between rater pairs were calculated to assess interrater reliability.
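
The reliability statistics can be reproduced along these lines; this is an illustrative sketch with hypothetical ratings, and the helper function is ours, not drawn from the study.

```python
# Illustrative sketch: rows are subjects, columns are the three expert raters
# (hypothetical values, not study data).
import numpy as np

ratings = np.array([
    [4.5, 4.0, 4.8],
    [5.5, 5.0, 5.2],
    [3.8, 4.2, 4.0],
    [6.0, 5.5, 5.8],
])

def cronbach_alpha(scores):
    """Cronbach's alpha treating each rater as an 'item'."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print("Cronbach's alpha:", round(cronbach_alpha(ratings), 3))

# Pairwise interrater agreement reported as Pearson r^2
for i, j in [(0, 1), (0, 2), (1, 2)]:
    r = np.corrcoef(ratings[:, i], ratings[:, j])[0, 1]
    print(f"raters {i + 1} and {j + 1}: r^2 = {r ** 2:.2f}")
```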

Pretest scores, posttest scores, and pretest-to-posttest differences on the ACLS Written Examination were also reviewed. A t test for independent samples was used to determine statistical significance, which was set at the P < 0.05 level.
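
For completeness, a sketch of the between-group comparison of written-test gains, again using hypothetical values:

```python
# Sketch only: gain scores below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

low_fidelity_gain = np.array([12, 18, 10, 20, 15])   # pretest-to-posttest gains (hypothetical)
high_fidelity_gain = np.array([25, 30, 27, 32, 28])  # pretest-to-posttest gains (hypothetical)

# Independent-samples t test on the gain scores, two-sided, significance at P < 0.05
t_stat, p_value = stats.ttest_ind(low_fidelity_gain, high_fidelity_gain)
print(f"t = {t_stat:.3f}, P = {p_value:.3f}")
```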

RESULTS

The hypothesis was tested using the modified ACLS Mega Code Performance Score Sheet. The expert raters completed this score sheet while watching audio and video recordings of each group working through a cardiac arrest scenario with patient simulation. Recordings were randomized, and the expert raters were blinded to group assignment. Cronbach’s α for the expert raters’ scoring was 0.832, indicating a high degree of internal consistency. Pearson r² correlations were: raters 1 and 2, 0.54; raters 1 and 3, 0.24; and raters 2 and 3, 0.44.

Descriptive statistics on the scores given by the expert raters for each group are shown in Table 2.

Table 2

The Mann-Whitney U mean rank score for the low-fidelity mannequin group was 44.34, and the mean rank score for the high-fidelity simulation group was 59.55. The difference was statistically significant (P = 0.010, z = −2.592). Individual item scores were also calculated and are reported in Table 3. All items showed a higher mean rank for the high-fidelity simulation group than for the low-fidelity mannequin group. Nine of the 14 items showed statistically significant differences. The nine items were:

Table 3

  • Item 1—The team leader assured that high-quality CPR was in progress.
  • Item 5—The team leader recognized the initial ECG rhythm.
  • Item 8—The team leader followed the appropriate ACLS algorithm.
  • Item 9—The team leader recognized the ECG rhythm changes.
  • Item 10—The team leader provided appropriate post arrest care.
  • Item 11—The team leader demonstrated confidence.
  • Item 12—The team leader appeared knowledgeable.
  • Item 13—Expert rater overall feeling about this team leader’s performance.
  • Item 14—Expert rater overall feeling about this team’s performance.

For both groups, there was significant improvement in knowledge on the written posttest scores compared with the written pretest scores. For the low-fidelity mannequin group, the pretest mean was 72.00 (SD = 9.60) and the posttest mean was 87.78 (SD = 9.05), demonstrating that the low-fidelity course improved cognitive knowledge, t(17) = −5.984, P < 0.001. For the high-fidelity simulation group, the pretest mean was 61.50 (SD = 10.82) and the posttest mean was 90.00 (SD = 7.59), demonstrating that the high-fidelity course improved cognitive knowledge, t(15) = −10.442, P < 0.001.

Neither group performed significantly better than the other on the posttest (P = 0.447). However, when the amount of improvement from pretest to posttest was compared, the high-fidelity simulation group’s gain in cognitive knowledge was significantly greater than that of the low-fidelity mannequin group, t(32) = −3.348, P = 0.002. The low-fidelity group improved their scores by a mean of 15.78 (SD = 11.19), and the high-fidelity group improved by a mean of 28.5 (SD = 10.92). This improvement negated the significant precourse knowledge advantage that the low-fidelity mannequin group had on the pretest.

All participants achieved course requirements for successful completion of ACLS and were awarded AHA ACLS certification cards. Three participants in the low-fidelity group and one participant in the high-fidelity group required remediation on the final written evaluation before successful completion.

DISCUSSION

The results for the evaluation instrument as a whole supported the hypothesis that students who use high-fidelity patient simulators demonstrate greater competence on postintervention skills assessments graded by expert raters than students who use low-fidelity mannequins in an AHA ACLS program. On reviewing the findings at the item level, several significant differences favoring the high-fidelity simulation group over the low-fidelity mannequin group emerged. Although all 14 items scored higher for the high-fidelity simulation group, nine items showed statistically significant differences between the groups favoring the high-fidelity simulation group.

There were some commonalities among the significant and nonsignificant items. Activities that required psychomotor skills to be managed by the team or team leader (attaching the monitor, basic airway management, and using defibrillation) did not generate significant differences between the groups. Another commonality among the nonsignificant items was that interventions and actions in the opening minutes of the scenarios did not differ significantly between the groups; this included the ability of the team leader to assign roles and order the correct treatment for the initial rhythm. Actions that took place later in the scenario, such as appropriately managing subsequent ECG rhythm changes and postresuscitation care, significantly favored the high-fidelity simulation group.

Expert rater scoring of basic psychomotor skills in a cardiac arrest showed similar results between the high-fidelity simulation and low-fidelity mannequin groups. Additionally, high-fidelity and low-fidelity simulation did equally well in teaching team leaders the knowledge and skills to manage the opening minutes of a cardiac arrest. However, as the event progressed and the complexity of the scenario increased through rhythm changes and the need for second-line therapies, high-fidelity simulation proved better than low-fidelity simulation as demonstrated by the knowledge and actions of the team leader. Other authors have suggested that full-environment high-fidelity simulation may not be needed for all cognitive learning objectives.18

High-fidelity simulation is not routinely used for teaching basic skills such as basic airway management, chest compressions, and defibrillation. These data indicate that lower fidelity mannequins may be as efficacious as higher fidelity mannequins in teaching basic-level resuscitation skills. However, as the situation evolved and became more complex, the high-fidelity simulation group was viewed as significantly more knowledgeable and capable of managing the scenario. The scenarios used in the ACLS courses also required the integration of cognitive knowledge, basic psychomotor skills, and critical thinking rather than examining each as an individual function. Simulation has been noted as an effective means to integrate these three skill sets.19

One explanation of why the high-fidelity group scored higher relates to the simulator-supplied cues and to the level of psychologic fidelity in the learning simulations. Through the use of a higher level of fidelity, one group had a more immersive learning experience. Without the need for instructors to supply cues such as the presence of breathing or pulses, the learners in the high-fidelity group were able to concentrate their activities on the simulated patient without the distraction of requesting clinical data from the instructor. Other authors have noted that when adequate and correct cues are supplied by the simulator, the level of psychologic fidelity in the simulation is raised.20,21

Despite starting at a significant disadvantage in baseline ACLS knowledge as determined by the pretest, the high-fidelity simulation group improved cognitive knowledge on the written evaluation at a significantly higher rate than the low-fidelity mannequin group. This improvement resulted in written posttest scores that showed both groups nearly even in their ACLS knowledge. This improvement in knowledge, combined with the expert raters’ determination that the high-fidelity simulation group demonstrated significantly more overall knowledge and confidence on item-level analysis, supports high-fidelity simulation as an effective means for learning advanced emergency cardiovascular care.

The design of the investigation had several limitations. It was conducted at one AHA Training Center; there are more than 3600 AHA Training Centers in the United States. Although the AHA Emergency Cardiovascular Care programs, including ACLS, are regulated to ensure consistency, generalization of this study to the greater population of ACLS courses nationwide is limited. The sample was small, with only 34 participants enrolled, and was limited to one type of healthcare provider (senior nursing students) with limited healthcare experience. Generalizability to other healthcare professions and to healthcare providers with varying levels of experience is therefore limited. Although subjects were blinded to the intervention and demographic data showed both groups were highly homogeneous, the study was causal comparative and not randomized. Finally, although the low-fidelity group had the opportunity to learn how to use the simulator in high-fidelity mode before testing, the high-fidelity group had more practice with the simulator in that mode; the impact of this additional practice was not determined.

The findings suggest several opportunities for future research. This study examined an immediate posttest after the intervention programs but did not address knowledge and skill retention; determining whether there are lasting effects on knowledge and skills at later time points after the intervention would be valuable. Although all subjects were in their final semester of nursing education, the high-fidelity group had a higher percentage of students from 2-year nursing programs, and exploring whether this difference influenced the outcome would be warranted. Only one variable was manipulated: simulation technology. Examining other variables often associated with high-fidelity simulation, such as the realism of the learning environment, the realistic look of the mannequin, and the use of a full range of actual clinical equipment, may be useful. Results showed a difference in the efficacy of using simulation for basic versus advanced skills, and additional research focusing on this difference would be valuable. Finally, the technology available to the high-fidelity group was present both in the scenario and in the debriefing; future research could separate these areas of technology.

High-fidelity patient simulation is an expensive resource, and finding the most appropriate areas in which to use this technology is important for healthcare education directors. For certain skills, mannequins of lower fidelity may be a more cost-effective option without a loss of efficacy. Conversely, there are skills and knowledge that can be enhanced by using high-fidelity simulation.

The results of this study suggest that:

  • Psychomotor skills such as CPR, basic airway management, defibrillation, and application of monitoring equipment are learned equally well with either high-fidelity or low-fidelity simulation.
  • Observed knowledge of cardiac arrest management is improved with the use of high-fidelity simulation.
  • Observed confidence of healthcare providers as demonstrated in simulated cardiac arrest events is improved with high-fidelity simulation.
  • As cardiac arrest scenarios progressed and became more complex, the high-fidelity simulation group differentiated itself by showing significantly better observed performance capabilities than the low-fidelity mannequin group.

High-fidelity simulation had a greater impact on observed student knowledge after the first minutes of a simulated cardiac arrest event. High-fidelity simulation also improved posttest scores over pretest scores at a higher rate than did low-fidelity simulation. High-fidelity simulation has a role in ACLS. However, learning stations that emphasize basic skills such as CPR and basic airway management may be equally well addressed with mannequins of lower fidelity.

ACKNOWLEDGMENTS

The authors wish to thank Barbara McKee, David Matics, Louis Robinson, and Katrina Craddock for their assistance with this study and the instructors of the Charleston Area Medical Center Health Education and Research Institute ACLS program for their participation.

Partial funding for this study was supplied by the Charleston Area Medical Center Health Education and Research Institute Research Appropriations Committee.

REFERENCES

1. Franklin GA. Simulation in life support protocols. In: Loyd GE, Lake CL, Greenburg RB, eds. Practical Health Care Simulations. Philadelphia, PA: Elsevier/Mosby; 2004:393–404.
2. Kapur PA, Steadman RH. Patient simulator competency testing: ready for takeoff? Anesth Analg 1998;86:1157–1159.
3. Schumacher L. Simulation in nursing education: nursing in a learning resource center. In: Loyd GE, Lake CL, Greenburg RB, eds. Practical Health Care Simulations. Philadelphia, PA: Elsevier/Mosby; 2004:169–175.
4. Carveth SW. Standards for cardiopulmonary resuscitation and emergency cardiac care. JAMA 1974;227:796–797.
5. Carveth SW, Burnap TK, Bechtel J. Training in advanced cardiac life support. JAMA 1976;235:2311–2315.
6. Collicott PE, Hughes I. Training in advanced trauma life support. JAMA 1980;243:1156–1159.
7. Hwang JCF, Bencken B. Integrating simulation with existing clinical education programs: dream and develop while keeping the focus on your vision. In: Kyle RR, Murray WB, eds. Clinical Simulation: Operations, Engineering, and Management. Burlington, MA: Academic Press; 2008:95–105.
8. Ferguson S, Beeman L, Eichorn M, Jaramillo Y, Wright M. Simulation in nursing education: high-fidelity simulation across clinical settings and educational levels. In: Loyd GE, Lake CL, Greenburg RB, eds. Practical Health Care Simulations. Philadelphia, PA: Elsevier/Mosby; 2004:184–204.
9. DeVita MA, Schaefer J, Lutz J, et al. Improving medical emergency team (MET) performance using a novel curriculum and a computerized human patient simulator. Qual Saf Health Care 2005;14:326–331.
10. Mayo PH, Hackney JE, Mueck JT, et al. Achieving house staff competence in emergency airway management: results of a teaching program using a computerized patient simulator. Crit Care Med 2004;32:2422–2427.
11. Mueller MP, Christ T, Dobrev D, et al. Teaching antiarrhythmic therapy and ECG in simulator-based interdisciplinary undergraduate medical education. Br J Anaesth 2005;95:300–304.
12. O’Brien G, Haughton A, Flanagan B. Interns’ perceptions of performance and confidence in participating in and managing simulated and real cardiac arrest situations. Med Teach 2001;23:389–395.
13. Wayne DB, Butter J, Siddall VJ, et al. Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med 2006;21:251–256.
14. Wayne DB, Butter J, Siddall VJ, et al. Simulation-based training of internal medicine residents in Advanced Cardiac Life Support protocols: a randomized trial. Teach Learn Med 2005;17:202–208.
15. Wayne DB, Didwania A, Feinglass J, et al. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest 2008;133:56–61.
16. American Heart Association. Emergency Cardiovascular Care Program Administration Manual: Guidelines for Program Administration and Training. 3rd ed. Dallas, TX: American Heart Association, 2004.
17. American Heart Association. In: Field JM, Doto F, eds. Advanced Cardiovascular Life Support Instructor Manual. Dallas, TX: American Heart Association; 2006.
18. Salas E, Burke CS. Simulation for training is effective when. Qual Saf Health Care 2002;11:119–120.
19. McCausland LL, Curran CC, Cataldi P. Use of a human patient simulator for undergraduate nurse education. Int J Nurs Educ Scholarsh 2004;1:Article 23. Available at: http://www.bepress.com/ijnes/vol1/iss1/art23/. Retrieved March 23, 2007.
20. Halamek LP. The simulated delivery-room environment as the future modality for acquiring and maintaining skills in fetal and neonatal resuscitation. Semin Fetal Neonatal Med 2008;13:448–453.
21. Maran NJ, Glavin RJ. Low- to high-fidelity simulation—a continuum of medical education? Med Educ 2003;37:22–28.

Keywords:

Simulation; Nurses; Nursing Education; Health Education; Clinical Teaching; Experiential Learning; Teaching Methods; Learning Strategies

© 2009 Lippincott Williams & Wilkins, Inc.