Training and Competency Assessment in Electronic Fetal Monitoring: A National Survey

Murphy, Allison A. MD; Halamek, Louis P. MD; Lyell, Deirdre J. MD; Druzin, Maurice L. MD

ORIGINAL RESEARCH

OBJECTIVE To investigate current patterns of training and competency assessment in electronic fetal monitoring (EFM) for obstetrics and gynecology residents and maternal–fetal medicine fellows.

METHODS A questionnaire was mailed to the directors of all 254 accredited US residencies in obstetrics and gynecology and 61 accredited US fellowships in maternal–fetal medicine. Questions focused on the methods used for teaching and assessing competency in EFM.

RESULTS Two hundred thirty-nine programs (76%) responded to the survey. Clinical experience is used by 219 programs (92%) to teach EFM, both initially and on an ongoing basis. Significantly more residencies than fellowships use written materials and lectures to teach EFM. More than half of all programs require trainees to participate in some type of EFM training at least every 6 months; 23 programs (10%) have no requirement at all. Subjective evaluation is used by 174 programs (73%) to assess competency in EFM. Written or oral examinations, skills checklists, and logbooks are used exclusively by residencies as means of competency assessment. Two thirds of all programs assess EFM skills at least every 6 months; 40 programs (17%), the majority of which are fellowships, have no formal requirement.

CONCLUSION Most US training programs use supervised clinical experience as both their primary source of teaching EFM and their principal competency assessment tool. Residencies are more likely to have formal instruction and assessment than are fellowships. Few programs are using novel strategies (eg, computers or simulators) in their curricula.

United States training programs use supervised clinical experience as their primary teaching and assessment tool for electronic fetal monitoring interpretation.

Departments of Pediatrics and Gynecology and Obstetrics, Stanford University School of Medicine, Palo Alto, California.

Address reprint requests to: Allison A. Murphy, MD, Department of Pediatrics, Division of Neonatal and Developmental Medicine, Stanford University School of Medicine, 750 Welch Road, Suite 315, Palo Alto, CA 94304; E-mail: mdmural@stanford.edu.

Received October 18, 2002. Received in revised form December 24, 2002. Accepted January 2, 2003.

Since the early 1800s, efforts to assess fetal well-being during labor have been undertaken to optimize neonatal outcomes. For decades, intermittent auscultation of the fetal heart rate (FHR) was the primary method of intrapartum fetal surveillance. Electronic fetal monitoring (EFM) was introduced in the 1970s, and despite inconclusive findings regarding its efficacy in randomized clinical trials, 1–6 its use has continued to rise. In 1998, EFM was used in the care of 84% of women who gave birth in the United States, a 23% increase over 1989 figures. 7

Failure to appreciate the significance of FHR tracings can have troublesome consequences: an increased rate of operative deliveries and poor neonatal outcomes in selected populations, 8 along with resultant litigation. The American College of Obstetricians and Gynecologists (ACOG) 1999 survey of professional liability claims against physicians reported that EFM was a factor in 43% of all lawsuits filed alleging obstetric malpractice, 52% of cases involving a stillborn fetus or neonatal death, and 66% of cases involving a neurologically impaired infant. 9

Competence in EFM has become a standard of obstetric practice. The Accreditation Council for Graduate Medical Education, ACOG, and the American Board of Obstetrics and Gynecology all expect obstetrics and gynecology residents and maternal–fetal medicine fellows to become proficient in EFM. 10–12 How mastery of EFM skills is achieved is not specified. The responsibility falls on each institution to provide basic and continuing education in EFM, validate competency, and monitor practice.

We sought to obtain a comprehensive view of how obstetrics and gynecology residents and maternal–fetal medicine fellows in the United States are trained and assessed in EFM interpretation.

MATERIALS AND METHODS

A five-question survey (Figure 1) was designed to elicit information regarding instructional methods used for initial acquisition and maintenance of EFM skills, tools used for competency assessment, and the frequency of training and assessment. The survey was examined for content validity by a panel of medical and nursing experts in EFM. Respondents could select from several choices (making more than one selection where applicable) and add text comments in the space provided.

Figure 1. Survey instrument.

Surveys were mailed between November 2001 and February 2002 to the program director at each of the 254 obstetrics and gynecology residencies listed on the Accreditation Council for Graduate Medical Education Web page 13 and 61 fellowship programs listed on the Society for Maternal–Fetal Medicine Web page. 14 Each mailing included a personalized cover letter explaining the purpose of the survey and a self-addressed, stamped return envelope. Surveys were coded to identify nonresponders; programs that did not respond initially received a second mailing. All replies received before March 31, 2002 were included in the analysis.

The responses for each question were tallied, and the percentage of programs giving each response was calculated for all programs, for residencies alone, and for fellowships alone. Comparisons between 1) responding and nonresponding programs and 2) responding residencies and fellowships were conducted using χ2 tests for two independent proportions. Statistical analysis was performed using SAS for Windows 8 (SAS Institute Inc., Cary, NC).
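
The analysis itself was performed in SAS; purely as an illustration, the minimal Python sketch below applies the same χ2 test for two independent proportions to the residency and fellowship response counts reported in the Results. The language, the scipy library, and the layout of the counts are our assumptions for the sketch, not the authors' code.

from scipy.stats import chi2_contingency

# 2 x 2 contingency table: rows are program types; columns are the
# counts of responders and nonresponders reported in the Results.
observed = [
    [196, 254 - 196],  # obstetrics and gynecology residencies
    [43, 61 - 43],     # maternal-fetal medicine fellowships
]

# correction=False gives the uncorrected Pearson chi-squared test;
# the article does not state whether a continuity correction was used.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi-squared = {chi2:.2f}, df = {dof}, P = {p:.3f}")

With these counts, the difference in response rates between residencies and fellowships is not statistically significant at the .05 level.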

RESULTS

The overall response rate was 76% (239 of 315 programs). Seventy-seven percent (196 of 254) of residencies responded, compared with 71% (43 of 61) of fellowships. The responding programs did not differ significantly in location, size, or affiliation from the nonresponding programs (Tables 1 and 2).

Table 1

Table 2

Electronic fetal monitoring is most commonly taught through clinical experience (Table 3), a method used by 219 programs (92%). Five programs (2%) use this as their only approach. Structured lecture, seminar, or in-service training is provided by 208 programs (87%). Case studies with FHR tracing review during a regularly scheduled conference or as part of a morbidity and mortality review are used by 204 programs (85%). More than half of programs use written materials. Less commonly used methods are computer-assisted tutorials, real-time interactive simulation, and instructional videotapes. One fellowship reported that it has no formal program for teaching EFM. Significantly more residencies than fellowships use lectures and written materials to teach EFM (P = .03).

Table 3

To maintain EFM skills, clinical experience was again reported to be the most commonly used method, with 219 programs (92%) using this approach (Table 4). Case studies with FHR tracing review are also frequently used at forums such as morning rounds, department grand rounds, conferences, and morbidity and mortality reviews. Structured lectures, seminars, and in-services are used less frequently to maintain EFM skills; a similar trend is seen for written materials. Real-time interactive simulation, computer-assisted tutorials, and instructional videotapes are rarely used. Significantly more residencies use written materials (P = .01) to maintain EFM skills than do fellowships.

Table 4

One hundred two programs (43%) reported that their residents and fellows participate in continuing education activities pertaining to EFM at least monthly (Table 5). An additional 55 programs (23%), the majority of which are residency programs (P = .02), provide training every 2–6 months. Twenty-three programs (10%) reported that they do not require continuing EFM education or did not respond to the question.

Table 5

Competency in EFM is most commonly assessed by faculty or peer review (Table 6). Various types of quality assurance programs (eg, morbidity and mortality review) are also used frequently. Written or oral examinations (P = .001), skills checklists (P = .01), and logbooks are used exclusively by residencies. One third of fellowships (14 of 43) and 12% (24 of 196) of residencies (P = .001) reported that they do not formally assess competency in EFM.

Table 6

Competency assessment is described as an ongoing process by many program directors. Seventy-six programs (32%) have monthly assessments, whereas an additional 82 programs (34%) assess their trainees every 2–6 months (Table 7).

Table 7

DISCUSSION

There have been repeated calls for improved educational programs and competency validation in EFM, 15,16 but according to our survey, most US training programs still use supervised clinical experience as both their primary (and, in some cases, only) source of teaching EFM and their principal competency assessment tool.

The ideal format for teaching EFM is unknown. The traditional model of medical education places a trainee in a supervised environment with the expectation that guided experience will lead to acquisition of skills adequate for independent, competent practice in the community. This “learn by doing” approach assumes that FHR tracings encountered during training will be varied enough to illustrate all of the important principles and patterns of EFM and that experienced individuals will be on hand to review the tracings and teach from them. Often neither is the case. 17 Didactic teaching sessions are particularly helpful for novices learning EFM but are extremely time intensive and, hence (as reflected in our survey), not practical for skill maintenance. Physicians are motivated to learn in response to specific problems posed by particular patients. 18 Perhaps it is on this basis that the program directors we surveyed reported that they frequently use case studies with strip review as a tool for teaching EFM. Computer programs, when compared to an equivalent lecture, have been shown to produce similar knowledge gains in EFM skills in nearly half the time 19 but, according to our survey, are not widely used for training physicians in the United States.

The optimal frequency for providing continuing education in EFM is, likewise, unknown. Beckley et al 20 found that knowledge pertaining to EFM interpretation learned from a computer program was retained up to 7 months. Trepanier et al 21 determined that participants in a 6-hour-long EFM workshop lost little of their theoretical knowledge over a 6-month period, but that their practical clinical skills dropped by almost 10% in the same time frame. Two thirds of the programs we surveyed reported that their trainees participate in EFM training activities at least every 6 months. Maternal–fetal medicine fellows are more focused in their training and, hence, participate in these activities more frequently than obstetrics and gynecology residents. One residency program director reported that their residents “may rotate on Oncology and go for 5 months without reading a [FHR] strip.” Other factors cited by respondents as affecting the frequency of EFM instruction include the volume of births and the difficulty of cases.

Clinical competence is defined as the capability to perform acceptably those duties directly related to patient care. 22 Since 1994, the Joint Commission on Accreditation of Healthcare Organizations has mandated yearly validation of core competencies for nurses, 23 and formalized assessment in EFM has already been incorporated into nursing practice. 24 Because physicians are not usually employed by hospitals, comparable processes and standards are less common for them. 15 The typical expectation for physician competence has been graduation from an accredited medical school, completion of a residency program, successful completion of board examinations, and maintenance of a state license to practice medicine. But standards are changing. The Outcome Project of the Accreditation Council for Graduate Medical Education is a long-term initiative designed to increase emphasis on competency-based assessment of residents, thereby improving patient care. 25 Beginning in July 2002, subspecialty programs (including those in obstetrics and gynecology) were required to adopt new competency program requirements. This timing may help explain our finding that residency programs are more likely to formally assess their trainees than are fellowship programs.

We recognize the limitations of this survey. The findings are drawn exclusively from the subjective experience and perspectives of residency and fellowship program directors; input from other participants in the educational process (such as residents, fellows, and faculty) was not solicited. Despite the overall candor of responses and comments, we acknowledge that individual respondents may have chosen to emphasize the strengths rather than the weaknesses of their respective programs. However, because this group of physicians is frequently surveyed in efforts to gauge training activities, we believe their answers accurately represent the status of their programs. Another limitation is that only 76% of program directors responded. Our sample, however, was balanced geographically and in program size and affiliation. It is unlikely, though unprovable, that the nonresponders have more detailed formal programs for EFM education and assessment.

The largest obstacle in devising a survey to accurately characterize training and assessment was establishing specific definitions for each training and assessment method. We selected the terms in an attempt to minimize variation in the interpretation of survey questions, but despite our best efforts, this variation will undoubtedly exist and influence program directors' responses. “Clinical experience” was meant to encompass skills gained while caring for real patients in labor; this training is customarily supervised by an attending physician, fellow, or senior-level resident, though the degree of supervision can vary. “Written materials” was intended to include textbooks, manuscripts, pamphlets, and similar textual instructional resources. “Written examination” was meant to refer to paper-based testing materials. “Formal” EFM training and assessment was intended to refer to a program's prescribed educational agenda and trainee evaluation.

A 1989 survey of Canadian hospitals found that EFM training programs for physicians varied widely and called for standardized educational programs and practice protocols for teaching hospitals in that country. 26 This survey documents a lack of formalized education in US physician training programs for this common obstetric skill and will, hopefully, serve as a starting point for critically evaluating and improving EFM teaching and assessment methods.

REFERENCES

1. Leveno KJ, Cunningham FG, Nelson S, Roark M, Williams ML, Guzick D, et al. A prospective comparison of selective and universal electronic fetal monitoring in 34,995 pregnancies. N Engl J Med 1986;315:615–9.
2. MacDonald D, Grant A, Sheridan-Pereira M, Boylan P, Chalmers I. The Dublin randomized controlled trial of intrapartum fetal heart rate monitoring. Am J Obstet Gynecol 1985;152:524–39.
3. Haverkamp AD, Orleans M, Langendoerfer S, McFee J, Murphy J, Thompson HE. A controlled trial of the differential effects of intrapartum fetal monitoring. Am J Obstet Gynecol 1979;134:399–412.
4. Neldam S, Osler M, Hansen PK, Nim J, Smith SF, Hertel J. Intrapartum fetal heart rate monitoring in a combined low and high-risk population: A controlled clinical trial. Eur J Obstet Gynecol Reprod Biol 1986;23:1–11.
5. Kelso IM, Parsons RJ, Lawrence GF, Arora SS, Edmonds DK, Cooke ID. An assessment of continuous fetal heart rate monitoring in labor. A randomized trial. Am J Obstet Gynecol 1978;131:526–32.
6. Luthy DA, Shy KK, van Belle G, Larson EB, Hughes JP, Benedetti TJ, et al. A randomized trial of electronic fetal monitoring in preterm labor. Obstet Gynecol 1987;69:687–95.
7. Curtin SC, Mathews TJ. U.S. obstetric procedures, 1998. Birth 2000;27:136–8.
8. Thacker SB, Stroup D, Chang M. Continuous electronic heart rate monitoring for fetal assessment during labor. Cochrane Database Syst Rev 2001;2:CD000063.
9. American College of Obstetricians and Gynecologists. 1999 survey of professional liability: Claims data tabulations. Washington: American College of Obstetricians and Gynecologists, 1999.
10. Accreditation Council for Graduate Medical Education. Program requirements for residency education in obstetrics and gynecology. Available at: http://www.acgme.org/req/220pr701.asp. Accessed 2002 Mar 4.
11. Council on Resident Education in Obstetrics and Gynecology. Educational objectives: Core curriculum in obstetrics and gynecology. Washington: Council on Resident Education in Obstetrics and Gynecology, 2002.
12. American Board of Obstetrics and Gynecology. Guide to learning in maternal-fetal medicine. Dallas: American Board of Obstetrics and Gynecology, 1996.
13. Accreditation Council for Graduate Medical Education. Programs by specialty. Available at: http://www.acgme.org/adspublic. Accessed 2001 Sep 18.
14. Society for Maternal–Fetal Medicine. Fellowship directory. Available at: http://www.smfm.org/index.cfm?Zone=careers–nav=fellow. Accessed 2001 Sep 18.
15. Vogler JH. National standardization of fetal monitoring terminology and competency validation: A call for interdisciplinary collaboration. J Perinatol 1997;17:228–32.
16. Cibils LA. On intrapartum fetal monitoring. Am J Obstet Gynecol 1996;174:1382–9.
17. Catanzarite VA. FMTUTOR: A computer-aided instructional system for teaching fetal monitor interpretation. Am J Obstet Gynecol 1987;156:1045–8.
18. Slotnick HB. How doctors learn: Physicians’ self-directed learning episodes. Acad Med 1999;74:1106–17.
19. Murray ML, Higgins P. Computer versus lecture: Strategies for teaching fetal monitoring. J Perinatol 1996;16:15–9.
20. Beckley S, Stenhouse E, Greene K. The development and evaluation of a computer-assisted teaching programme for intrapartum fetal monitoring. BJOG 2000;107:1138–44.
21. Trepanier MJ, Niday P, Davies B, Sprague A, Nimrod C, Dulberg C, et al. Evaluation of a fetal monitoring education program. J Obstet Gynecol Neonatal Nurs 1996;25:137–44.
22. National Library of Medicine. Clinical competence (MeSH). Available at: http://www.ncbi.nlm.nih.gov/entrez/meshbrowser.cgi?retrievestring=&mbdetail=n&term=clinical=competence. Accessed 2002 Sep 1.
23. Afriat CI, Simpson KR, Chez BF, Miller LA. Electronic fetal monitoring competency—to validate or not to validate: The opinions of experts. J Perinat Neonatal Nurs 1994;8:1–16.
24. Schmidt JV. The development of AWHONN's fetal heart monitoring principles and practices workshop. Association of Women's Health, Obstetric and Neonatal Nurses. J Obstet Gynecol Neonatal Nurs 2000;29:509–15.
25. Accreditation Council for Graduate Medical Education. Outcome Project. Available at: http://acgem.org/outcome/project/proHome.asp. Accessed 2002 Aug 28.
26. Davies BL, Niday PA, Nimrod CA, Drake ER, Sprague AE, Trepanier MJ. Electronic fetal monitoring: A Canadian survey. CMAJ 1993;148:1737–42.
27. US Census Bureau. Census 2000 urban and rural classification. Available at: http://www.census.gov/geo/www/ua/ua_2k.html. Accessed 2002 Dec 6.
    © 2003 The American College of Obstetricians and Gynecologists