Original Research

Emergency physician evaluation of PA and NP practice patterns

Phillips, Andrew W. MD, MEd, FAAEM; Klauer, Kevin M. DO, EJD, FACEP; Kessler, Chad S. MD, MHPE, FACEP

Journal of the American Academy of Physician Assistants: May 2018 - Volume 31 - Issue 5 - p 38-43
doi: 10.1097/01.JAA.0000532118.98379.f1


The emergency medicine workforce includes physician assistants (PAs) and nurse practitioners (NPs) in a variety of workforce and supervisory models.1-4 However, training standards, scope of practice, and degree of physician supervision remain ill-defined nationally. To date, no comprehensive assessment exists.

Understanding the roles of PAs and NPs in the ED is key to informing future formal, standardized role policies and training needs for PAs and NPs. Certificates of added qualifications (CAQs) are available for PAs, and fellowships for NPs, but these credentials exist without a baseline understanding of what PA and NP roles are nationally. Variation in roles and education can lead to a mismatch of under- or overqualification for a given role.

We sought to establish a baseline record of current PA and NP staffing models, practice patterns, and practice scope as the first step to inform future policy recommendations at a national level, using the American College of Emergency Physicians (ACEP) council as a nationally representative sampling frame.

METHODS

Survey tool reliability and validity evidence

Survey tools were created and tested according to standard practices described in detail below.5,6

One author (AWP) conducted exploratory interviews with six emergency medicine providers, including a PA and an NP. The data were analyzed simultaneously using a grounded theory approach by two authors formally trained in qualitative methods (AWP and CK). Grounded theory is an inductive methodology using broad questions (such as “What roles are PAs and NPs doing in the ED?”) and used here to develop themes from the answers to inform survey question design.7

From the thematic analysis (Table 1) and an extensive literature search, we (AWP and CK) produced a nine-question survey designed for use with an electronic audience response system during dedicated time at the ACEP Council meeting. A second, longer survey of 26 questions could be completed online when time permitted during and after the 2-day annual meeting. Only demographics were replicated in the two surveys. Both surveys were assessed for content validity by PAs, NPs, and physicians who were experts in the field of PA and NP practice in the ED.8

Table 1. Survey preparation interview themes

After adjustments for question clarity recommended by the stakeholder panel and a pilot test among emergency physicians, the surveys were assessed for reliability among emergency medicine physician leaders. Members of this panel included current and former department chairs, residency and fellowship program directors, and national emergency medicine society leaders who were specifically chosen to be as close as possible to the ACEP council sampling frame.

The test-retest kappa for the audience response system survey was 0.895 (95% confidence interval [CI], 0.837 to 0.953), P < .001, and the test-retest kappa for the online survey was 0.873 (95% CI, 0.843 to 0.903), P < .001, both of which are considered excellent agreement.9 Additionally, the interrater reliability for the online survey was assessed with intraclass correlation, which also was considered excellent at 0.978 (95% CI, 0.965 to 0.987).10
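
Test-retest agreement of this kind can be computed directly from paired responses. The following is a minimal illustrative sketch of Cohen's kappa (the statistic reported above) in pure Python; the survey answers shown are hypothetical, not the study's data:

```python
from collections import Counter

def cohen_kappa(ratings1, ratings2):
    """Cohen's kappa for two paired sets of categorical ratings."""
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    categories = set(ratings1) | set(ratings2)
    # Observed agreement: proportion of identical pairs
    p_observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # Expected chance agreement, from the two marginal distributions
    c1, c2 = Counter(ratings1), Counter(ratings2)
    p_expected = sum((c1[c] / n) * (c2[c] / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical test and retest answers to one survey question
test   = ["yes", "no", "yes", "yes", "no", "no", "yes", "no"]
retest = ["yes", "no", "yes", "no",  "no", "no", "yes", "no"]
print(round(cohen_kappa(test, retest), 3))  # 0.75
```

Kappa discounts the agreement expected by chance alone, which is why values near 0.9, as reported here, are interpreted as excellent rather than merely high raw agreement.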

Respondents received notification about the survey by verbal announcement during the council meeting the day before the survey and in an announcement the day the survey was launched. One additional e-mail reminder was sent 1 week after the initial launch. Respondents received no compensation. The audience response system survey required about 10 minutes during the council meeting; the entire group moved at the same pace because each subsequent question was presented to the group simultaneously. Online survey time was about 8 minutes based on pilot testing times. Councilors had to be physically present to respond to the audience response system survey because it is an in-person response system.

We used both an audience response system survey and an online survey for two reasons. First, this let us reduce the question burden by dividing it into two segments. Second, we knew that the response rate for the audience response system survey would be high because the survey was taken during the council meeting. We were then able to compare answers to the same questions in two surveys to evaluate for evidence of nonresponse bias. Reported responses are from the online survey unless specifically mentioned as from the audience response system survey.

Sampling frame

All ACEP councilors as of October 2015 were invited to participate. No other exclusion criteria applied. Respondents were asked to provide responses with respect to their current primary clinical location unless otherwise specified.

Statistical analyses

Basic descriptive data were calculated and compared with chi-square and t-tests as appropriate. Power analyses were run with G∗Power 3.11
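
For a sense of the chi-square comparisons used throughout the analyses, the sketch below tests independence in an illustrative 2x2 table (hypothetical counts, not the study's data). For one degree of freedom the chi-square survival function reduces to the complementary error function, so no statistics library is needed:

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test of independence for a 2x2 table.

    Returns (statistic, p_value). With 1 degree of freedom the
    chi-square survival function is erfc(sqrt(x / 2)).
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [[row1 * col1 / n, row1 * col2 / n],
                [row2 * col1 / n, row2 * col2 / n]]
    observed = [[a, b], [c, d]]
    stat = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))
    p = math.erfc(math.sqrt(stat / 2))  # valid for df = 1 only
    return stat, p

# Hypothetical counts: rows = two ED settings, columns = hired PAs vs. NPs
stat, p = chi_square_2x2([[30, 10], [20, 25]])
print(round(stat, 3), round(p, 4))
```

Tables larger than 2x2 (as in several of the study's comparisons) use the same statistic with more degrees of freedom, where the p-value comes from the general chi-square distribution.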

We decided a priori to perform population comparisons and a wave analysis to evaluate for nonresponse bias according to standard survey practices.12,13

RESULTS

Response rate and demographics

Of the 364 councilors, 331 were signed in to the meeting at the time of the audience response system survey and responded to the survey, yielding a response rate of 90.1% per the American Association for Public Opinion Research's response rate definition 6.14 For the online survey, 208 of 371 registered councilors responded, yielding a 56.1% response rate. Table 2 shows the respondent demographics, which were similar between groups, as analyzed for the nonresponse bias assessment. About 72% of respondents were male, and the mean age was 47.7 years. For comparison, the 2007 emergency medicine workforce study findings were 79% male and mean age 43.7 years.15

Table 2. Councilor demographics

Staffing and supervision

Of the 163 councilors' EDs that use PAs or NPs, 72.4% have both on staff, 17.2% have only PAs, and 10.4% have only NPs. Of the 118 respondents whose EDs have both NPs and PAs, 59.4% reported having NPs without prior experience as emergency medicine providers, and 63.6% reported the same for PAs.

Just over 51% of the 327 respondents to the audience response system survey reported that they generally regarded PAs and NPs as subordinate in relation to attending physicians. Nearly 12% considered the relationship equivalent, 0.92% likened it to working with a medical student, 22.9% likened it to working with a resident, and 13.2% reported a relationship not described by the previously mentioned categories.

In about 30% of represented EDs, PAs and NPs see patients who are Emergency Severity Index (ESI) level 1 (most urgent, requiring resuscitation) (Table 3). More than 90% of PAs and NPs see patients who are ESI levels 3 (urgent) to 5 (nonurgent). Supervisory models vary, with some EDs requiring real-time physician presentation for every patient and others not requiring any physician knowledge of the patient's presence for the same ESI level (Table 3).

Table 3. Staffing and supervision based on ESI level and specific patient classifications (observation and triage)

NP and PA comparisons

No statistically significant difference was found in the type of provider (PA or NP) hired between different ED settings (chi-square[4] = 6.014, P = .198, [1-beta] = 0.88 for moderate effect size omega = 0.3 and alpha = 0.05) (Table 4).

Table 4. Employed PAs and NPs by ED setting

Slightly more than 63% of councilors reported that their institutions hired less-experienced (less than 5 years' experience) PAs and 58.1% reported hiring less-experienced NPs (chi-square[1] = 66.564, P < .001). The ED setting was not significantly associated with hiring less-experienced PAs (yes for 64.3% rural, 60.8% suburban, and 65.4% urban of 117 respondents, chi-square[2] = .242, P = .886), nor for less experienced NPs (yes for 71.4% rural, 54.9% suburban, 57.7% urban of 117 respondents, chi-square[2] = 1.240, P = .538, [1-beta] = .84 with moderate effect size omega = .3 and alpha = .05 for both comparisons).

Respondents reported that less-experienced PAs and NPs used significantly more resources than more-experienced PAs and NPs, and NPs at all experience levels were reported to use more resources than PAs at all experience levels (Table 5). Resources were defined as “labs, imaging, and consultations” in the survey instrument. For the latter analysis, we included only the 117 respondents who had worked with both PAs and NPs with fewer than and at least 5 years' experience, so that fair comparisons could be made. PAs and NPs with less than 5 years' experience were reported to use more resources than their respective counterparts with 5 or more years' experience (chi-square[4] = 20.840, P < .001 and chi-square[4] = 26.657, P < .001, respectively). NPs were reported to use more resources than their PA counterparts, regardless of years of experience (chi-square[4] = 105.292, P < .001 for fewer than 5 years' experience, and chi-square[4] = 120.415, P < .001 for 5 or more years' experience).

Table 5. Resource use by practitioner type and years of experience

Additional clinical training was reported necessary for a greater percentage of NPs than PAs, according to the 95 respondents who had worked with both NPs and PAs and were familiar with fellowships and CAQs (chi-square[4] = 80.596, P < .001). Just over 41% of the 95 respondents considered an emergency medicine fellowship necessary for only new graduate NPs, and an additional 41.1% thought it was necessary for all NPs. Only 17.9% thought the fellowships were not necessary for NPs of any experience level. Conversely, 36.8% thought only new graduate PAs needed the emergency medicine CAQ; 40% thought all PAs needed one; and 23.2% reported that this credential was not necessary for PAs of any experience level.

NPs were reported to have inadequate supervision more frequently than PAs, based on 128 responses (42.2% inadequate, 57% adequate, 0.8% too much supervision for NPs; 31.3% inadequate, 68% adequate, 0.8% too much supervision for PAs; chi-square[4] = 200.787, P < .001).

Scope of practice

The majority (59.3%) of 152 respondents reported that the scope of practice for PAs and NPs at their primary clinical site was not changing; 31.6% reported an increasing scope of practice, and 3.3% reported a decreasing scope of practice. Table 6 outlines various procedures and the percentage of respondents who reported being aware of at least one instance in which a PA or NP performed a procedure solo, without any direct physician oversight.

Table 6. Percentage of EDs in which various procedures may be performed independently by at least one PA or NP on staff

Nonresponse bias analysis

We first performed a population comparison. No difference was found in representation by either sex between official council records, the audience response system survey, or the online survey (chi-square[2] = 0.877, P = .645, [1-beta] = 0.99 for moderate effect size omega = .3 and alpha = .05). Nor was a difference in mean age found between the three groups (F[2,907] = .367, P = .693, [1-beta] = 0.99 for moderate effect size f = .25 and alpha = .05). We further analyzed and found no difference in practice setting represented between those who responded to the audience response system versus online surveys (chi-square[2] = 2.55, P = .279, [1-beta] = 0.99 for moderate effect size omega = .3 and alpha = .05). Practice setting information was not readily available for all official councilors.

Next, we performed a wave analysis comparing responses cast on the first day the survey opened with responses cast after the reminder e-mail 1 week later. No difference was found between waves, neither with respect to type of provider hired (chi-square[2] = 5.252, P = .072, [1-beta] = 0.823 for moderate effect size omega = .3 and alpha = .05) nor whether PAs or NPs saw patients who were ESI level 1 (chi-square[2] = 5.073, P = .079, [1-beta] = 0.74 for moderate effect size omega = .3 and alpha = .05).

DISCUSSION

Our national survey of emergency medicine leaders confirmed anecdotal evidence of marked PA and NP staffing and scope-of-practice variation across US EDs. The few comparisons that were not statistically significant occurred in the context of well-powered analyses.

Team structure

The team structure varied greatly across EDs on multiple levels. Table 3 illustrates the differences in how emergency medicine care teams with PAs or NPs manage EDs. For example, 63.7% of EDs represented do not include ESI level 1 patients in the scope of practice for PAs or NPs; however, almost 10% of those that include level 1 patients in PA and NP scope of practice do not require a physician to be notified when a PA or NP is caring for a level 1 patient. Based on the current data, we do not know which factors are associated with such discrepancies, such as setting (for example, rural). The scope-of-practice discrepancy finding is further supported by discrepancies in training expectations (such as fellowship and CAQ requirements), because different training is needed to fulfill different scopes of practice. Both fellowships (for NPs) and CAQs (for PAs) are encouraged by their respective societies, the American Academy of Emergency Nurse Practitioners and the Society of Emergency Physician Assistants. Both represent postgraduate programs and are not intended by the societies to be considered a minimum requirement to work in an ED.16,17

Another important team structure component revealed in our study is the perceived hierarchy of providers. About half of the respondents considered PAs and NPs to be subordinate in relation to physicians—a relationship described in our survey as more hierarchical than with a medical student or resident. This is in contrast to about 10% who reported an equivalent relationship. The discrepancy may be related to PAs' and NPs' varying experience levels and also may be a result of the variation in supervisory agreements between PAs and NPs and between states. Licensure and supervisory agreements define PAs as dependent practitioners in all 50 states, but NPs are required to have collaborative agreements with physicians in only about half of states.3 Additionally, PAs are governed by state medical boards, but NPs are governed by state nursing boards and the Nursing Practice Act.3 PAs have not taken an organized approach to pursuing autonomous practice, in contrast to the recent NP movement for more autonomy.3,18

The various team structure differences have practical implications. The different scopes of practice and training requirements suggest that there will be mismatch between the two, likely in both directions, such that some PAs or NPs are overprepared and others underprepared for practice in EDs across the country. No uniform definition of competence for PAs and NPs to perform specific procedures exists in US EDs.

The increasing scope of practice for PAs and NPs in nearly 32% of EDs represented in our study may suggest that the system is working, despite the discrepancies, although other market pressures may be driving the increase.3,19,20 Namely, healthcare systems have been seeking cost-saving alternatives to staff EDs in response to projected physician shortages (including emergency physicians) and growing operational expenses.3 Market pressures may have driven the increase in PA and NP integration regardless of practice differences between physicians, PAs, and NPs and between PAs and NPs.

Comparing PAs and NPs

Our study further allowed a nationally representative evaluation of PAs and NPs in real-world practice conditions. Previous studies have evaluated PA and NP scope and preparedness separately.18,21-24 However, our representative sample of emergency medicine leaders reported that NPs use more resources than PAs regardless of years in practice and that NPs needed additional clinical training more often than PAs. These subjective assessments are supported by hiring practices as well, because more represented EDs have less-experienced PAs than less-experienced NPs. The willingness to hire less-experienced PAs over less-experienced NPs is consistent with our assessments that PAs have more favorable work characteristics. The assessments also are in the context of NPs reportedly having inadequate supervision more frequently than PAs, consistent with the assessment finding that NPs more often required additional training.

The differences in practice and preparation between PAs and NPs were a secondary endpoint without a full set of dedicated questions, but the findings warrant further exploration. At first glance, the findings may be concerning because of the push by some professional organizations and state statutes and regulations toward unsupervised, independent practice for NPs nationally. Although this phenomenon is seen primarily in primary care and not in emergency medicine, this expectation of independent practice by NPs also may be contributing, to a certain degree, to the differences noted between PAs and NPs, and may lead back to a possible mismatch of varying expectations and training. Other more plausible and measurable explanations exist. For example, our findings may be explained by PAs and NPs seeing different types of patients. They also could be explained by different levels of autonomy sought by PAs and NPs, which is the more likely explanation, especially for the finding that NPs, according to emergency physicians, more often had inadequate supervision. The NP model is described as fundamentally more autonomous than the PA model, not requiring physician collaboration in about half of states, to the extent that some medical centers are run entirely by NPs.3,25 Physician collaborations are considered “scope of practice barriers” in the recent Advanced Practice Registered Nursing Consensus Model.18 Differences between NPs and PAs are inevitable because their educational models are distinct. However, ED operational models tend to incorporate a collaborative model with PAs and NPs, requiring various degrees of supervision regardless of the type of practitioner or the variability in state regulatory supervisory and collaborative agreements. Still, more autonomous practice by PAs and NPs, without direct supervision, may be seen in certain geographic regions and in those with physician shortages.3

Limitations

We chose to survey the ACEP council as a representative group of emergency physicians from across the United States with general knowledge and expertise in practice and administrative matters. The council's demographics are similar to the population of emergency physicians. The survey provided a subjective assessment of roles and performance, similar to job evaluations or resident physician milestone assessments. Additional studies using chart reviews would add collateral data to our understanding of PAs and NPs in the ED. The strength of our approach is that the opinions are those of emergency physicians across the United States. Finally, the sampling frame for this study excluded PAs and NPs, which will need to be addressed in future work.

CONCLUSION

Stark differences exist in EDs across the United States with regard to scope of practice, expectations, team dynamics, and training requirements for PAs and NPs. Differences in resource use and autonomy in practice were noted between PAs and NPs. Our findings provide a foundation on which further evaluation can inform policy recommendations for the most appropriate training and experience standards, scope of practice, and levels of supervision for NPs and PAs in US EDs.

REFERENCES

1. American College of Emergency Physicians. Guidelines regarding the role of physician assistants and advanced practice registered nurses in the emergency department. Accessed February 1, 2018.
2. Hum M. Guidelines for Mid-Level Providers in the Emergency Department. Banner Health, Phoenix, AZ; 2011.
3. Klauer K. Innovative staffing in emergency departments: the role of midlevel providers. CJEM. 2013;15(3):134–140.
4. Steiner IP, Blitz S, Nichols DN, et al. Introducing a nurse practitioner into an urban Canadian emergency department. CJEM. 2008;10(4):355–363.
5. Artino AR Jr, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE Guide No. 87. Med Teach. 2014;36(6):463–474.
6. Dillman DA, ed. Mail and Internet Surveys. 2nd ed. New York, NY: John Wiley and Sons, Inc.; 2000.
7. Corbin JM, Strauss A. Grounded theory research: procedures, canons, and evaluative criteria. Qual Sociol. 1990;13(1):3–21.
8. Carmines EG, Zeller RA. Reliability and Validity Assessment. 1st ed. London, UK: SAGE Publications; 1979.
9. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–174.
10. Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychol Assess. 1994;6(4):284–290.
11. Faul F, Erdfelder E, Lang AG, Buchner A. G∗Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39(2):175–191.
12. Halbesleben JR, Whitman MV. Evaluating survey quality in health services research: a decision framework for assessing nonresponse bias. Health Serv Res. 2013;48(3):913–930.
13. Phillips AW, Reddy S, Durning SJ. Improving response rates and evaluating nonresponse bias in surveys: AMEE Guide No. 102. Med Teach. 2016;38(3):217–228.
14. American Association of Public Opinion Research. Standard definitions: final dispositions of case codes and outcome rates for surveys. Accessed March 13, 2018.
15. Counselman FL, Marco CA, Patrick VC, et al. A study of the workforce in emergency medicine: 2007. Am J Emerg Med. 2009;27(6):691–700.
16. American Academy of Emergency Nurse Practitioners. Fellowship programs. Accessed February 1, 2018.
17. Society of Emergency Physician Assistants. Statement on NCCPA's CAQ in emergency medicine. Accessed February 1, 2018.
18. Hoyt KS, Proehl JA. Advanced Practice Registered Nursing Consensus model: where are we now. Adv Emerg Nurs J. 2011;33(2):107–108.
19. Borwick J. New titles, old struggles: other “midlevel” providers emerging. Clin Rev. 2008;18:16–17.
20. Brown DF, Sullivan AF, Espinola JA, Camargo CA Jr. Continued rise in the use of mid-level providers in US emergency departments, 1993-2009. Int J Emerg Med. 2012;5(1):21.
21. Burlingame PA. The Effectiveness of Placing a Mid-Level Provider in Triage as an Intervention to Improve Patient Flow in the Emergency Department [ProQuest Dissertations and Theses]. 2009.
22. Cole FL, Ramirez EG. Nurse practitioners in emergency care. Top Emerg Med. 2005;27:95–100.
23. Ducharme J, Alder RJ, Pelletier C, et al. The impact on patient flow after the integration of nurse practitioners and physician assistants in 6 Ontario emergency departments. CJEM. 2009;11(5):455–461.
24. Hamden K, Jeanmonod D, Gualtieri D, Jeanmonod R. Comparison of resident and mid-level provider productivity in a high-acuity emergency department setting. Emerg Med J. 2014;31(3):216–219.
25. Hansen-Turton T, Bailey DN, Torres N, Ritter A. Nurse-managed health centers. Am J Nurs. 2010;110(9):23–26.

Keywords: PAs; physician assistants; NPs; scope of practice; practice patterns; emergency medicine

Copyright © 2018 American Academy of Physician Assistants