Original Research

Electronic medical documentation

Attitudes and trends among PAs in outpatient settings

Smith, Phillip MHS, PA-C

Journal of the American Academy of Physician Assistants: February 2020 - Volume 33 - Issue 2 - p 38-41
doi: 10.1097/01.JAA.0000651740.84610.dc


Healthcare-associated documentation is an essential component of modern healthcare delivery systems. The major clinician-derived contribution to this system is patient encounter documentation. Increasingly, the documentation process involves the assistance of electronic health record systems. Electronic-assisted documentation methods can be classified by their predominant entry interface into free-text and structured data formats.1 Free-text entry allows manual or voice recognition-assisted entries and preserves their form and syntax in the final generated document. Structured data formats rely on individual questions with defined answer choices, often using radio buttons for ease of entry. In some structured data entry systems, the radio button answer choices are converted into narrative sentences when the document is generated.

The history of present illness (HPI) section is the crux of clinician-generated medical documentation. It provides a narrative account of the circumstances and subjective information prompting the patient-provider encounter, typically recounting a chronologic series of complex events. Criticism of advancements in electronic-assisted documentation centers on the erosion of the narrative fabric of the medical record in general and of the HPI section in particular.2 This criticism is especially relevant in the outpatient setting, where coherent narrative accounts of patient complaints and activities are required to maintain continuity of care.

As PAs take on an increasing role in the delivery of outpatient care, quantifying clinician satisfaction with encounter-generated documentation becomes important. Satisfaction with the documentation creation process, as well as with the end document itself, has been shown to be imperative to electronic health record adoption and continued use among clinicians.3 Satisfaction requires two key parameters: an accurate account of the provided information and an easy-to-read presentation format.4,5 Given its narrative nature, variable methods of generation, and effect on continuity of care, the HPI section provides an ideal platform for measuring these parameters of satisfaction.


This study surveyed a convenience sample of clinically active PAs who use electronic-assisted documentation to create outpatient encounter documents. The study was granted exempt status by the Mount St. Joseph University institutional review board. Possible participants were identified at regional and national profession-specific conferences. Electronic invitations to complete an anonymous online survey were sent to each potential respondent and included a link to the survey instrument. Variables in the survey instrument included categorical measures of average daily patient volume (less than 10, 11-30, 31-50, and more than 50) and clinical experience (0-1 years, 2-4 years, 5-9 years, and 10 or more years), a classification of the predominant documentation style used to construct HPI sections (free-text, voice recognition-assisted, and structured), and self-assessed accuracy and readability of generated HPI sections (0 to 100). Definitions of accuracy and readability were included in the survey instrument as previously described.4,5 Briefly, readability was assessed using the Flesch-Kincaid readability scale, which includes categorical narrative descriptions of text ranging from easy to very difficult and a corresponding numeric range for each category. For accuracy, participants were asked to report the degree of agreement between intended content and actual electronic-assisted HPI content on a 100-point scale, with 100 indicating complete agreement and 0 indicating no agreement.
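The Flesch reading-ease score underlying the Flesch-Kincaid scale is a published formula based on average sentence length and syllables per word. A minimal sketch follows; the function name and the example counts are illustrative, not taken from the survey instrument:

```python
def flesch_reading_ease(total_words: int, total_sentences: int,
                        total_syllables: int) -> float:
    """Flesch reading-ease score: higher values indicate easier text.

    Scores near 90-100 correspond to 'very easy' text and scores near
    0-30 to 'very difficult', matching the categorical descriptions
    with numeric range equivalents used in the survey instrument.
    """
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Example: a single 10-word sentence containing 13 syllables
score = flesch_reading_ease(10, 1, 13)  # 86.705, i.e., 'easy'
```

Longer sentences and more polysyllabic words both lower the score, which is why terse structured-entry output and flowing narrative text can land in different readability categories.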

Data analysis was accomplished using SPSS version 22. Descriptive statistics were reported for interpretive context. Because of the similarity in generated HPI syntax between free-text and voice recognition formats, these groups were combined into a single free-text category for analysis. Clinical experience categories were combined into two groups: under 5 years (the 0-1 years and 2-4 years groups) and 5 years or more (the 5-9 years and 10 or more years groups). Similarly, the 31-50 and more than 50 patient volume categories were combined into a single more than 30 group. Comparative analysis was accomplished using chi-square and two-tailed independent t-tests where applicable. Statistical significance was set at P < .001. Confidence intervals (CIs) were calculated with alpha set to 0.05.
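The two-sample comparison and 95% CI calculations described above can be sketched in plain Python. The sample scores below are hypothetical stand-ins, not the study data, and the CI uses a normal approximation rather than SPSS's exact t-based interval:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    """Unbiased sample variance (n - 1 denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def welch_t(a, b):
    """Two-sample t statistic without assuming equal variances (Welch)."""
    se = math.sqrt(sample_var(a) / len(a) + sample_var(b) / len(b))
    return (mean(a) - mean(b)) / se

def ci95(xs):
    """Approximate 95% CI for the mean (alpha = 0.05, z = 1.96)."""
    half = 1.96 * math.sqrt(sample_var(xs) / len(xs))
    return mean(xs) - half, mean(xs) + half

# Hypothetical accuracy scores for two documentation-style groups
free_text = [85, 88, 90, 82, 87, 91, 84]
structured = [70, 65, 72, 68, 74, 66, 71]
t = welch_t(free_text, structured)
lo, hi = ci95(free_text)
```

The t statistic would then be compared against the t distribution's critical value at the study's P < .001 significance threshold.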


Based on contact information obtained at regional and national conferences, an electronic survey was distributed to 246 PAs who provide outpatient care. Of the 246 potential respondents, 129 completed the survey for a response rate of 52.4%.

Descriptive statistics of the total population

Of the 129 respondents, 93 (72.1%) used a free-text format and 36 (27.9%) used structured data entry to generate the HPI section of outpatient encounters. Regardless of documentation style, overall self-assessed accuracy was 82.46 (95% CI, 79.47-85.44) and readability was 82.13 (95% CI, 78.42-85.84). Figure 1 summarizes accuracy and readability data with respect to documentation style preference. Statistically significant (P < .001) decreases in accuracy and readability were found in the subset using structured data entry formats.

Self-assessed accuracy and readability data for all respondents (N = 129). Error bars indicate 95% CI.

Descriptive statistics based on clinical experience

Stratifying the total respondent population by experience showed that 16.3% (21) had less than 5 years of clinical experience and 83.7% (108) had 5 or more years of experience (Figure 2). Self-assessed accuracy and readability data by experience level and documentation style are shown in Figure 3. The statistically significant (P < .001) decrease in both self-assessed variables in the structured data entry subset was maintained after stratification by experience. No statistically significant difference was noted between experience groups with respect to either self-assessed variable.

Distribution of documentation style stratified by experience level.
Self-assessed accuracy (A) and readability (B) stratified by clinical experience. Error bars indicate 95% CI.

Analysis of experienced clinicians

Of the 108 providers who indicated 5 or more years of clinical experience, 27.8% (30) reported seeing fewer than 10 patients per day, 61.1% (66) reported seeing 11 to 30 patients per day, and 11.1% (12) reported seeing more than 30 patients per day in the clinic setting. Figure 4 summarizes the preferred documentation format of experienced providers by patient volume and indicates a statistically significant (chi-square = 17.195, P < .001) shift from free-text formats to structured data entry formats as daily patient volume increases. Figure 5 shows self-assessed accuracy and readability stratified by both documentation format and patient volume. Statistically significant decreases in both accuracy and readability with structured entry formats were noted in all volume stratifications, with the exception of the readability measure in the 11 to 30 patients/day category. No statistically significant change in self-assessed accuracy or readability was found in the free-text entry group regardless of patient volume. In the structured entry group, both accuracy and readability decreased significantly from the 11 to 30 patients/day group to the more than 30 patients/day group (accuracy: 68.4-81.3 versus 29.4-59.2; readability: 62.2-84.4 versus 23.5-51.2).
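The chi-square statistic for the format-by-volume comparison is computed from observed versus expected counts in a contingency table. A minimal sketch follows; the per-cell format counts below are hypothetical (only the column totals of 30, 66, and 12 are reported in the text):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column factors
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = free-text vs. structured entry;
# columns = <10, 11-30, >30 patients/day (column totals match the
# reported volume distribution of the 108 experienced clinicians)
observed = [[25, 50, 3],
            [5, 16, 9]]
stat = chi_square(observed)
```

The resulting statistic is compared against the chi-square distribution with (rows − 1) × (columns − 1) = 2 degrees of freedom to obtain the P value.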

The distribution of documentation styles among experienced clinicians stratified by patient volume.
Self-assessed accuracy (A) and readability (B) in experienced clinicians stratified by patient volume. Error bars indicate 95% CI. P value < .001 unless otherwise specified (∗).


As clinicians see increasing numbers of patients in less time, the availability of resources to produce accurate and reliable documentation becomes problematic. Electronic health records are in some cases mandated by Medicare and Medicaid reimbursement incentive regulations as a tool to facilitate more efficient and complete medical documents. However, successful implementation and use require user support.6 To begin to understand the effect of electronic-assisted documentation, we examined self-assessed satisfaction, using previously defined measures, with outpatient encounter documents prepared by PAs. In evaluating the components of a typical outpatient encounter, the HPI section represented an essential component with considerable variability in both content and documentation style, making it the ideal section in which to measure clinician satisfaction and to delineate changes in satisfaction with respect to patient volume and documentation style.

We report here a decrease in both accuracy and readability among providers using a structured data platform, regardless of experience or patient volume. A shift to this less-accurate and less-readable style of documentation also was noted and became more pronounced as patient load increased in the outpatient setting. The motivating factors for this shift remain unclear, but the trend indicates the possibility that external forces differentially select for structured data entry formats despite self-reported decreases in two important measures of satisfaction. As clinicians rely more heavily on electronic medical documentation to maintain continuity of care, any factor that lowers the overall quality of those documents has the potential to adversely affect patient safety.


This study was intentionally small and focused on a specific aspect of medical documentation with respect to both the type of patient seen (outpatient) and the section of documentation evaluated (HPI). Because our focus was on a highly narrative portion of the medical document, it is difficult to extrapolate our findings to other, less narrative sections. The small response sample may limit the ability to generalize results or draw conclusions from these data, particularly in small subgroups. Additional research should include larger numbers of clinicians with less than 5 years of clinical experience to fully elucidate the effect of patient volume on satisfaction across experience groups. Follow-up studies are suggested to delineate the factors driving selection of documentation style among clinicians regardless of clinic setting and patient volume.


The data presented here demonstrate that among experienced clinicians, increased patient volume is associated with a higher likelihood of using documentation styles that result in lower accuracy and readability scores compared with documentation styles favored in lower patient volume settings. The implications are that higher patient load favors a documentation style that adversely affects continuity of care in the outpatient setting by generating medical records that fail to record complex patient narratives in an accurate and easily readable fashion. Additional studies are warranted, using larger sample sizes to increase generalizability.


1. Linder JA, Schnipper JL, Middleton B. Method of electronic health record documentation and quality of primary care. J Am Med Inform Assoc. 2012;19(6):1019–1024.
2. Jamieson T, Ailon J, Chien V, Mourad O. An electronic documentation system improves the quality of admission notes: a randomized trial. J Am Med Inform Assoc. 2017;24(1):123–129.
3. Boonstra A, Broekhuis M. Barriers to the acceptance of electronic medical records by physicians from systematic review to taxonomy and interventions. BMC Health Serv Res. 2010;10:231.
4. Robinson KE, Kersey JA. Novel electronic health record (EHR) education intervention in large healthcare organization improves quality, efficiency, time, and impact on burnout. Medicine. 2018;97(38):e12319.
5. Liu W, Walsh T. The impact of implementation of a clinically integrated problem-based neonatal electronic health record on documentation metrics, provider satisfaction, and hospital reimbursement: a quality improvement project. JMIR Med Inform. 2018;6(2):e40.
6. Priestman W, Sridharan S, Vigne H, et al. What to expect from electronic patient record system implementation: lessons learned from published evidence. J Innov Health Inform. 2018;25(2):92–104.

electronic health records; medical documentation; healthcare informatics; PA; patient encounter; structured data

Copyright © 2020 American Academy of Physician Assistants