Musings: Blog of the JAAPA Editorial Board
Monday, August 25, 2014
Karen M. Johnson, MA; Joseph M. Shipp, PA-C, MPAS
 
The Center for Sustainment of Trauma and Readiness Skills (C-STARS) in St. Louis, Mo., is a small Air Force unit geographically separated from its command because of its specialized mission. The C-STARS Simulation Center is based at the St. Louis University School of Medicine and trains Air Force medical providers, nurses, and technicians in trauma medicine. We are budgeted to support the Air Force's medical deployment training requirements; therefore, our task performance requirements and user needs are taken from the Emergency Medical Squadron model our platform simulates. We use task trainers and high-fidelity manikins to prepare students for work with live patients. Students move from equipment and manikin familiarization on their first day of training directly into the first of five simulation scenarios the very next day.
 
In all situations, our work with the simulators aims to improve how quickly and efficiently the patient is assessed and given proper interventions, especially under duress and as a team. Both the high- and low-fidelity simulators let our cadre adjust the level of difficulty to the student's skill level, giving the "out of practice" student a safe environment in which to build confidence. This is paramount within our program because students arrive with varied levels of experience. And although our students rotate through the St. Louis University Hospital ED, that setting can never mimic what our military medics will see in a wartime environment. Consequently, our high-fidelity simulators are moulaged to mimic everything from explosive blast wounds to traumatic brain injury. Our task trainers (low-fidelity), however, more realistically demand the hands-on skills needed to address those injuries. For example, an explosive blast injury may cause a tension pneumothorax, which calls for chest tube placement. The high-fidelity simulator we use to mimic this injury comes with preformed holes for the chest tube and cannot be cut. The chest tube task trainer, however, can be cut, sutured, and reconfigured multiple times for several learners. For this reason, we teach all our course providers these fundamental skills in preparation for deployment.
 
We average one physician assistant (PA) per 2-week class. In most cases, the PA will come to our course with little or no trauma experience. PA class members come to C-STARS from both the active duty component and National Guard units throughout the United States and overseas. The PA class members from the National Guard are assigned to units tasked to respond to homeland emergencies and natural disasters. Active duty PA class members are assigned to a wide variety of specialties ranging from primary care to emergency medicine. The C-STARS platform is designed to enhance the knowledge and skills in trauma resuscitation needed during any given deployment activity, both overseas and within the United States. The simulation center provides instruction and the ability to practice trauma resuscitation according to advanced trauma life support guidelines in a safe, nonthreatening environment while also enhancing communication skills, teamwork, and the ability to manage chaos in a stressful environment. In addition to trauma resuscitation, PA class members use low-fidelity training manikins to gain experience in skills such as central line insertion, chest tube placement, focused assessment with sonography for trauma (FAST) examinations, and endotracheal intubation. At the end of the 2-week training program, a mass casualty capstone simulation allows the PA to perform in the role of triage officer, managing multiple casualties with limited resources.
 
At the Center for Sustainment of Trauma and Readiness Skills (C-STARS) at St. Louis (Mo.) University Hospital, Karen M. Johnson is a simulation operator and Joseph M. Shipp is simulation director. The views expressed in this blog post are those of the authors and may not reflect AAPA policies.

Monday, August 18, 2014
Jennifer M. Coombs, PhD, PA-C
 
Technology in healthcare holds great promise for increased productivity and decreased errors. I am fortunate to see a variety of clinics as I visit PA students in their clinical rotations. Although some clinics I visit have not adopted electronic medical records (EMRs), that is now increasingly rare. Medical providers have a love-hate relationship with their medical records. For the most part, EMRs fall into three categories:

• The good—systems that are fast, integrate well with other systems, and don't burden increasingly busy and complex offices with too many bells and whistles.
• The bad—systems that require opening many screens, frequent switching back and forth between them, and repeated password log-ins, and that are exceedingly slow.
• The ugly—medical records that crash and are not reliably saved, records that can easily be marked incorrectly, and systems that encourage copying and pasting from templates, making it impossible to determine which information is relevant and which is filler.
 
In the past, medical providers complained about how clunky their EMRs were and how much time was lost typing into the record. I have seen some of our students handle the medical records much faster than in the past, using their tech-savvy skills to create shortcuts for things they do repeatedly. These shortcuts can be paths to medical errors, but for the most part, when implemented carefully and reread for accuracy, they can be timesavers.
 
Here is an example of an efficient office practice, involving a PA student and physician preceptor, that I observed the other day. The physician was fortunate to have a well-functioning assistant who took patients to the examination rooms and obtained initial histories efficiently. He used three rooms. After a patient was put into the first room, he sent the student in to do the history and physical.

The student signed into the record and began writing in the chart. The physician would see another patient while the student did the history and physical examination. The student talked to the patient, explained that he needed to check with his preceptor, and said he would be back soon. Then the student and the preceptor would head into the room and the student would present the case in front of the patient. I know this is not the usual way of doing things, but the physician would check in with the patient and repeat the physical examination while the student was presenting. The physician would also write things down on a whiteboard in the room. I have seen scribes used in this capacity, writing down everything said by the student and the physician in the room; in this example, there was no scribe.

The preceptor then sat down at the computer with the EMR and wrote the prescriptions while the student talked to the patient. Then the preceptor switched seats with the student, who finished up the chart note on the EMR and printed the prescriptions. I saw them switch seats at the computer three times. When they left the room, the patient, the student, and the preceptor had all talked and rechecked the plan, the physician had repeated the physical examination, and the student and the preceptor had read and reread the prescriptions. The preceptor picked up an error in one of the prescriptions when he reread the printed script.

Medicine is an increasingly complex system and EMRs have drastically changed the way healthcare is delivered. Initially, the technology can be bulky and counterintuitive. Problem-solving specific issues with individual medical records can help providers increase their productivity at work and even incorporate students efficiently into their busy practices.
 
Jennifer M. Coombs is an assistant professor in the Division of Physician Assistant Studies, Department of Family and Preventive Medicine at the University of Utah School of Medicine in Salt Lake City. The views expressed in this blog post are those of the author and may not reflect AAPA policies.

Monday, August 11, 2014
Lawrence Herman, MPA, PA-C, DFAAPA

A recent study presented at the annual scientific sessions of the American Diabetes Association in San Francisco compared outcomes in patients with diabetes who were treated within the Veterans Affairs (VA) healthcare system. Presenter Dr. Lawrence S. Phillips said the study of 19,238 patients treated over 4 years compared changes in hemoglobin A1C levels and found that management by PAs and NPs is "as good as that provided by physicians."
 
The VA is the single largest employer of PAs and NPs in the country, and diabetes affects more than one in five VA patients, so this is not trivial. Dr. Phillips and colleagues analyzed VA data to identify patients who saw a physician, NP, or PA for more than 50% of their healthcare visits after a new diagnosis of diabetes between 2008 and 2012.
 
Researchers analyzed average A1C levels at the time of diabetes diagnosis, at the initiation of medications, and during follow-up years. They found that after adjusting for patient characteristics, average A1C levels did not differ significantly between provider groups. Dr. Phillips, a professor of medicine at Emory University in Atlanta and a well-respected endocrinologist, also found that patient populations treated by all three groups were statistically comparable.
 
According to the study, patient demographics were similar between provider groups, with a mean age of 69 years, 95% male, and mean body mass index of 32.4 kg/m2. The cohort seen by physicians had a statistically smaller percentage of white patients (77%) than did the cohort seen by physician assistants (81%).
 
Results revealed that baseline A1C levels were 7.2%, 7.14%, and 7.18% for patients seen by physicians, PAs, and NPs, respectively. By the end of the study in 2012, those levels had dropped to 6.78%, 6.75%, and 6.75%, respectively; the differences between groups were not statistically significant.
 
Similarly, patient comorbidity levels, as measured by the Charlson Comorbidity Index, were virtually identical in each provider group. The proportion of patients on any diabetes therapy was also similar between groups: 81% with physicians, 79% with PAs, and 80% with NPs.
 
This is not a perfect study, and there are potentially confounding variables. For example, a physician (perhaps an endocrinologist) may have seen the same patient for only a minority of visits yet actually directed the patient's care plan. Clearly, this could have affected outcomes. And statistically significant differences were found in some data. Specifically, the study found that a significantly smaller proportion of patients seeing PAs were on insulin (12%), compared with those seeing physicians (15%) or NPs (14%). PAs also referred a significantly smaller proportion of patients to diabetes specialists (5%), compared with physicians (8%) or NPs (7%).
 
This was a very large cohort of patients seen over a fairly lengthy period of time, primarily by one of three different groups of providers. So this study provides yet another piece of data indicating that, at least in this setting and with the diagnosis of diabetes, outcomes for patients who see PAs and NPs are comparable to those who see physicians. Furthermore, and perhaps just as significant, it may be another piece of evidence that PAs and NPs are educated to treat not simply colds and runny noses but some of the most complicated patient populations with multiple comorbidities, and with comparable outcomes.
 
Lawrence Herman is an associate professor and chair of the Department of Physician Assistant Studies at the New York Institute of Technology in Old Westbury, N.Y., and chair of the board of directors and immediate past president of AAPA. The views expressed in this blog post are those of the author and may not reflect AAPA policies.
 

Monday, July 21, 2014

Kristine A. Himmerick, MPAS, PA-C

Graduation season is upon us. For PA students, this means robes, funny hats, proud families, and a big sigh of relief after surviving 2 to 3 years of intensive medical learning. Looming large just around the bend from the graduation stage is another monumental task: the Physician Assistant National Certifying Examination (PANCE). When the celebrations are over, the content blueprint will be waiting. Preparing for a high-stakes certification examination can be stressful, and as a PA educator, I am often asked by students how to prepare.

The reality is that more than 90% of new graduates pass the PANCE on the first try. PA researchers have explored various measures as predictors of PANCE success. Performance on the Physician Assistant Clinical Knowledge Rating and Assessment Tool (PACKRAT) is consistently the best, although far from perfect, predictor of performance on the PANCE across several studies.1-3 Other factors that have been less consistently correlated with PANCE scores are program-specific summative examinations, results of previous multiple-choice examinations in the didactic year, grade point average before and during the PA program, Graduate Record Examination scores, years of healthcare experience, grades on prerequisite courses, and demographics.1-4 None of these predictors can guarantee individual success, so a structured study strategy is important to guide preparation for the examination.

One problem students have in preparing for the PANCE is trying to use too many resources. I recommend that you depend on one high-quality resource from each of three categories: a review book, a primary medical reference, and a question book or website. Depending on three main references will help keep studying focused and avoid desktop clutter.

I recommend selecting one board review book. Many PANCE preparation review books are available. Select a review book that has been recently published, covers the NCCPA content blueprint topics, and follows an outline format that you find intuitive.

Next, select one primary medical reference as a go-to book for everyday studying. This might be the medicine text that you studied from in your didactic year, such as Harrison’s Principles of Internal Medicine, Cecil Textbook of Medicine, Current Medical Diagnosis and Treatment, UpToDate online, or many others. Choose a medical reference book with a format that fits your learning style.

Finally, select a single practice question resource. Many vendors are happy to take your money to provide you with practice PANCE questions. You do not need them all! Choose one book or online source that is easy to use and fits your budget.

Once you have selected these three main study references, use these four tips to develop a good study plan:

• Write out a schedule to cover all NCCPA blueprint topics between now and the date of the examination. Plan to spend more time on the largest topics on the blueprint (cardiology, pulmonology, gastrointestinal, and musculoskeletal) and topic areas that are difficult for you.

• Spend time every day studying from each of the three categories. First, use your review text to read about one blueprint topic (or a portion of a topic for larger categories). Then move to in-depth reading from the medicine text. End each study session with practice questions.

• Take practice questions every day. Preparing for the examination requires learning the question format and style. Complete practice questions in learning mode (reading the answer and rationale after each question) and in test mode (complete 30+ questions at a 1-minute-per-question pace). Writing your own questions is a great way to learn to be a better test taker.

• Keep your study time active. Know your learning style and capitalize on your strengths. For example, don’t spend hours writing flashcards if you are an auditory learner. Find ways to study by comparing and contrasting diseases and treatments. Think about patients you saw during clerkships that embody the disease process you are studying.

Those very patients are awaiting your arrival on the scene as a certified PA. So take a couple days to celebrate the accomplishment of graduating with a highly coveted PA degree, clean up the confetti, and then hit the books (again)!

REFERENCES

1. Higgins R, Moser S, Dereczyk A, et al. Admission variables as predictors of PANCE scores in physician assistant programs: a comparison study across universities. J Physician Assist Educ. 2010;21(1):10-17.

2. Massey SL, Lee L, Young S, Holmerud D. The relationship between formative and summative examination and PANCE results: a multi-program study. J Physician Assist Educ. 2013;24(1):24-34.

3. Ennulat CW, Garrubba C, DeLong D. Evaluation of multiple variables predicting the likelihood of passage and failure of PANCE. J Physician Assist Educ. 2011;22(1):7-18.

4. Brown G, Imel B, Nelson A, et al. Correlations between PANCE performance, physician assistant program grade point average, and selection criteria. J Physician Assist Educ. 2013;24(1):42-44.

Kristine A. Himmerick is an assistant clinical professor in the PA program at Northern Arizona University’s Phoenix Biomedical Campus. The views expressed in this blog post are those of the author and may not reflect AAPA policies.
 


Monday, July 14, 2014
Amy M. Klingler, MS, PA-C
 
Like it or not, we live in a world where appearances matter. Opinions and assessments are made in the blink of an eye. Working in rural family medicine and urgent care, I have a moment (which I refer to as “sick/not sick”) when I walk into the examination room and make snap judgments about the patients in front of me. Are they sick? Meaning, do they need an immediate intervention? Or are they not sick? Do we have time to examine, assess, and discuss the issue at hand?
 
Because the actual time we have to spend with our patients can be counted on our fingers, we must ingest information about them through a variety of sensory clues. Understandably, our patients are making similar assessments about us. When we get too focused on our job of diagnosing and treating, we may forget that in every encounter, our patients are judging us and deciding whether or not to trust us with very personal and private information about themselves. I believe that our appearance and that of our surroundings can profoundly affect our interactions with patients.
 
Opinions conflict about the iconic white coats worn by many healthcare professionals and revered in ceremonies at the initiation of medical education. On one hand are recommendations that white coats, wristwatches, jewelry, and neckties should not be worn in medical settings because they can be vectors of disease transmission. White coats have also been vilified as the cause of “white coat hypertension,” that transient rise in patient BP at the mere sight of the white cotton/polyester blend thigh-length jacket. On the other hand, studies demonstrate that patients have greater trust and confidence in a doctor (or, presumably a PA or NP) who wears professional attire and a white coat, regardless of the infection risk.
 
Personally, I follow the standard at each of the clinics where I work. These offices are located in very different communities and have their own personalities and spectrums of acceptable attire. In my primary practice location at a rural health clinic (in a town where a button-down shirt identifies you as an outsider), I wear tailored, embroidered scrubs; in another, more formal practice location, it's professional attire; and at the local health department, I wear business casual attire and a white coat. As a PA, I want to project competence, trustworthiness, and humility no matter what I am wearing, but I have to admit, I stand a bit straighter while wearing a white coat, especially one that is laundered at least once a week in a bleach solution.
 
If different clinics have different dispositions, what does your building say about you? Have you ever walked in the front door of your office, just as a patient would? Have you ever sat in the waiting room chairs and looked around? What about lying on the examination table and seeing the room (and the ceiling) from a patient’s perspective?
 
I recently supervised a project to remodel the interior of the Salmon River Clinic, my primary office. This was the first major renovation in the history of the 42-year-old clinic, and took several years to plan and several months to execute. I am grateful that the board of directors who operate the clinic saw the value in updating the facility to one that was more modern, clean, and comfortable (see before-and-after photos at right). My argument in support of the renovation was that patients (consciously or unconsciously) judge the type of care they will receive by the surrounding environment. Because many of the patients I treat are tourists who are meeting me for the first time, I feared that if the office itself was outdated and cluttered, patients would expect substandard care, even before they talked to a member of the staff. Now that the work has been completed, I feel a sense of joy when I walk into work each day and a sense of calm that I don’t have to apologize to patients for the state of the facility. So far, the reactions from locals and visitors have been overwhelmingly positive. I believe that patients and their families feel a little less anxious as they walk into an unexpectedly modern office in a rural Idaho town. The city streets may not be paved (really) and you may be 3 hours from Target (true), but there is a real clinic in town.
 
If appearances make an impression, how do you want to be remembered?
 
Amy M. Klingler practices at the Salmon River Clinic in Stanley, Idaho. The views expressed in this blog post are those of the author and may not reflect AAPA policies.
 