Hunt, D. Daniel MD; MacLaren, Carol PhD; Scott, Craig PhD; Marshall, Susan G. MD; Braddock, Clarence H. MD; Sarfaty, Suzanne MD
The dean's letter is the summary of a medical school's evaluation of a graduating student and, as such, has the potential to provide valuable information to residency directors both in the initial selection of a resident and in the ongoing management of that resident's professional growth. The information that is transferred through the dean's letter could permit a residency director to build upon the strengths of a given student and help the student to continue to grow in the areas of weakness as he or she makes the transition from student to resident. However, a dean's letter, even at its best, can reflect only the quality of a medical school's entire evaluation system. Studies have shown1 that there are problems in collecting accurate information about a student's performance in clinical settings. If faculty do not take the time to observe the student's clinical examination skills, then this feedback to the student and to the school is superficial. If the faculty fear legal reprisals and withhold written evaluations that document deficiencies, then none will be reported in the dean's letter. If the dean's letter writer hides failing grades with euphemisms or suppresses negative information,2 then residency directors cannot trust the dean's letter to accurately reflect a student's knowledge, skills, and attitude. Finally, if the dean's letter is not well organized and does not allow a reader to quickly grasp a student's strengths and weaknesses, then the communication of information across the transition from medical student to resident breaks down.
In spite of these limitations, the well-written dean's letter has been shown to be useful. Even though dean's letters are used by residency directors in many different specialties, directors value the same types of information from a student's file, regardless of the residency director's discipline.3 Well-written letters can also predict a student's performance in residency training.4–7 Smith8 reported that data collection scores on a clinical skills examination using standardized patients correlated best with performance as a first-year resident. This type of information is likely to be found only in the dean's letter. Smith also found practically no correlation between the residency ratings and scores on licensing examinations, yet, in the absence of a useful dean's letter, residency directors revert to these licensing examination scores to differentiate among applicants.
Surveys of residency directors continue to document their dissatisfaction with dean's letters as well as the relatively low weight they give to them9–11 in their selections of residents. This appears to be an opportunity lost because, as Brown, Rosinski, and Altman11 point out, while the large majority of medical school graduates perform well as residents, there are always a few in each class who fail to meet residency directors' expectations. When Brown et al. looked at the records of 20 failing residents, they found that most of the problems were personal and motivational rather than those involving skills or knowledge, and with few exceptions, the medical school's dean's letter contained very little information that would have predicted the graduate's poor performance. Yet, of all the material available to a residency director (transcripts, letters of recommendation, licensing examination scores, etc.), the dean's letter is the most likely to describe these limitations. Residency directors' complaints about the quality and usefulness of dean's letters are reminiscent of Friedman's description of these letters in his 1983 commentary, where he described them as a “wonder-land of positive adjectives.”13 In what could be another type of “fantasyland,” the dean's letter could convey honest appraisals of the relatively few students who have struggled and, instead of using these objective assessments to eliminate these students from contention, residency directors could use these candid assessments to design educational experiences for these weaker graduates.
To see how dean's letters had progressed toward a more objective document, Yager14 and Leiden15 evaluated their contents in 1984 and 1986, respectively. Their findings were not encouraging. They reported that dean's letters were inconsistent and that an individual school might produce four or five different types of letters because they used that many different letter writers. While the letter writers and their institutions put considerable resources into the task, they were not able to consistently produce documents valued by those who received them.
To address the disparities described in these two early studies, the Association of American Medical Colleges (AAMC) convened a committee made up of dean's letter writers and residency directors to produce guidelines for dean's letters. Published in 1989,16 these guidelines laid out specific recommendations about the “philosophy” of the letter, such as that a dean's letter should state explicitly that it is a letter of “evaluation” rather than one of “recommendation.” The AAMC's guidelines also gave specific content and format recommendations and requested that each letter contain information to allow the reader to assess how a given student had performed in comparison with his or her graduating classmates.
In 1993, Hunt et al.17 repeated the survey of dean's letter writers conducted by Leiden15 and also looked at a sample of dean's letters from every U.S. medical school to rate how well the AAMC's guidelines had been followed. That study showed that while 95% of the writers responding to the survey acknowledged being aware of the AAMC's guidelines for dean's letters, only 48 of the schools' letters (38%) actually included an explicit statement that the letter was an evaluation. More troubling was that 56 schools (45%) received “fail” grades when their letters were rated for formatting and the provision of comparative performance information. At the conclusion of this analysis in 1993, 75 medical schools that requested feedback were provided with individualized suggestions about how to improve their dean's letters.
Our study reports the results of a new survey of letter writers and a rating of the dean's letters from U.S. medical schools that were produced for the graduating class of 1998. The focus of this study was to determine whether letter writers and their letters were any closer to meeting the 1989 AAMC guidelines.
This project had two parts: a survey of the writers of dean's letters in 1998 and an analysis of the content of the dean's letters mailed for the graduating classes of 1998.
Letter Writers' Survey
A four-page questionnaire was sent to the dean's letter writers at the 124 U.S. medical schools in February 1998. Schools were coded to protect confidentiality, but the coding permitted a targeted second mailing and follow-up phone calls to non-responders.
The questionnaire repeated questions from the 1981 and 1992 surveys in order to follow changes that had taken place in the preparation of dean's letters. The questionnaire covered a variety of topics, including the types of faculty assigned to write the letters, the letter-writing process, the content of the letter, the input students had in the process, and the estimated cost of preparing the letter. Two short questions regarding ERAS (electronic residency application service) that had not been on the previous questionnaires were added.
Analysis of Letters
After the residency match in the spring of 1998, 15 residency directors representing six different disciplines at four medical schools in the Western, Central, and Northeast regions of the United States were asked to contribute examples of dean's letters to complete a collection from all U.S. medical schools. Our goal was to collect at least four letters from every U.S. medical school. The collection was not random in the truest sense, but by drawing letters from the files of different disciplines and from residencies across the United States, we hoped to minimize the potential bias of our selection process. For example, while we asked the dean's letter writers to send us one of their letters when returning their questionnaires, in no case did we have more than one letter in a school's file that came directly from the school itself. During the analysis of the actual letters, however, it became apparent that we may have inadvertently drawn letters from residencies with strong reputations and thus overrepresented letters written for stronger students in our sample. While this may have biased the content of the letters we saw, the effect was offset by our rating the letters only on structure. However, if a school used a different letter structure for its top students than for its bottom students, we might have missed that.
All letters were carefully edited to remove the names of students and all references to the schools that had sent them (including all letterheads and logos). All of the investigators received approximately eight hours of training and then two investigators independently rated each set of letters from a school. The lead investigator reviewed all letters and broke ties when raters disagreed on a particular item.
The list of items to be rated for each letter was developed from the 1989 AAMC dean's letter committee report.16 Following these guidelines, the first rating criterion was whether the letter was introduced explicitly as a letter of evaluation or as a letter of recommendation. The second rating criterion was the quality of the formatting and content of the letter. The AAMC's guidelines recommend that each letter cover specific information related to the premedical period of time, basic sciences, clinical rotations, personal qualities, extracurricular activities, and a summary of the information. The guidelines recommend that these six areas be formatted in bold to allow the reader to quickly find the information most relevant to them. Along with this second criterion, raters also assessed the length of the letter and noted when length interfered with its quality and the reader's ability to find relevant material. The third criterion assessed whether the letter satisfied the AAMC's recommendation to provide comparative performance information, allowing the reader to determine how the student's performance compared with those of his or her classmates. The guidelines offer four ways of comparison: (1) class rank; (2) a clustering system, such as a phrase “in the middle third,” “the bottom quarter,” or in “the top 10%”; (3) the use of key words such as “excellent,” “very good,” and “good,” where the school lists all of the code words and the percentage of students in each category; and (4) the provision of information in a table or histogram showing what percentage of students received each grade in the major courses and clerkships. A school using any one of these methods was rated as having satisfied this criterion.
In some letters it was difficult to determine whether a system of codes or key words was in place or whether the writer was simply using a phrase such as “will be an excellent house officer” without implying a relative ranking system. This confusion, in many ways, reflects the same dilemma facing a residency program that receives four or five applicants from a school with a vaguely worded comparative performance code in the summary. Where we could not tell whether a code word had been used by examining the context of the four letters or explicit statements, we concluded that it did not exist.
The raters assigned an "honors," "pass," or "fail" grade to each school. An "honors" rating meant that the school's letters consistently met the AAMC's criteria, provided the reader with appropriate information in each category, reported the clerkships in the chronologic order in which they were taken, and provided clear headings that allowed the reader to find specific information quickly. A "pass" rating indicated that the school satisfied the basic AAMC criteria, but information was more difficult to find or limited in its depth of coverage. A "fail" rating indicated that sections were poorly covered or left out, or that there were inconsistencies across the letters.
As each investigator reviewed the letters from a school, he or she made suggestions for improvements. A summary of the results and the suggested improvements were sent to each school during the summer of 1999 along with a form for the writer to respond about whether our observations were accurate and useful and whether they planned to make the recommended changes.
Letter Writers' Survey
Of the 124 questionnaires sent to U.S. medical schools, 83 (66%) were returned after a targeted second mailing and phone contact. This response rate was lower than those in the 1981 study (87%) and the 1992 study (85%). The writers of letters whose actual letters were later rated at a pass or honors level were somewhat more likely to have responded to the questionnaire. Also, there was a slightly higher non-responder rate from the Southeast and Central regions, whereas the Western region provided a much higher response rate. Private schools also had a slightly higher non-responder rate.
All respondents reported that they were familiar with the AAMC's guidelines, and 64 (77%) indicated that they followed the recommendations to some extent. Only four letter writers (5% of respondents) said that they did not use the guidelines, and 12 (15%) chose not to respond to this question. Of the respondents, 52 (62%) said that they introduced their letters as "evaluation" and another six (7%) used both "evaluation" and "recommendation" to characterize their documents. In contrast, our analysis of the actual letters revealed that only 70 schools (56% of all U.S. medical schools) explicitly referred to the "evaluation" concept. Comparing the 73% of respondents who reported using this concept with the 56% of schools that demonstrated its use suggests that writers who actually followed the AAMC's guidelines were more likely to have responded to this survey.
The process of writing the letter
As in the survey in 1992, we found that deans of student affairs were the faculty members most likely to write the letters or to manage the letter-writing process. The percentage of schools that used multiple letter writers decreased from 65% in 1992 to 41% in 1998. Responses from schools using multiple letter writers explained the advantages of having more faculty involved, saying that it was more likely that the writer would know the student and write a more personal letter. The disadvantages of having multiple letter writers became obvious during the grading of the actual letters when a school did not carefully monitor the content and format of each writer.
The respondents to the 1998 survey estimated that their letters averaged 3.4 pages in length and that students requested an average of 17.4 (SD = 8.4) letters to be sent on their behalf, which was fewer than the average number of letters students requested in 1992 (21). In 1998, some students were reported to have requested more than 100 dean's letters.
Using the respondents' estimates of letters sent and numbers of students graduating, it would appear that close to a quarter of a million letters were sent to residency directors on behalf of the graduating class of 1998. The magnitude of this effort can be appreciated when one calculates that approximately 830,000 pages of information were mailed or sent electronically for the 1998 graduates. In comparison, we estimated that 647,200 pages of information were sent on behalf of the graduating class of 1981, which reflects the shorter length of the letters at that time and fewer graduates.
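The volume estimates above can be sanity-checked with simple arithmetic. A minimal sketch follows; the per-student figures (17.4 letters requested, 3.4 pages per letter) are taken from the survey, while the class size of roughly 14,000 graduates is an assumption chosen for illustration, since the article does not state the exact number of 1998 graduates:

```python
# Back-of-envelope check of the mailing-volume estimates in the text.
# The per-student figures are the 1998 survey averages; the graduating
# class size is a hypothetical value, not a number from the article.

AVG_LETTERS_PER_STUDENT = 17.4    # 1998 survey estimate
AVG_PAGES_PER_LETTER = 3.4        # 1998 survey estimate
ASSUMED_GRADUATES_1998 = 14_000   # assumed class size (illustrative)

letters_sent = ASSUMED_GRADUATES_1998 * AVG_LETTERS_PER_STUDENT
pages_sent = letters_sent * AVG_PAGES_PER_LETTER

print(f"Estimated letters sent: {letters_sent:,.0f}")
print(f"Estimated pages sent:   {pages_sent:,.0f}")
```

Under these assumptions the sketch yields roughly 244,000 letters ("close to a quarter of a million") and roughly 828,000 pages, consistent with the approximately 830,000 pages reported for the class of 1998.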
The respondents were asked to estimate the costs of preparing their dean's letters, but only 54% attempted to answer (compared with 52% in 1981 and 51% in 1992). Many non-responders to this question wrote in comments to the effect that “it was too much to even think about.” In 1992 the estimate was an average of $25,000 per school, and in 1998 the respondents estimated $26,000 per school, with 8% of respondents estimating costs of more than $50,000. Again, however, one must interpret this finding cautiously, based on the small number of people making estimates and the difficulty in accounting for the costs of people's time when they have multiple functions in the medical school.
As in the past two surveys, the majority of schools (88%) allowed students to review the letters before they were mailed. Students were generally limited to correction of factual errors or simple changes in hobbies or lists of extracurricular activities.
Content of the letter from the letter writer's perspective
The questionnaire asked the respondent to indicate the frequencies with which he or she included 15 types of general information, ranging from personality descriptions to class rank. The response scale offered three choices of “never,” “sometimes,” and “always.” Table 1 reports these responses for 1981, 1992, and 1998. Interpersonal skills, responsibility to others, and statements regarding professional growth were less frequently “always” included.
Dean's Letter Ratings
The final sample contained at least two, and in most cases four, letters from each U.S. medical school. Ultimately, rater agreement was 91%. The raters found that 58% of the dean's letters for the 1998 graduating class used the phrase “letter of evaluation” or its equivalent. In 1992, a similar review showed that only 38% of the schools used this phrase.
The average number of pages of the 1998 letter was 3.5, which was similar to the 3.4 pages estimated by the letter writers. Four of the seven letters that were only two pages long were judged inadequate because of the absence of major sections. Two of the three schools that produced seven-page and eight-page letters were also judged inadequate due to extreme length and relatively unnecessary information. These longer letters tended to have excessive information about parents' careers and students' premedical school activities, and they used quotations from the medical school admission interviews.
The use of multiple letter writers could either add to or detract from the quality of the dean's letter. When the process was tightly controlled and all of a school's writers used a standardized format, the letter benefited from the more personalized nature of the writers' observations. However, among the 41% of the schools that used multiple letter writers, the raters found a higher frequency of schools that failed to adequately meet the AAMC's guidelines because of variations among the writers (within the same school) and their inconsistencies in formatting.
When evaluating formatting and content quality, the raters looked for consistency across a school's letters and whether there was adequate information in the seven categories of information outlined in the AAMC's guidelines. Table 2 indicates there had been improvements in format and content. In 1992, 21 schools received “fail” ratings for formatting and content issues; in 1998 only six schools received “fail” grades. The top end of the scale improved as well, with more schools moving into the honors range from the pass range.
The number of schools providing comparative performance information increased from 79 in 1992 to 85 in 1998. In 1998, 70 schools (82%) used a histogram or chart showing all grades given for the required clerkships; 22 schools (25%) used key words and listed what the other key words were and what percentage of students fell into each key word category; 16 schools (18%) used a clustering system, e.g., top third; and 14 schools (16%) reported the exact class rankings for their students. Many schools used more than one method of providing comparative performance data, so the percentages we report add up to more than 100%. The area that had the lowest inter-rater agreement was whether key words or coded phrases were used in the summary paragraph. In addition, of the 92 letters in which key words were found, 70 did not indicate what the alternative words were or provide information about how many students were in each category. Thus, based on the AAMC's guidelines, these letters were not given credit for providing comparative performance information unless they met one of the other criteria for this rating.
Table 2 also provides the 1992 and 1998 ratings for the major categories of analysis. In 1992, 55% of the schools received “pass” or “honors” ratings, and in 1998 this improved to 65% of the schools. The gains in these areas were primarily the result of improvements in formatting and coverage of content. In the overall determination of whether a school received a pass or fail rating, the letter's use of the concept of evaluation was not factored in. Although evaluation is important as a philosophical approach to the letter, we felt that the content, format, and ability to provide comparative performance information were more central to the quality of the document.
Residency program directors should be able to rely on dean's letters and feel confident that they reflect students' performances and abilities in an unbiased way. The potential impact of the dean's letter underscores the need for consistency in quality and clarity in language.
Our study sheds light on recent trends in the writing of dean's letters and examines the impact of the AAMC's guidelines on the quality and consistency of these letters. The limitations of our findings are inherent in the two stages of data collection. The response rate for the present study was lower than it was in 1992. Further, the estimated costs of preparing the dean's letters must be taken as only estimates, because this was not a cost study in any sense; rather, the figures were the writers' "best guesses." The limitations of the analysis of the actual letters stem from the difficulty of obtaining enough samples. While at least two letters were collected from residency files for each school, misclassifications could have resulted from the small sample size.
We found that, since our earlier study in 1992, the process of dean's letter writing and the quality of the letters had changed. Our key finding is that an overall increase in adherence to the guidelines for the writing of dean's letters had improved their quality. However, despite these positive changes, we also found substantial room for improvement.
When looking at the letters for the graduating class of 1998, we found evidence that more schools were complying with the AAMC's guidelines (65% of the schools produced adequate letters). We found an interesting, but somewhat more subtle, reflection of this standardization in the trends captured by the surveys of the letter writers. The letter writers reported that they were now less likely to "always" write about personal characteristics of the students, compared with the letter writers of the early 1980s. Qualities such as "interpersonal skills," "responsibility to others," "personality descriptions," and "statements regarding personal and professional growth" were less likely to be used by the dean's letter writers because providing such descriptions more rightfully falls to the teachers who have worked directly with the students. While these terms still appear in dean's letters within the clerkship descriptions, they are less likely to be part of the introduction or the summary that the dean's letter writer creates. Topics such as research or extracurricular activities have been commented upon by dean's letter writers at the same frequency or more frequently over the past two decades. These changes may reflect that the role of the dean's letter has become more clearly defined as a "translator of the record," whereas in the 1980s the dean's letter was more of a "letter of recommendation."
Despite the increased adherence to the AAMC's guidelines, more than a third of the medical schools in our study still send out dean's letters that are inadequate. By far the most common deficiency in today's dean's letters is the absence of comparative performance information from schools that cannot, or choose not to, provide it. For some of these schools, merely adding a histogram showing the percentage of students achieving each of the possible grades in a course or clerkship would be an easy solution. For others, especially schools that assign only "pass/fail" grades, the answer is not easy, and this guideline may strike at the core of the institution's philosophy on grading and the right of the faculty to create their own evaluations.
Another barrier to the provision of comparative information is the collective discomfort among schools that any mention of a deficit in an otherwise strong student will lessen, or even eliminate, the chances that some students will get the residencies that they want. The applicant cannot be blamed for fearing that an objective evaluation letter from his or her home school might place him or her at a disadvantage when competing with students from other schools that are less objective. Clearly, this dilemma will not be completely solved until more widespread agreement on uniform approaches to evaluation of students and better partnerships between medical schools and residencies across this education transition have been achieved.
While the improvement in the overall quality of the dean's letter is encouraging, perhaps a complete re-evaluation of this process is in order. It has been over a decade since the AAMC's dean's letter committee was convened, and electronic submission systems are now in place that will enable dean's letters to become more uniform in structure. Perhaps now is the time for a new group to convene and revise the decade-old guidelines to take advantage of the electronic technologies available to us. Wagoner and Suriano18 have recommended this, and we concur. If new guidelines emerge, however, a note of caution is in order: dean's letters should not be simplified to the equivalent of ratings on a complex Likert rating scale. We know that most of the problems medical school graduates face in residency are related to attitude and motivational issues.11 We know that new methods of detecting and evaluating unprofessional behaviors in medical students are being developed.19 These personality, attitudinal, and motivational issues can be illustrated only in thoughtful and candid letters that describe the strengths, deficiencies, and suggested remediations that can come about from an effective evaluation system and a clearly written dean's letter.
Future studies should focus on how residency selection committees use the information in a dean's letter that may not be a “wonderland of positive phrases.” Will the residency selection committee summarily reject an applicant because of a remediated problem? Will they deny an interview to the applicant who has encountered difficulties and will need continued help during residency to further his or her development? Or will residency directors use these more candid letters to build upon the first four years of effort and design a curriculum to strengthen the areas of weakness identified in the dean's letter?