Identification of general practitioners' training needs and the use of strategies to meet these needs are essential steps in continuing medical education.1,2 However, there is no generally accepted best method for identifying practitioners' training needs. The simplest method involves collecting the expressed training needs using a questionnaire or individual interview.3,4 But such self-reporting can fail to identify some of the real needs. One way to overcome this difficulty may be to anchor the expression of needs in general practitioners' actual professional situations, which might enable them to reveal more of their real needs.5
With this hypothesis in mind, in 1999 we arranged for a large group of physicians to keep personal-office-visit diaries, a method that has received little attention in the medical literature.6 Our goal was to obtain the most accurate description of needs by validating the needs that physicians express in general practice situations. One measure of this accuracy is the level of specificity—i.e., the degree to which an expressed need is concrete rather than general or vague. Thus, our aim was to learn whether the use of a personal-office-visit diary could significantly increase the level of specificity with which general practitioners express their training needs.
Using an exhaustive professional telephone database of 3,654 general practitioners in the Rhône-Alpes region of France, we randomly selected a representative sample of 1,218 general practitioners (i.e., one third of the database group), taking into consideration the type of practice: urban (59.3%), rural (26.4%), or mixed suburban (14.3%). Other potentially relevant characteristics—age, gender, etc.—were not available in the database, and therefore the random selection could not be based on these characteristics.
We sent a letter describing the study to the randomly selected practitioners, whom we then telephoned. All general practitioners in active practice, if they agreed to participate, were eligible for inclusion, and they were randomly assigned to either an intervention group or a control group using a computer process with blocks of ten.
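The blocked assignment described above can be sketched as follows. This is a minimal illustration of permuted-block randomization, assuming five-and-five balance within each block of ten; the paper does not describe the computer process beyond the block size, so the function name and balancing scheme are our assumptions.

```python
import random

def block_randomize(n, block_size=10, seed=1999):
    """Permuted-block randomization sketch: each block of ten holds five
    'intervention' and five 'control' slots in a random order, keeping
    the two arms balanced as practitioners are enrolled."""
    rng = random.Random(seed)  # fixed seed only to make the sketch reproducible
    assignments = []
    while len(assignments) < n:
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n]

arms = block_randomize(1038)  # the 1,038 eligible practitioners
```

Every completed block contributes five practitioners to each arm, so at no point during enrollment can the two arms differ in size by more than five.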
Of the 1,218 practitioners, 32 could not be reached by telephone, 39 others were not eligible for inclusion (no longer in practice), and 109 refused to participate. The remaining group of 1,038 physicians was randomly divided into the study group and the control group, 519 in each. The type of practice differed significantly between the practitioners included and those not included (p = .048): the proportion working in suburban areas was higher among those included (15.2% versus 8.9%), and the proportion working in rural areas was lower (25.6% versus 31.1%). It was impossible to conduct telephone interviews with 56 (5.4%) of the physicians included: 40 could not be contacted despite several attempts (20 in each group), and 16 withdrew from the study (15 and 1, in the intervention and control groups, respectively). The characteristics of these 56 practitioners were not statistically significantly different from those of the practitioners who completed the study in terms of type of practice or gender.
Of the 982 remaining practitioners, 728 (74.1%) were men; their mean age on the interview day was 45 ± 6.6 years. A total of 672 (68.4%) were members of a continuing medical education association. The types of practices were urban for 580 practitioners (59.1%), suburban for 153 (15.6%), and rural for 249 (25.4%). The characteristics of the practitioners in the two groups were similar.
We designed the study to compare the level of specificity of the training needs collected from the general practitioners after they had used a personal-office-visit diary for two weeks with the level of specificity of the needs expressed by the general practitioners in the control group. We defined four levels of increasing specificity for each need recorded: 0 = need absent; 1 = need associated with the practitioner's medical specialty or specialties; 2 = need associated with a subcategory within the specialty; 3 = need specifically described. For example, epiglottitis would be scored 3, pediatric emergency would be scored 2, pediatrics would be scored 1, and no need, 0.
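The four-level scale can be illustrated with a small lookup table. The entries below simply encode the examples given in the text; in the study itself, scoring was a judgment applied by the investigators to each recorded need, not an automated lookup.

```python
# Illustrative encoding of the 0-3 specificity scale, using the
# examples from the text; real scoring was done by human raters.
SPECIFICITY_SCALE = {
    "no need": 0,               # need absent
    "pediatrics": 1,            # specialty only
    "pediatric emergency": 2,   # subcategory within the specialty
    "epiglottitis": 3,          # specifically described need
}

def specificity(need):
    """Return the specificity score for one of the example needs."""
    return SPECIFICITY_SCALE[need]
```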
The practitioners randomized to the intervention group were sent their personal-office-visit diaries within two days of their inclusion in the study. They were asked to note every day in their diaries any difficult situations encountered during their office or home visits. After this two-week observation period, the practitioners were asked to read over their notes and then to summarize the difficulties under four training needs expressed in terms of needs in general practice situations. For example, we can imagine a physician who, during the observation period, encountered difficulties in (1) knowing what diet advice to give to two or three diabetic children and (2) knowing what to advise an adolescent anorexic patient brought to the office by her worried mother. This practitioner would note in the office visit diary “Treatment of juvenile diabetes: dietetic” as the first training need and “Effective interventions in the treatment of anorexia nervosa” as the second.
Each practitioner's list of training needs was then collected during a telephone interview immediately after the two weeks in which the diary had been used. The practitioners in the control group were simply asked during a telephone interview, two weeks after inclusion in the study, to identify four medical situations for which they felt they needed training.
The training needs identified were classed on the basis of the medical field or fields concerned. Within each field, subcategories were identified on the basis of the etiology, anatomic location, and how the situation was discovered, using a thesaurus based on the tenth revision of the World Health Organization's International Classification of Diseases.7
All general practitioners who withdrew from the study, and those who were lost to follow-up, were excluded from the analyses, since we were unable to collect any data from them. To assess the level of specificity of the training needs stated by a given general practitioner, the specificity scores assigned to that practitioner's training-needs statements were tallied (i.e., so many scores of 1, so many scores of 2, etc.), and the practitioner was the statistical unit in the analysis. A practitioner was considered to have a high level of specificity if more of his or her replies were classed as 2 or 3 than were classed as 0 or 1.
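The per-practitioner outcome described above can be written out directly. The function name is ours, but the rule follows the definition in the text: more replies scored 2 or 3 than scored 0 or 1, with a tie not counting as high specificity.

```python
def has_high_specificity(scores):
    """Classify one practitioner from his or her four reply scores:
    'high specificity' means more replies scored 2 or 3 than
    scored 0 or 1 (the practitioner is the statistical unit)."""
    specific = sum(1 for s in scores if s in (2, 3))
    vague = sum(1 for s in scores if s in (0, 1))
    return specific > vague

has_high_specificity([3, 2, 2, 0])  # True: three specific replies outnumber one vague reply
has_high_specificity([2, 3, 1, 0])  # False: a 2-2 tie does not count as high specificity
```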
The characteristics of the general practitioners in both groups (gender, age, time since qualification, type of practice, membership in a continuing medical education association) were compared using Student's t-test for quantitative variables and Pearson's chi-square test for qualitative variables. A multivariate logistic regression analysis was performed to estimate the strengths of the associations between each practitioner's level of specificity and the explanatory variables (study group, practitioners' characteristics). The odds ratios (ORs) quantifying these associations were obtained by exponentiating the coefficients, β, of the logistic model.
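The conversion from logistic-regression coefficients to odds ratios can be made explicit. The coefficient and standard error below are illustrative values chosen by us, not the study's fitted estimates; a coefficient of about 0.54 happens to correspond to an OR of about 1.72, the magnitude later reported for the diary intervention.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% confidence interval from a logistic-regression
    coefficient: OR = exp(beta), CI bounds = exp(beta -/+ z * se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# beta and se here are hypothetical, for illustration only.
or_est, ci_low, ci_high = odds_ratio_ci(beta=0.54, se=0.13)
```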
WHAT WE FOUND
Univariate analyses were performed to explore the relationships between the study group (i.e., intervention or control), the practitioners' characteristics listed above, and the level of specificity of each practitioner's expressed training needs. Factors found to be significant—i.e., study group (p < .0001), type of practice (p < .0001), membership in a continuing medical education association (p = .002), age (p = .002), and gender (p = .019)—were modeled in a multivariate logistic regression analysis.
The multivariate analysis showed a statistically significant effect for the intervention (OR = 1.72, 95% confidence interval [95% CI] 1.32 to 2.23, p < .0001) on the whole sample. The use of the office-visit diary was, thus, associated with a higher level of specificity in the expressed training needs of the general practitioners who used the diary. Three other factors were found to be independently associated with higher levels of specificity in expressed training needs: urban practice (OR = 1.96, 95% CI 1.44 to 2.66, p < .0001), age under 40 years (OR = 1.84, 95% CI 1.25 to 2.73, p = .02), and membership in a continuing medical education association (OR = 1.63, 95% CI 1.23 to 2.15, p = .0007).
Of the 3,928 replies recorded (four replies per practitioner), 2,939 (74.8%) corresponded to the expression of a training need. The rest of the replies had a level of specificity scored 0 (i.e., need absent) and could not be classified as authentic training needs. Some medical fields were cited more often in the control group than in the intervention group (e.g., cardiology: 11.4% versus 6.4%; obstetrics-gynecology: 8.2% versus 5.8%; and emergency medicine: 6.5% versus 2.5%), and some were cited more often in the intervention group (e.g., psychiatry: 5.4% versus 10.6%; pediatrics: 7.5% versus 10.0%; and drug addiction: 1.2% versus 5.0%). In addition to diagnostic or therapeutic objectives, the practitioners in both groups expressed 7.5% of their training needs in terms of management of situations and behaviors in general practice.
In the absence of previously identified training needs, the use of the office-visit diary helped the practitioners in this randomly selected population to identify their training needs with better specificity. We assume that this increased specificity could lead to more accurate identification of the practitioners' real needs.
The choice of the sample base was dictated by the method selected for collecting the study data, i.e., telephone interview. Previous studies have shown that this method provides a higher response level from general practitioners than postal questionnaires do.8,9 The difference between the number of practitioners who were potentially eligible and the number included could have resulted in a selection bias, by limiting the inclusion to those who were the most enthusiastic about continuing medical education. However, we can assume that the practitioners included were reasonably representative of the initial sample because of the high inclusion level (85.2%). The three-to-one sex ratio (men:women) for the included practitioners is similar to that reported for the general population in the original professional telephone database. In addition, the only difference in the baseline characteristics between the included and non-included practitioners was that the proportion of practitioners in rural settings was higher in the group of non-included practitioners.
We believe that the results of this study strongly suggest the utility of a personal-office-visit diary for the identification of general practitioners' training needs. The choice of a two-week period to use the diary seems to have been adequate to enable four training needs to be identified. As Al-Shehri suggested after a preliminary study, a personal-office-visit diary is a flexible tool that is not perceived as a threat by the practitioners and that seems to be well adapted for the expression of their individual needs.6
Independent of the intervention, practitioners in urban practice were more precise in the expression of their training needs than were those in rural practice. One possible explanation is that practitioners in urban practices may have more time, since they have a higher proportion of office visits and fewer home visits. Practitioners who were under 40 years old and those who were members of a continuing medical education association also seemed to express their training needs with more specificity, independently of the intervention. It would seem, therefore, that a little experience helps practitioners to identify complementary needs in a specific area, and that practitioners who were members of continuing medical education associations were already familiar with this approach.
Because the general practitioners' training needs were identified in the context of an intervention trial, an intervention bias may have affected the results, so that these results might not be observed outside the context of a trial. In a controlled intervention trial, the ability to draw strong conclusions about the intervention's efficacy is always limited by the difficulty of generalizing the results to practice. Further studies will be required to provide stronger evidence. Moreover, we cannot be sure that these results can be generalized to other settings, even within France. Differences in training, and in the characteristics of general practice in different areas, lead to a wide range of practices. Despite these differences, we believe that it is highly likely that the simple use of a personal-office-visit diary could lead to the identification of practitioners' training needs in different settings.
Self-evaluation of their training needs by general practitioners is not without limitations.10 Tracey demonstrated that, sometimes, there is a large gap between the results of the self-evaluation of knowledge and real insufficiencies in knowledge.11 However, it has also been suggested that the questions used to assess “actual” knowledge should be directly relevant to the daily practices of the practitioners tested.12
The personal-office-visit diary could be useful, because the higher the specificity of the expression of training needs, the easier it is to establish a suitable training program. Such a diary could also be a helpful and inexpensive tool for the design of specific continuing medical education sessions, and it could also aid, in combination with other methods for identifying needs, in the evaluation of clinical practice. The early implementation of such initiatives, followed by repetition at regular intervals, could, therefore, help to increase the efficacy of continuing medical education and help to modify medical practice in general medicine. Further studies are required to (1) evaluate the ease of integration of the personal-office-visit diary into the daily routines of physicians and (2) determine the best conditions for the use of this tool—i.e., just after completion of residency or later—and whether this should be done individually or through a continuing medical education association.