Training of health professionals is a focus of international efforts to improve the quality of care and to reduce the mortality of the 25.8 million people1 who are infected with HIV in sub-Saharan Africa.2-4 Effective ART training is essential to prevent the development of resistance among HIV isolates and to improve the quality of care for HIV-infected patients. In addition, the funding and, perhaps more importantly, the time that professionals spend away from their clinics during training are scarce resources.
Investment in training of health professionals is guided by only a handful of studies on HIV training programs in resource-limited settings.5-9 Additional information is available on doctors' HIV knowledge and attitudes; for recent examples, please see Souville et al10 and Ayaya et al.11 Perhaps the most comprehensive information on the effectiveness of HIV training programs was provided by an evaluation of the United Nations Family Planning Association's HIV/AIDS-related interventions.12,13 Effectiveness of training was handicapped by 6 factors:12
- (1) Absence of task analysis
- (2) Poor trainee selection and high turnover once trained
- (3) Curricula and modules that were rarely custom-made and not based on needs assessments
- (4) Lack of training materials
- (5) Absence of pretest and post-training evaluation
- (6) Insufficient supervision and follow-up training
A review of randomized controlled trials of training interventions for physicians in Europe and North America showed that didactic methods, such as lectures and presentations, were not effective at changing physician practice.14 Interactive methods, such as hands-on practice sessions, case discussion, and role-play, were effective at changing physician practice and in some cases the health outcomes of patients. The AIDS Education and Training Centers in the United States emphasize interactive methods and clinical training such as consultation.15
Didactic training methods predominate in much of sub-Saharan Africa, with notable exceptions such as the Infectious Diseases Institute (IDI) at Makerere University in Kampala, Uganda, and recent clinical mentoring initiatives in some African countries.16,17 From 2002 to 2005, the IDI offered a 4-week course on comprehensive management of HIV including ART to 25 doctors, 6 times a year. Beginning in 2006, the HIV training program for doctors changed to a 3-week Core course with additional 1-week modules.18 The 6 modules are Advanced Clinical Care and ART; ART Program Management; HIV Care for Children and Preventing Mother-to-Child Transmission; HIV Prevention in Health Care Settings; Research in HIV Care; and Training of Trainers. All courses feature interactive training methods, such as case presentations and clinical rounds. The IDI also offers 1- and 2-week courses on HIV care including ART for other health professionals.
Both the comprehensive management of HIV course and Core course were designed as leadership courses for African doctors with the idea that the alumni would provide leadership in their national ART programs in the near future. Many medical training programs have elements of leadership training; see, for example, the competencies that were endorsed by the Accreditation Council for Graduate Medical Education in the US.19 The IDI courses focus on clinical leadership with the objectives of improving the clinical skills of the trainees and preparing them to share their knowledge and skills with colleagues and patients in their home hospital or clinic.
As a leadership course, the IDI course is independent of the Ugandan ART program. At the same time, the national ART program in Uganda and ART programs in other African countries are central to the success of the IDI course. The IDI course does not include funding for an intervention package to immediately apply clinical and training skills. Outcomes of the course depend on the emergence of other programs to support the alumni financially and professionally.
To evaluate the IDI course, we collected data from 4 cohorts of doctors in 2004 and 2005 on 2 sets of outcomes: (1) clinical care and (2) training activities. For clinical care, we assessed the alumni's clinical skills and whether they were actively treating patients with HIV. For training activities, we assessed whether they were actively training colleagues and patients and the number of people trained per month.
In 2004 and 2005, the IDI course on comprehensive management of HIV including ART was 4 weeks long, with half of the time devoted to classroom sessions and half to clinical sessions. The course was taught by professors from the Faculty of Medicine at Makerere University and 2 visiting professors sponsored by the Infectious Disease Society of America. Table 1 summarizes the classroom sessions from the October 2004 course, which was representative of those offered during the pilot test. In addition to the sessions listed in Table 1, trainees attended case discussions almost every day, either at case conferences with IDI's adult infectious disease clinic staff, by presenting their own cases, or by discussing cases observed during the clinical sessions.
Clinical sessions were offered at the adult infectious disease clinic, pediatric infectious disease clinic, dermatology clinic, prevention of mother-to-child transmission (PMTCT) clinic, psychiatric clinic, oncology ward, and medical ward of Mulago Hospital. Clinical sessions were also offered at several outpatient care facilities, including the Development of Antiretroviral Therapy (DART) clinic, Hospice Uganda, Joint Clinical Research Center (JCRC), Makerere University/Johns Hopkins University Laboratory, Mildmay International, Mbuya Reach Out, Nsambya Home Care, and The AIDS Support Organization (TASO). Some of the clinical sessions focused on clinical manifestations of HIV that were specific to that facility such as the oncology ward at Mulago Hospital, while others focused on models of care that were practiced by the facility, such as Mbuya Reach Out. (The Core course now distinguishes between these 2 types of clinical sessions as “clinic attachments” and “clinic visits,” respectively.)
Outcome data were collected from three sources: (1) clinical examination at the beginning and end of the course, (2) telephone survey of alumni 1 month after the course, and (3) follow-up sessions 3 to 4 months after the course.
The IDI introduced a clinical examination for doctors in the autumn of 2004 as both a clinical training activity and an evaluation activity. A visit at the IDI adult infectious disease clinic between a trainee and a patient was observed by an IDI faculty member. Four patients were recruited from the clinic and asked to return late in the day when the clinic was less busy. Each patient participated in 3 clinical examinations per day on 2 days and received a stipend to cover transportation and other expenses. The IDI faculty selected patients who represented specific conditions that were a focus of the course, such as hepatotoxicity from treatment with nevirapine. The faculty excluded patients who could not communicate during the visit, such as those with HIV-related dementia. The patients were the same for some trainees but not all cohorts. In most cases, the same faculty member observed both the pretest and posttest clinical examinations of a trainee.
The faculty assessed the trainees with a 17-item checklist. The Clinical Examination Checklist (see Appendix A) was designed by the Training Subcommittee of the Academic Alliance for AIDS Care and Prevention in Africa. The checklist was based on the “Five A's,” which guide behavior change interventions in a primary care setting: (1) assess, (2) advise, (3) agree, (4) arrange, and (5) assist.20 The checklist included 5 items on clinical care, 6 items on patient management, and 6 items on professionalism and interpersonal skills.
The faculty member assessed the trainee on each item with a 4-point scale, where 4 was “excellent - trainee demonstrates strength/skills” and 1 was “unsatisfactory - trainee did not perform task completely and requires a lot of support.” The narrow range of the scale and the defined meaning of each score were intended to minimize differences in scores across faculty.
After the visit, the faculty member discussed observations with the trainee to clarify decisions made by the trainee and explored reasons for deviating from the checklist.
A telephone survey was conducted by the AIDS Treatment Information Center (ATIC). ATIC is a “warm line” for clinicians where pharmacists respond to queries about ART and other aspects of HIV/AIDS care and prevention using a toll-free call center, as well as fax and Internet resources.21 The term “warm line” means that a call center is open during business hours (eg, weekdays from 8 AM to 5 PM), as opposed to a “hot line” that is open 24 hours a day. Doctors in Uganda generally had better access to cell phones than to land lines or e-mail. An ATIC pharmacist (R. Lukwago) interviewed 46 of 47 alumni by telephone using a structured questionnaire that included the following 4 topics: (1) clinical activities, (2) training activities, (3) monitoring the last HIV patient treated (trainee practice), and (4) brief case studies on the criteria for initiation of ART.
A subset of Ugandan alumni who practiced outside of Kampala were invited to a 1½ day session at the IDI 3 to 4 months after the course. Nine alumni from the November 2004 cohort were invited to an April 2005 session, and 7 alumni from the March 2005 cohort were invited to a June 2005 session. A clinical examination was conducted during the follow-up session, and alumni were asked about the training they conducted after completing the IDI program. The follow-up session also included case presentations and interactive sessions on recent advances in HIV care.
Four cohorts of IDI trainees were invited to participate in the evaluation: (1) October 2004, (2) November 2004, (3) March 2005, and (4) April 2005. Because of the cost of intercountry calls in Africa, the telephone survey was limited to alumni who worked in Uganda, and the analysis focused on that subset of trainees. The clinical examination sample does not include the October 2004 cohort; October was the first time that the clinical examination checklist was used, and it was substantially revised for subsequent cohorts.
Table 2 summarizes the sample size, enrollment rate, and response rate of the total sample and each cohort for the clinical examination, telephone survey, and follow-up sessions. For the clinical examination, 35 of the 54 trainees who worked in Uganda from the last 3 cohorts agreed to participate, for an enrollment rate of 65%. The 32 complete cases yielded a response rate of 91%; one observation was missing from each of the November 2004 follow-up session, the March 2005 pretest, and the March 2005 posttest. For the telephone survey, 47 of the 71 trainees who worked in Uganda in all 4 cohorts agreed to participate, for an enrollment rate of 66%. One observation was missing from the October 2004 cohort, for a response rate of 98%. For the follow-up session, clinical examination data were available for 14 of 16 alumni, for a response rate of 88%. Missing data on 2 cases from the November 2004 follow-up session and March 2005 cohort were noted above.
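The enrollment and response rates above follow directly from the reported counts:

```latex
\frac{35}{54}\approx 65\%, \qquad
\frac{32}{35}\approx 91\%, \qquad
\frac{47}{71}\approx 66\%, \qquad
\frac{46}{47}\approx 98\%, \qquad
\frac{14}{16}\approx 88\%.
```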
Descriptive statistics on the clinical examination checklist and telephone survey were analyzed with SPSS-PC. χ2 tests were performed to compare percentages, and paired sample t tests were performed to compare means.
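The paired-sample t test used for the pre/post comparisons can be sketched as follows. The scores below are hypothetical placeholders, not the study data, and the actual analysis was performed in SPSS-PC; the sketch only illustrates the statistic being computed.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-sample t statistic: mean of the pairwise differences
    divided by its standard error (sample SD over sqrt(n))."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Hypothetical pretest/posttest checklist scores for 5 trainees
# (illustrative only; not taken from the evaluation data).
pre = [2.8, 3.0, 2.6, 3.1, 2.9]
post = [3.4, 3.6, 3.2, 3.5, 3.3]
t = paired_t(pre, post)
print(round(t, 2))  # compare against a t distribution with n - 1 df
```

The statistic would then be compared with a t distribution with n - 1 degrees of freedom to obtain a P value.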
All trainees were recruited to participate in the evaluation with an informed consent process. The evaluation was approved by the University of Washington's Division of Human Subjects and the Makerere University Faculty of Medicine's Research and Ethics Committee.
Descriptive statistics on the alumni's clinical activity during the month after the course are reported in Table 3. Twenty-six percent of the alumni worked in referral hospitals, and 35% worked in district hospitals. The subsample who attended the follow-up session included a higher percentage of doctors who worked in district hospitals than the entire telephone survey sample.
Ninety-three percent of the alumni had treated HIV patients during the previous month, and those who did treated an average of 48 patients per week. Hospitals and clinics generally set aside 1 day per week for HIV clinics, so 48 would be the number of HIV patients seen on a typical HIV-clinic day. Thirty-three percent of the alumni initiated patients on ART, and those who provided this service initiated an average of 7 patients per week. Seventy-two percent of the alumni monitored patients on ART, and those who provided this service monitored an average of 20 patients per week, or 42% of their HIV patient load. These descriptive statistics are similar to those reported for the subsamples in columns 2 and 3 of Table 3.
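The 42% share of the HIV patient load is the ratio of the two weekly averages:

```latex
\frac{20 \text{ patients monitored per week}}{48 \text{ HIV patients seen per week}} \approx 0.42.
```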
As shown in Figure 1, a higher percentage of the March and April 2005 alumni reported that they initiated patients on ART than the October and November 2004 alumni, but the difference was not statistically significant. Fifty percent of the 2005 alumni and 21% of the 2004 alumni initiated patients on ART (P = 0.197). Similarly, 82% of the 2005 alumni monitored patients on ART compared to 62% of the 2004 alumni (P = 0.124).
Clinical Examination Data
Two comparisons of the clinical examination data were made to test the effect of the IDI course on the trainees' clinical capacity: (1) pretest scores from the beginning of the 1-month program were compared to the posttest scores at the end, and (2) posttest scores were compared to the follow-up scores of trainees who attended the follow-up session.
Table 4 presents the pretest and posttest results for the three cohorts of trainees with clinical examination data. On the pretest, the three areas with the highest scores were “perform assessment,” “appropriate procedures,” and “nonverbal communication with the patient.” Four items had mean scores of 3.0 or below: drug treatment, management plan, development of a follow-up plan, and documentation.
In the comparison of pretest and posttest scores, the trainees showed significant improvement on 11 of the 17 areas. The three areas with the highest scores were “establish rapport,” “professionalism,” and “appropriate procedures.” The mean score was 3.26 or higher for all of the areas.
The trainees who attended the follow-up session were a subsample of all trainees. The comparison between the posttest and follow-up scores could be biased if the subsample was generally more skilled than other trainees because the analysis would reflect differences between the samples rather than improvement in skills. To control for the sample differences, comparisons between the posttest scores and the follow-up session scores were conducted with only the subsample who attended the follow-up session (Table 5).
In Table 5, the difference between the pretest and posttest scores was significant for 5 of the 17 areas, but the smaller sample size had less power to detect significant differences. The difference between the posttest score and follow-up score was significant for 3 areas: “patient advice,” “development of a follow-up plan,” and “listen to patient.” For 2 of the areas listed in Table 5, “World Health Organization staging” and “nonverbal communication,” the mean score for the follow-up session was 3.93, which means that 13 of the 14 alumni earned an “excellent” score. For 2 additional areas, “professionalism” and “listen to patient,” the mean score for the follow-up session was 3.84, which means that 12 of the 14 alumni earned an “excellent” score.
Monitoring the Last HIV Patient Treated
The telephone survey included a series of questions about the last HIV patient treated to assess the alumni's practice of monitoring HIV patients. The results are reported in Table 6.
Several of the questions were about the patient's background to provide a context for the monitoring questions. For example, questions about sex, age, and pregnancy status were necessary to understand whether or not a doctor should monitor the patient's use of contraceptives. Among alumni whose last patient was a nonpregnant woman of childbearing age, 28% did not know whether the patient was using contraceptives.
We selected the patient's weight as a clinical end point to measure the patient's health status. Change in an HIV patient's weight is an important clinical sign of disease progression.22 HIV wasting syndrome was associated with reduced survival among HIV patients in South Africa,23 and recent evidence shows that it occurs even in patients on antiretroviral therapy.24 As shown in Table 6, 53% of the alumni did not know whether their patient's weight had changed since the last visit. For some alumni, the patient who was the subject of the interview was making a first visit, so the doctor had no prior record of the patient's weight for comparison. Even without a record, however, the alumni should be encouraged to ask the patient about changes in his/her weight in the last month. Four percent of the alumni did not know whether or not the patient had an appetite.
Brief Case Studies on the Criteria for Initiation of ART
The telephone survey included two brief case studies on initiation of ART based on the Ugandan national guidelines:25
- (1) According to the Ugandan ART guidelines, is a 2-year-old, HIV-positive orphan who lives with his/her aunt a candidate for starting ART? He/she is asymptomatic and has no CD4 count.
- (2) According to the Ugandan ART guidelines, is a 2-year-old, HIV-positive orphan who lives with a healthy grandparent a candidate for starting ART? He/she has pediatric Stage II disease and no CD4 count.
Three possible responses were read to the alumni: (1) yes, (2) no, and (3) need more information.
Forty-two percent of the 46 alumni who responded to the telephone survey answered case study 1 correctly, and 13% answered case study 2 correctly. The correct answer to case study 1 was “need more information” because the aunt's ability to help with adherence to treatment and the child's polymerase chain reaction (PCR) result were not known. The correct answer to case study 2 was “yes” because the healthy grandparent can help with adherence, and a CD4 (T-cell) count or PCR is not necessary for children with pediatric Stage II disease.
Alumni were asked about their training activities as part of the telephone survey 1 month after the course and during the follow-up session 3 to 4 months after the course. As shown in Table 3, only 35% of the trainees were engaged in training 1 month after the course. As shown in Figure 2, however, a significantly higher percentage of the March and April 2005 alumni (50%) reported conducting a training session than the October and November 2004 alumni (21%) (P = 0.037). The mean number of people trained was not significantly different between the 2005 (8) and 2004 (9) alumni.
During the follow-up sessions, when alumni were asked about training activities, every one of the 16 alumni had conducted one or more training sessions. The mean number of people trained by the November 2004 alumni was 82 during the 4 months after the course, and the mean trained by the March 2005 alumni was 57 during the 3 months after the course, or an average of 20 people per month. At this rate, for every doctor who completed the course, 100 people would be trained within 5 months.
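The average of 20 people trained per month and the 5-month projection follow from the cohort means:

```latex
\frac{1}{2}\left(\frac{82}{4} + \frac{57}{3}\right) = \frac{20.5 + 19}{2} \approx 20 \text{ people per month},
\qquad 20 \times 5 = 100.
```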
Note that training activities of the subsample of alumni who attended the follow-up session appeared to be representative of the other trainees. As shown in Table 3, 36% of the subsample who attended the follow-up session conducted a training session 1 month after the course, compared to 35% of the full sample. During the same time period, 50% of the March 2005 alumni who attended the follow-up session and 25% of the November 2004 alumni who attended the follow-up session conducted a training session, which is comparable to the results for the 2005 and 2004 trainees in Figure 2.
The effects of the IDI's comprehensive HIV course were demonstrated by four outcomes: (1) clinical activities, (2) clinical skills, (3) monitoring of HIV patients, and (4) training activities. Considering clinical activities, 93% of the IDI alumni (n = 46) were treating patients with HIV 1 month after the course. This outcome compares favorably to an evaluation of a 6-week WHO training program on HIV in which 75% of the doctors were treating patients with HIV 2 to 5 years after the course.8 In the context of Uganda in recent years, it is likely that IDI alumni were already treating patients with HIV before the course, and this success reflects the quality of the process by which trainees were selected. In other contexts, collecting information on clinical activities before and after the course could distinguish between 2 aspects of training: (1) the quality of the selection process and (2) the course's effect on recruiting physicians to treat HIV patients.
Measures of ARV care and training activities showed that the 2004 alumni were less active in ARV care and training than the 2005 alumni 1 month after the course. Among the subsample who attended the follow-up session, however, the 2004 alumni were training as actively as the 2005 alumni 3 to 4 months after the course. Given that the training activities of the subsample who attended the follow-up session were representative of the full sample, the difference between the 1-month and 3- to 4-month follow-up was attributable to the schedule for rolling out the Ugandan ART program. In 2005, the national ART program was extended from district hospitals to higher-level health centers (health center IV), which included funding to train the staff at the health center IVs.26 During the follow-up sessions, both the 2004 and 2005 alumni reported planning and implementing the district-level training programs.
A leadership course falls between the types of training programs that have been the focus of much of the literature on evaluation of training outcomes. In the economics literature, education and training programs have focused on individuals, and outcomes have been measured primarily by the increases in employment and earnings that accrue to an individual over his/her lifetime.27-29 In the global health literature, training has focused on a specific type of care and was often part of an intervention package that included supervision, supplies, and drugs; for examples, please see Gilson et al30 and Sweat et al.31 Outcomes were measured primarily by improvement in the quality of care, with the expectation that improvements would be observed immediately after the intervention and persist for 6 to 12 months.
The appropriate time to evaluate a leadership course is debatable and probably falls between the evaluation timings used in the economics and global health literatures. An early evaluation conducted after a course but before a national program is implemented may understate the course's effects. Several observations beginning 1 to 3 months after a course and continuing on a quarterly or semiannual basis for 2 to 5 years may provide the most accurate data on outcomes. To measure the effect on the alumni's professional life, it may also be helpful to collect data on employment and earnings in addition to their clinical and training activities over this time period.
As for clinical skills, the trainees' skills improved between the beginning and end of the IDI course as measured by the Clinical Examination Checklist. Among the subsample of alumni who attended the follow-up session, clinical skills continued to improve for 3 to 4 months after the course. These additional improvements may reflect continued assimilation of knowledge and development of skills after the course, or they may reflect the effects of additional training or decision-support interventions by other programs. The three areas with significant improvement, “patient advice,” “development of a follow-up plan,” and “listen to patient,” could have required more experience with patients to develop. These results support the recommendation to collect outcome data over a longer period of time and to control for additional training after the IDI course.
The clinical examination measured changes in clinical skills and served as a skill-building activity, but concerns have been raised about the reliability of ratings across observers when clinical examinations were used to evaluate residents in the United States.32,33 Some researchers consider unannounced (or blinded) standardized patient encounters to be the “gold standard” for measuring the quality of clinical care.34 Unblinded standardized patient encounters have been used to evaluate an HIV clinical skills training for second-year residents in internal medicine35 and an HIV risk-assessment and counseling workshop for medical students in the United States.36 We sought to improve the accuracy of the ratings in two ways: (1) the clinical examination checklist provided a structured form for reporting observations, and (2) clinical skills were assessed by the IDI faculty. In research in the United States, structured forms provided more accurate ratings than open-ended questions, and faculty ratings were more accurate than ratings by doctors in community hospitals.32,33 It may be possible to improve consistency across faculty by recruiting three or four IDI faculty members to perform the clinical examinations for all cohorts of trainees. The Clinical Examination Checklist can also be used to mentor trainees and alumni in their home clinics or hospitals.
Considering the monitoring of HIV patients, data from the telephone survey showed room for improvement in monitoring contraceptive use among female patients and changes in weight among all patients. In addition, the alumni's lack of knowledge of the Ugandan guidelines for initiation of ART for children underscored the need to exploit the opportunities created by prevention of mother-to-child transmission programs to initiate early interventions for HIV-exposed and HIV-infected children.37
The data on monitoring HIV patients were based on self-reports about the last HIV patient treated. Self-reports on practice could potentially be less accurate than observation of clinical skills by faculty. A questionnaire on the last patient treated, however, was used to evaluate the effect of the French recommendations for non-occupational post-exposure prophylaxis.38 Self-report on clinical practices may serve as a measure of best practices rather than actual practice; some doctors who reported that they knew about a change in the patients' weight may not have actually known, but doctors who reported that they did not know were unlikely to actually know. The last-patient-treated method warrants further research on its validity given that follow-up data on alumni were much less expensive to collect from telephone surveys than from clinical examinations.
Two limitations of the pilot test were noted above: (1) inter-rater reliability across faculty who observed the clinical examinations and (2) accuracy of the self-reported practice of monitoring the last HIV patient treated. In addition to these limitations, the sample size was small, especially the subsample who attended the follow-up session. The subsample size was not large enough to detect significant differences between the pretest and posttest results that were evident in the Uganda sample and may have missed significant differences between the posttest and follow-up session. The IDI continues to conduct the clinical examination at the beginning and end of the Core course, so data from a larger sample will be available for analysis in the future. On the basis of pilot test results, the IDI plans to conduct a quarterly telephone survey with a sample of alumni, which will provide a more complete picture of clinical and training activities over the course of the national ART program in Uganda.
A final limitation is that clinical activities and skills did not necessarily represent actual clinical practice. Recent research in Tanzania showed that neither direct observation of one consultation nor vignettes, which in their research were similar to unblinded standardized patients, represented actual practice in a resource-limited setting.39 Data on actual clinical practice would be challenging to collect because a leadership course that is independent of national ART programs could not rely on regular reporting on the quality of HIV care. If those data were available from a national ART program, they would represent facility performance rather than individual practice. For specialized evaluations of a leadership course, the best source of information on actual practice may be a series of blinded standardized patients with some visits to the trainees' home facility before the course and some after. For routine evaluations, the clinical examinations and telephone survey are a sustainable source of information on clinical activities and skills.
The IDI course clearly improved the clinical skills of the doctors who completed it, and the alumni were actively providing HIV care and training. Their capacity for treating HIV-infected children and their practice of monitoring HIV patients could be improved in the future.
The authors thank the members of the Training Subcommittee of the Academic Alliance for AIDS Care and Prevention in Africa who designed the Clinical Examination Checklist: Drs. Harriet Mayanja and Edward Mbidde of Makerere University in Uganda, Robert Colebunders of the Institute of Tropical Medicine in Antwerp, and Michael Scheld of the University of Virginia, and Dr. Moses Kamya and Ms. Cecelia Nakitto. We are grateful to Drs. Joshua Baalwa, Sabrina Bakeera-Kitaka, Grace Ndeezi, and William Worodria and to Ms. Sylvia Ntege of Makerere University, who helped with the follow-up training sessions. We acknowledge Dr. David Serwadda and Ms. Edith Bagambe of Makerere University and Drs. Ceppie Merry and Peter Coakley of Trinity College, University of Dublin in Ireland, for their guidance and management of the AIDS Treatment Information Center. Finally, we are grateful to Drs. Winston Cavert of the University of Minnesota, Gabrielle O'Malley of I-TECH, and Brant Viner of the Boston Medical Center for advice on measuring the outcomes of clinical training, two anonymous reviewers for comments on the manuscript, and Bobbi Nodell of I-TECH for editing the manuscript.
2. United States Office of the Global AIDS Coordinator. Action Today, a Foundation for Tomorrow: The President's Emergency Plan for AIDS Relief Second Annual Report to Congress. Available at: http://www.state.gov/s/gac/rl/c14960.htm. Accessed April 30, 2006.
3. United States Office of the Global AIDS Coordinator. Engendering Bold Leadership: The President's Emergency Plan for AIDS Relief First Annual Report to Congress. Available at: http://www.state.gov/s/gac/rl/c14960.htm. Accessed April 30, 2006.
5. Misra A, Garg S, Singh MM, et al. Effectiveness of training on the knowledge of HIV/AIDS among doctors in Delhi. J Commun Dis
6. Buskin SE, Lin L, Houyuan Y, et al. HIV/AIDS knowledge and attitudes in Chinese medical professionals and students before and after an informational lecture on HIV/AIDS. J Public Health Manag Pract
7. Zell SC. An evaluation of teaching methods utilized during an HIV miniresidency course for Thai physicians. AIDS Educ Prev
8. Stiernborg M. Impact evaluation of an international training course on HIV/AIDS. AIDS Care
9. Sherr L, Christie G, Sher R, et al. Evaluation of the effectiveness of AIDS training and information courses. S Afr Med J
10. Souville M, Msellati P, Carrieri M-P, Brou H, Tape G, Dakouri G, Vidal L, and the Cote D'Ivoire HIV Drug Access Initiative Socio-Behavioural Evaluation Group. Physicians' knowledge and attitudes toward HIV care in the context of the UNAIDS/Ministry of Health Drug Access Initiative in Cote d'Ivoire. AIDS. 2003;17(Suppl 3):S79-S86.
11. Ayaya SO, Sitienei J, Odero W, et al. Knowledge, attitudes and practices of private medical practitioners on tuberculosis among HIV/AIDS patients in Eldoret, Kenya. East Afr Med J
12. United Nations Population Fund, Office of Oversight and Evaluation. UNFPA Support to HIV/AIDS-Related Interventions. Evaluation Report, no date;16:25-30.
13. United Nations Population Fund, Office of Oversight and Evaluation. UNFPA Support to HIV/AIDS-Related Interventions, Part II: HIV/AIDS-Related Training. Evaluation Findings, 1999;12:1-3. Available at: http://www.unfpa.org/monitoring/pdf/n-issue12.pdf. Accessed April 30, 2006.
14. Davis D, O'Brien MAT, Freemantle N, et al. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA
15. U.S. Department of Health and Human Services, Health Resources and Services Administration, HIV/AIDS Bureau. Programs: The AIDS Education and Training Centers (AETC). Available at: http://hab.hrsa.gov/programs/factsheets/aetc.htm. Accessed April 30, 2006.
17. Wester CW, Bussmann H, Avalos A, et al. Establishment of a public antiretroviral treatment clinic for adults in urban Botswana: lessons learned. Clin Infect Dis
19. Accreditation Council for Graduate Medical Education. Outcomes Project. Available at: www.acgme.org/Outcome. Accessed July 23, 2006.
20. Whitlock EP, Orleans CT, Pender N, et al. Evaluating primary care behavioral counseling interventions. Am J Prev Med
22. Wheeler DA, Gibert CL, Launer CA, et al, and the Terry Beirn Community Programs for Clinical Research on AIDS. Weight loss as a predictor of survival and disease progression in HIV infection. J Acquir Immune Defic Syndr Hum Retrovirol
23. Post FA, Motasim B, Wood R, et al. AIDS in Africa-survival according to AIDS-defining illness. S Afr Med J
24. Wanke CA, Silva M, Knox TA, et al. Weight loss and wasting remain common complications in individuals infected with human immunodeficiency virus in the era of highly active antiretroviral therapy. Clin Infect Dis
25. Katabira ET, Kamya MR, eds. Antiretroviral Treatment and Care Guidelines for Adults and Children. Kampala, Uganda: Ministry of Health, Republic of Uganda; 2003.
26. Amolo Okero F, Aceng E, Madraa E, et al. Scaling up antiretroviral therapy: experience in Uganda. In: Perspectives and Practice in Antiretroviral Treatment. Geneva: World Health Organization; 2003. Available at: http://www.who.int/hiv/amds/case3.pdf. Accessed April 30, 2006.
27. Heckman JJ, Hotz VJ, Dabos M. Do we need experimental data to evaluate the impact of manpower training on earnings? Eval Rev
28. LaLonde R, Maynard R. How precise are evaluations of employment and training programs? Evidence from a field experiment. Eval Rev
29. Becker GS. Human Capital. New York: National Bureau of Economic Research; 1975.
30. Gilson L, Mkanje R, Grosskurth H, et al. Cost-effectiveness of improved treatment services for sexually transmitted diseases in preventing HIV-1 infection in Mwanza Region, Tanzania. Lancet
31. Sweat M, Gregorich S, Sangiwa G, et al. Cost-effectiveness of voluntary HIV-1 counselling and testing in reducing sexual transmission of HIV-1 in Kenya and Tanzania. Lancet
32. Noel GL, Herbers JE, Caplo MP, et al. How well do internal medicine faculty members evaluate the clinical skills of residents? Ann Intern Med
33. Herbers JE, Noel GL, Cooper GS, et al. How accurate are faculty evaluations of clinical competence? J Gen Intern Med
34. Luck J, Peabody JW. Using standardized patients to measure physicians' practice: validation study using audio recordings. BMJ
35. Dieckhaus KD, Vontell S, Pfeiffer C, et al. The use of standardized patient encounters for evaluation of a clinical education program on the development of HIV/AIDS-related clinical skills. J HIV AIDS Soc Serv
36. Haist SA Jr, Griffith IC, Hoellein AR, et al. Improving students' sexual history inquiry and HIV counseling with an interactive workshop using standardized patients. J Gen Intern Med
38. Laporte A, Jourdan N, Bouvet E, et al. Post-exposure prophylaxis after non-occupational HIV exposure: impact of recommendations on physicians' experiences and attitudes. AIDS
39. Leonard KL, Masatu MC. The use of direct clinical observation and vignettes for health services quality evaluation in developing countries. Soc Sci Med